Results 1 - 20 of 59
1.
Dis Colon Rectum ; 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38959453

ABSTRACT

BACKGROUND: Iatrogenic ureteral injury is a serious complication of abdominopelvic surgery. Identifying the ureters intraoperatively is essential to avoid iatrogenic ureteral injury. Here, we developed a model that may minimize this complication. IMPACT OF INNOVATION: We applied a deep learning-based semantic segmentation algorithm to the ureter recognition task and developed a deep learning model called UreterNet. This study aimed to verify whether the ureters could be identified in videos of laparoscopic colorectal surgery. TECHNOLOGY, MATERIALS, AND METHODS: Semantic segmentation of the ureter area was performed using a convolutional neural network-based approach. Feature Pyramid Networks were used as the convolutional neural network architecture for semantic segmentation. Precision, recall, and the Dice coefficient were used as the evaluation metrics in this study. PRELIMINARY RESULTS: We created 14,069 annotated images from 304 videos, with 9,537, 2,266, and 2,266 images in the training, validation, and test datasets, respectively. Concerning ureter recognition performance, precision, recall, and the Dice coefficient for the test data were 0.712, 0.722, and 0.716, respectively. Regarding the real-time performance on recorded videos, it took 71 ms for UreterNet to infer all pixels corresponding to the ureter from a single still image and 143 ms to output and display the inferred results as a segmentation mask on the laparoscopic monitor. CONCLUSIONS AND FUTURE DIRECTIONS: UreterNet is a noninvasive method for identifying the ureter in videos of laparoscopic colorectal surgery and can potentially improve surgical safety. Although this could lead to the development of an image-navigated surgical system, it is necessary to verify whether UreterNet reduces the occurrence of iatrogenic ureteral injury.
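The precision, recall, and Dice values reported for UreterNet can be computed from binary masks in a few lines of NumPy; a minimal sketch (the array values are illustrative, not from the study):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise precision, recall, and Dice coefficient for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # predicted ureter, truly ureter
    fp = np.logical_and(pred, ~truth).sum()  # predicted ureter, background
    fn = np.logical_and(~pred, truth).sum()  # missed ureter pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, dice

# Toy 2x3 masks: 2 true positives, 1 false positive, 1 false negative.
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
p, r, d = segmentation_metrics(pred, truth)  # each equals 2/3 here
```

The same three metrics apply to any binary segmentation task, which is why they recur across the studies in this list.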

2.
Gastric Cancer ; 27(1): 187-196, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38038811

ABSTRACT

BACKGROUND: Gastric surgery involves numerous surgical phases; however, its steps can be clearly defined. Deep learning-based surgical phase recognition can promote stylization of gastric surgery with applications in automatic surgical skill assessment. This study aimed to develop a deep learning-based surgical phase-recognition model using multicenter videos of laparoscopic distal gastrectomy, and to examine the feasibility of automatic surgical skill assessment using the developed model. METHODS: Surgical videos from 20 hospitals were used. Laparoscopic distal gastrectomy was defined and annotated into nine phases, and a deep learning-based image classification model was developed for phase recognition. We examined whether the developed model's output, including the number of frames in each phase and the adequacy of surgical field development during the phase of supra-pancreatic lymphadenectomy, correlated with the manually assigned skill assessment score. RESULTS: The overall accuracy of phase recognition was 88.8%. Regarding surgical skill assessment based on the number of frames during the phases of lymphadenectomy of the left greater curvature and reconstruction, the numbers of frames in the high-score group were significantly lower than those in the low-score group (829 vs. 1,152, P < 0.01; 1,208 vs. 1,586, P = 0.01, respectively). The model's output score for the adequacy of surgical field development was significantly higher in the high-score group than in the low-score group (0.975 vs. 0.970, P = 0.04). CONCLUSION: The developed model had high accuracy in phase-recognition tasks and has the potential for application in automatic surgical skill assessment systems.
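The "number of frames in each phase" signal used above for skill assessment reduces to counting frame-level classifier outputs; a minimal sketch (the phase names and counts are illustrative, not from the study):

```python
from collections import Counter

# Nine phases, as in the annotated laparoscopic distal gastrectomy protocol;
# the label strings here are hypothetical placeholders.
PHASES = ["phase_%d" % i for i in range(9)]

def frames_per_phase(frame_predictions):
    """Count how many frames the classifier assigned to each surgical phase."""
    counts = Counter(frame_predictions)
    return {phase: counts.get(phase, 0) for phase in PHASES}

# Toy per-frame prediction stream: 10 + 25 + 5 frames.
preds = ["phase_0"] * 10 + ["phase_1"] * 25 + ["phase_2"] * 5
counts = frames_per_phase(preds)
```

A shorter duration in a phase (fewer frames) was the proxy for higher skill in the study's comparison.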


Subject(s)
Laparoscopy, Stomach Neoplasms, Humans, Stomach Neoplasms/surgery, Laparoscopy/methods, Gastroenterostomy, Gastrectomy/methods
3.
Surg Endosc ; 38(1): 171-178, 2024 01.
Article in English | MEDLINE | ID: mdl-37950028

ABSTRACT

BACKGROUND: In laparoscopic right hemicolectomy (RHC) for right-sided colon cancer, accurate recognition of the vascular anatomy is required for appropriate lymph node harvesting and safe operative procedures. We aimed to develop a deep learning model that enables the automatic recognition and visualization of major blood vessels in laparoscopic RHC. MATERIALS AND METHODS: This was a single-institution retrospective feasibility study. Semantic segmentation of three vessel areas, including the superior mesenteric vein (SMV), ileocolic artery (ICA), and ileocolic vein (ICV), was performed using the developed deep learning model. The Dice coefficient, recall, and precision were utilized as evaluation metrics to quantify the model performance after fivefold cross-validation. The model was further qualitatively appraised by 13 surgeons, based on a grading rubric, to assess its potential for clinical application. RESULTS: In total, 2624 images were extracted from 104 videos of laparoscopic colectomy for right-sided colon cancer, and the pixels corresponding to the SMV, ICA, and ICV were manually annotated and utilized as training data. SMV recognition was the most accurate, with all three evaluation metrics having values above 0.75, whereas the recognition accuracy of the ICA and ICV ranged from 0.53 to 0.57 across the three evaluation metrics. Additionally, all 13 surgeons gave acceptable ratings for the possibility of clinical application in rubric-based quantitative evaluations. CONCLUSION: We developed a deep learning-based vessel segmentation model capable of feasible identification and visualization of major blood vessels in RHC. This model may help surgeons achieve reliable vessel visualization and navigation.
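The fivefold cross-validation used here (and in several of the other studies below) partitions the data so every item is tested exactly once; a minimal sketch of the split logic (the data and seed are illustrative):

```python
import random

def five_fold_splits(items, seed=0):
    """Yield (train, test) pairs; each fold serves once as the held-out test set."""
    items = list(items)
    random.Random(seed).shuffle(items)
    folds = [items[i::5] for i in range(5)]  # five roughly equal folds
    for i in range(5):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test

# Toy dataset of 10 video IDs.
videos = list(range(10))
splits = list(five_fold_splits(videos))
```

In practice the split is usually done per video (not per frame) so that frames from one operation never appear in both train and test sets.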


Subject(s)
Colonic Neoplasms, Deep Learning, Laparoscopy, Humans, Colonic Neoplasms/diagnostic imaging, Colonic Neoplasms/surgery, Colonic Neoplasms/blood supply, Retrospective Studies, Laparoscopy/methods, Colectomy/methods
4.
Surg Endosc ; 38(2): 1088-1095, 2024 02.
Article in English | MEDLINE | ID: mdl-38216749

ABSTRACT

BACKGROUND: The precise recognition of liver vessels during liver parenchymal dissection is a crucial technique in laparoscopic liver resection (LLR). This retrospective feasibility study aimed to develop artificial intelligence (AI) models to recognize liver vessels in LLR, and to evaluate their accuracy and real-time performance. METHODS: Images from LLR videos were extracted, and the hepatic veins and Glissonean pedicles were labeled separately. Two AI models were developed to recognize liver vessels: the "2-class model," which recognized both hepatic veins and Glissonean pedicles as equivalent vessels and distinguished them from the background class, and the "3-class model," which recognized them all separately. The Feature Pyramid Network was used as the neural network architecture for both models in their semantic segmentation tasks. The models were evaluated using fivefold cross-validation tests, and the Dice coefficient (DC) was used as an evaluation metric. Ten gastroenterological surgeons also evaluated the models qualitatively through a rubric. RESULTS: In total, 2421 frames from 48 video clips were extracted. The mean DC value of the 2-class model was 0.789, with a processing time of 0.094 s. The mean DC values for the hepatic vein and the Glissonean pedicle in the 3-class model were 0.631 and 0.482, respectively. The average processing time for the 3-class model was 0.097 s. Qualitative evaluation by surgeons revealed that false-negative and false-positive ratings in the 2-class model averaged 4.40 and 3.46, respectively, on a five-point scale, while the false-negative, false-positive, and vessel differentiation ratings in the 3-class model averaged 4.36, 3.44, and 3.28, respectively, on a five-point scale. CONCLUSION: We successfully developed deep-learning models that recognize liver vessels in LLR with high accuracy and sufficient processing speed. These findings suggest the potential of a new real-time automated navigation system for LLR.


Subject(s)
Artificial Intelligence, Laparoscopy, Humans, Retrospective Studies, Liver/diagnostic imaging, Liver/surgery, Liver/blood supply, Hepatectomy/methods, Laparoscopy/methods
5.
Surg Today ; 2024 May 14.
Article in English | MEDLINE | ID: mdl-38740574

ABSTRACT

We developed a laparoscopic sigmoidectomy simulator called Sigmaster. The sigmoid colon simulator was designed to accurately reproduce the anatomical layer structure and the arrangement of characteristic organs in each layer, and to be conductive so that energy devices can be used. Dry polyester fibers were used to reproduce the layered structures, which included characteristic blood vessels, nerve sheaths, and intestinal tracts. The adhesive strength of the layers was controlled to allow realistic peeling techniques. The features of Sigmaster are illustrated through a comparison of a simulated sigmoidectomy using Sigmaster with actual surgery. Sigmaster is a training device that closely reproduces the membrane structures of the human body and allows surgeons to experience the entire laparoscopic sigmoidectomy process.

6.
Ann Surg ; 278(2): e250-e255, 2023 08 01.
Article in English | MEDLINE | ID: mdl-36250677

ABSTRACT

OBJECTIVE: To develop a machine learning model that automatically quantifies the spread of blood in the surgical field using intraoperative videos of laparoscopic colorectal surgery and evaluate whether the index measured with the developed model can be used to assess tissue handling skill. BACKGROUND: Although skill evaluation is crucial in laparoscopic surgery, existing evaluation systems suffer from evaluator subjectivity and are labor-intensive. Therefore, automatic evaluation using machine learning is potentially useful. MATERIALS AND METHODS: In this retrospective experimental study, we used training data with annotated labels of blood or non-blood pixels on intraoperative images to develop a machine learning model to classify pixel RGB values into blood and non-blood. The blood pixel count per frame (the total number of blood pixels throughout a surgery divided by the number of frames) was compared among groups of surgeons with different tissue handling skills. RESULTS: The overall accuracy of the machine learning model for the blood classification task was 85.7%. The high tissue handling skill group had the lowest blood pixel count per frame, and the novice surgeon group had the highest count (mean [SD]: high tissue handling skill group 20972.23 [19287.05] vs. low tissue handling skill group 34473.42 [28144.29] vs. novice surgeon group 50630.04 [42427.76], P <0.01). The difference between any 2 groups was significant. CONCLUSIONS: We developed a machine learning model to measure blood pixels in laparoscopic colorectal surgery images using RGB information. The blood pixel count per frame measured with this model significantly correlated with surgeons' tissue handling skills.
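The paper's classifier was trained on annotated RGB values; as a stand-in for that learned model, a simple red-dominance rule illustrates the pipeline of per-pixel classification followed by the blood-pixel-count-per-frame index (the threshold and frames below are hypothetical, not the study's):

```python
import numpy as np

def classify_blood_pixels(frame, red_margin=40):
    """Label a pixel as blood when its red channel strongly dominates green and blue.
    This threshold rule is an illustrative stand-in for the trained RGB classifier."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    return (r - np.maximum(g, b)) > red_margin

def blood_pixel_count_per_frame(frames):
    """Total blood pixels across a surgery's frames, divided by the frame count."""
    total = sum(int(classify_blood_pixels(f).sum()) for f in frames)
    return total / len(frames)

# Two toy 4x4 frames: one dark (no blood), one strongly red (all 16 pixels blood).
dark = np.zeros((4, 4, 3), dtype=np.uint8)
red = dark.copy()
red[..., 0] = 200
avg = blood_pixel_count_per_frame([dark, red])  # (0 + 16) / 2 = 8.0
```

A lower index, as the study found for the high tissue handling skill group, means less blood spread per frame on average.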


Subject(s)
Colorectal Surgery, Laparoscopy, Humans, Retrospective Studies, Clinical Competence, Laparoscopy/methods, Machine Learning
7.
Surg Endosc ; 37(2): 835-845, 2023 02.
Article in English | MEDLINE | ID: mdl-36097096

ABSTRACT

BACKGROUND: Prioritizing patient health is essential, and given the risk of mortality, surgical techniques should be objectively evaluated. However, there is no comprehensive cross-disciplinary system that evaluates skills across all aspects among surgeons of varying levels. Therefore, this study aimed to uncover universal surgical competencies by decomposing and reconstructing specific descriptions in operative performance assessment tools, as the basis for building an automated evaluation system using computer vision and machine learning-based analysis. METHODS: The study participants were primarily expert surgeons in the gastrointestinal surgery field, and the methodology comprised data collection, thematic analysis, and validation. For the data collection, participants identified global operative performance assessment tools according to detailed inclusion and exclusion criteria. Thereafter, thematic analysis was used to conduct detailed analyses of the descriptions in the tools, where specific rules were coded, integrated, and discussed to obtain high-level concepts, namely, "Skill meta-competencies," which were recategorized for data validation and reliability assurance. Nine assessment tools were selected based on participant criteria. RESULTS: In total, 189 types of skill performances were extracted from the nine tool descriptions and organized into the following five competencies: (1) Tissue handling, (2) Psychomotor skill, (3) Efficiency, (4) Dissection quality, and (5) Exposure quality. The evolution of these competencies' evaluation targets and purposes over time was assessed; the results showed relatively high reliability, indicating that the categorization was reproducible. The inclusion of basic (tissue handling, psychomotor skill, and efficiency) and advanced (dissection quality and exposure quality) skills in these competencies enhanced the tools' comprehensiveness.
CONCLUSIONS: The competencies identified to help surgeons formalize and implement tacit knowledge of operative performance are highly reproducible. These results can be used to form the basis of an automated skill evaluation system and help surgeons improve the provision of care and training, consequently, improving patient prognosis.


Subject(s)
Internship and Residency, Surgeons, Humans, Reproducibility of Results, Educational Measurement, Data Collection, Clinical Competence
8.
BMC Surg ; 23(1): 121, 2023 May 11.
Article in English | MEDLINE | ID: mdl-37170107

ABSTRACT

BACKGROUND: Anastomotic leakage has been reported to occur when the load on the anastomotic site exceeds the resistance created by sutures, staples, and early scars. It may be possible to avoid anastomotic leakage by covering and reinforcing the anastomotic site with a biocompatible material. The aim of this study was to evaluate the safety and feasibility of a novel external reinforcement device for gastrointestinal anastomosis in an experimental model. METHODS: A single pig was used in this non-survival study, and end-to-end anastomoses were created in six small bowel loops by a single-stapling technique using a circular stapler. Three of the six anastomoses were covered with the novel external reinforcement device. Air was injected, a pressure test of each anastomosis was performed, and the bursting pressure was measured. RESULTS: Reinforcement of the anastomotic site with the device was successfully performed in all anastomoses. The bursting pressure was 76.1 ± 5.7 mmHg in the control group and 126.8 ± 6.8 mmHg in the device group. The bursting pressure in the device group was significantly higher than that in the control group (p = 0.0006). CONCLUSIONS: The novel external reinforcement device was safe and feasible for reinforcing the anastomoses in the experimental model.


Subject(s)
Anastomotic Leak, Small Intestine, Swine, Animals, Anastomotic Leak/prevention & control, Anastomotic Leak/surgery, Surgical Anastomosis/methods, Small Intestine/surgery, Surgical Stapling/methods, Cicatrix
9.
Dis Colon Rectum ; 65(5): e329-e333, 2022 05 01.
Article in English | MEDLINE | ID: mdl-35170546

ABSTRACT

BACKGROUND: Total mesorectal excision is the standard surgical procedure for rectal cancer because it is associated with low local recurrence rates. To the best of our knowledge, this is the first study to use an image-guided navigation system with total mesorectal excision. IMPACT OF INNOVATION: The impact of innovation is the development of a deep learning-based image-guided navigation system for areolar tissue in the total mesorectal excision plane. Such a system might be helpful to surgeons because areolar tissue can be used as a landmark for the appropriate dissection plane. TECHNOLOGY, MATERIALS, AND METHODS: This was a single-center experimental feasibility study involving 32 randomly selected patients who had undergone laparoscopic left-sided colorectal resection between 2015 and 2019. Deep learning-based semantic segmentation of areolar tissue in the total mesorectal excision plane was performed. Intraoperative images capturing the total mesorectal excision scene, extracted from left colorectal laparoscopic resection videos, were used as training data for the deep learning model. Six hundred annotated images were created from 32 videos, with 528 images in the training and 72 images in the test data sets. The experimental feasibility study was conducted at the Department of Colorectal Surgery, National Cancer Center Hospital East, Chiba, Japan. The Dice coefficient was used to evaluate semantic segmentation accuracy for areolar tissue. PRELIMINARY RESULTS: The developed semantic segmentation model helped locate and highlight the areolar tissue area in the total mesorectal excision plane. The accuracy and generalization performance of deep learning models depend mainly on the quantity and quality of the training data. This study had only 600 images; thus, more images for training are necessary to improve the recognition accuracy.
CONCLUSION AND FUTURE DIRECTIONS: We successfully developed a total mesorectal excision plane image-guided navigation system based on an areolar tissue segmentation approach with high accuracy. This may aid surgeons in recognizing the total mesorectal excision plane for dissection.


Subject(s)
Colorectal Surgery, Digestive System Surgical Procedures, Laparoscopy, Rectal Neoplasms, Artificial Intelligence, Humans, Laparoscopy/methods, Rectal Neoplasms/diagnostic imaging, Rectal Neoplasms/surgery, Rectum/diagnostic imaging, Rectum/surgery
10.
Surg Endosc ; 36(8): 6105-6112, 2022 08.
Article in English | MEDLINE | ID: mdl-35764837

ABSTRACT

BACKGROUND: Recognition of the inferior mesenteric artery (IMA) during colorectal cancer surgery is crucial to avoid intraoperative hemorrhage and define the appropriate lymph node dissection line. This retrospective feasibility study aimed to develop an IMA anatomical recognition model for laparoscopic colorectal resection using deep learning, and to evaluate its recognition accuracy and real-time performance. METHODS: A complete multi-institutional surgical video database, LapSig300, was used for this study. Intraoperative videos of 60 patients who underwent laparoscopic sigmoid colon resection or high anterior resection were randomly extracted from the database and included. The deep learning-based semantic segmentation accuracy and real-time performance of the developed IMA recognition model were evaluated using the Dice similarity coefficient (DSC) and frames per second (FPS), respectively. RESULTS: In a fivefold cross-validation conducted using 1200 annotated images for the IMA semantic segmentation task, the mean DSC value was 0.798 (± 0.0161 SD) and the maximum DSC was 0.816. The proposed deep learning model operated at a speed of over 12 FPS. CONCLUSION: To the best of our knowledge, this is the first study to evaluate the feasibility of real-time vascular anatomical navigation during laparoscopic colorectal surgery using a deep learning-based semantic segmentation approach. This experimental study was conducted to confirm the feasibility of our model; therefore, its safety and usefulness were not verified in clinical practice. However, the proposed deep learning model demonstrated a relatively high accuracy in recognizing the IMA in intraoperative images. The proposed approach has potential application in image navigation systems for unfixed soft tissues and organs during various laparoscopic surgeries.
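The FPS figure used above to judge real-time performance is just throughput over a batch of frames; a minimal timing sketch (the dummy inference function and its 1 ms delay are illustrative stand-ins for a real model):

```python
import time

def measure_fps(infer, frames):
    """Average frames per second achieved by `infer` over a batch of frames."""
    start = time.perf_counter()
    for frame in frames:
        infer(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

def fake_infer(frame):
    time.sleep(0.001)  # stand-in for one segmentation forward pass
    return frame

fps = measure_fps(fake_infer, list(range(20)))
```

For video navigation, the threshold that matters is whether the measured FPS keeps up with the camera's frame rate; the study reports over 12 FPS for its model.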


Subject(s)
Laparoscopy, Inferior Mesenteric Artery, Sigmoid Colon/blood supply, Humans, Computer-Assisted Image Processing, Laparoscopy/methods, Lymph Node Excision/methods, Inferior Mesenteric Artery/surgery, Retrospective Studies
11.
Surg Endosc ; 36(2): 1143-1151, 2022 02.
Article in English | MEDLINE | ID: mdl-33825016

ABSTRACT

BACKGROUND: Dividing a surgical procedure into a sequence of identifiable and meaningful steps facilitates intraoperative video data acquisition and storage. These efforts are especially valuable for technically challenging procedures that require intraoperative video analysis, such as transanal total mesorectal excision (TaTME); however, manual video indexing is time-consuming. Thus, in this study, we constructed an annotated video dataset for TaTME with surgical step information and evaluated the performance of a deep learning model in recognizing the surgical steps in TaTME. METHODS: This was a single-institutional retrospective feasibility study. All TaTME intraoperative videos were divided into frames. Each frame was manually annotated as one of the following major steps: (1) purse-string closure; (2) full thickness transection of the rectal wall; (3) down-to-up dissection; (4) dissection after rendezvous; and (5) purse-string suture for stapled anastomosis. Steps 3 and 4 were each further classified into four sub-steps, specifically, for dissection of the anterior, posterior, right, and left planes. A convolutional neural network-based deep learning model, Xception, was utilized for the surgical step classification task. RESULTS: Our dataset containing 50 TaTME videos was randomly divided into two subsets for training and testing with 40 and 10 videos, respectively. The overall accuracy obtained for all classification steps was 93.2%. By contrast, when sub-step classification was included in the performance analysis, a mean accuracy (± standard deviation) of 78% (± 5%), with a maximum accuracy of 85%, was obtained. CONCLUSIONS: To the best of our knowledge, this is the first study based on automatic surgical step classification for TaTME. Our deep learning model self-learned and recognized the classification steps in TaTME videos with high accuracy after training. 
Thus, our model can be applied to a system for intraoperative guidance or for postoperative video indexing and analysis in TaTME procedures.


Subject(s)
Deep Learning, Laparoscopy, Proctectomy, Rectal Neoplasms, Transanal Endoscopic Surgery, Humans, Laparoscopy/methods, Postoperative Complications/surgery, Proctectomy/education, Rectal Neoplasms/surgery, Rectum/surgery, Retrospective Studies, Transanal Endoscopic Surgery/methods
12.
Surg Endosc ; 36(7): 5531-5539, 2022 07.
Article in English | MEDLINE | ID: mdl-35476155

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has been largely investigated in the field of surgery, particularly in quality assurance. However, AI-guided navigation during surgery has not yet been put into practice because a sufficient level of performance has not been reached. We aimed to develop deep learning-based AI image processing software to identify the location of the recurrent laryngeal nerve during thoracoscopic esophagectomy and determine whether the incidence of recurrent laryngeal nerve paralysis is reduced using this software. METHODS: More than 3000 images extracted from 20 thoracoscopic esophagectomy videos and 40 images extracted from 8 thoracoscopic esophagectomy videos were annotated for identification of the recurrent laryngeal nerve. The Dice coefficient was used to assess the detection performance of the model and that of surgeons (specialized esophageal surgeons and certified general gastrointestinal surgeons). The performance was compared using a test set. RESULTS: The average Dice coefficient of the AI model was 0.58. This was not significantly different from the Dice coefficient of the group of specialized esophageal surgeons (P = 0.26); however, it was significantly higher than that of the group of certified general gastrointestinal surgeons (P = 0.019). CONCLUSIONS: Our software's performance in identification of the recurrent laryngeal nerve was superior to that of general surgeons and almost reached that of specialized surgeons. Our software provides real-time identification and will be useful for thoracoscopic esophagectomy after further developments.


Subject(s)
Esophageal Neoplasms, Esophagectomy, Artificial Intelligence, Esophageal Neoplasms/surgery, Esophagectomy/methods, Humans, Lymph Node Excision/methods, Recurrent Laryngeal Nerve/surgery, Retrospective Studies
13.
BMC Surg ; 22(1): 12, 2022 Jan 08.
Article in English | MEDLINE | ID: mdl-34998376

ABSTRACT

BACKGROUND: Mastery of technical skills is one of the fundamental goals of surgical training for novices. Meanwhile, performing laparoscopic procedures requires exceptional surgical skills compared to open surgery. However, it is often difficult for trainees to learn through observation and practice only. Virtual reality (VR)-based surgical simulation is expanding and rapidly advancing. A major obstacle for laparoscopic trainees is the difficulty of performing dissection well. Therefore, we developed a new VR simulation system, Lap-PASS LP-100, which focuses on training to create proper tension on the tissue in laparoscopic sigmoid colectomy dissection. This study aimed to validate this new VR simulation system. METHODS: A total of 50 participants were asked to perform medial dissection of the meso-sigmoid colon on the VR simulator. Forty-four surgeons and six non-medical professionals working in the National Cancer Center Hospital East, Japan, were enrolled in this study. The surgeons comprised laparoscopic surgery experts with > 100 laparoscopic surgeries (LS), 21 novices with experience of < 100 LS, and five surgeons without previous experience in LS. The participants' surgical performance was evaluated by three blinded raters using the Global Operative Assessment of Laparoscopic Skills (GOALS). RESULTS: There were significant differences (P-values < 0.044) in all GOALS items between the non-medical professionals and surgeons. The experts were significantly superior to the novices in one item of GOALS: efficiency ([4(4-5) vs. 4(3-4)], with a 95% confidence interval, p = 0.042). However, bimanual dexterity and total score were not significantly different between the experts and novices, though they tended to be higher in the experts. CONCLUSIONS: Our study demonstrated a full validation of our new system. It could detect surgeons' ability to perform surgical dissection, suggesting that this VR simulator could be an effective training tool.
This surgical VR simulator might have tremendous potential to enhance training for surgeons.


Subject(s)
Laparoscopy, Simulation Training, Virtual Reality, Clinical Competence, Colectomy, Sigmoid Colon, Computer Simulation, Dissection, Humans, User-Computer Interface
14.
Dig Endosc ; 34(5): 1021-1029, 2022 Jul.
Article in English | MEDLINE | ID: mdl-34748658

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has made considerable progress in image recognition, especially in the analysis of endoscopic images. The availability of large-scale annotated datasets has contributed to the recent progress in this field. Datasets of high-quality annotated endoscopic images are widely available, particularly in Japan. A system for collecting annotated data reported daily could aid in accumulating a significant number of high-quality annotated datasets. AIM: We assessed the validity of using daily annotated endoscopic images from a constructed reporting system for a prototype AI model for polyp detection. METHODS: We constructed an automated collection system for daily annotated datasets from an endoscopy reporting system. The key images were selected and annotated for each case during daily practice only, not retrospectively. We automatically extracted annotated endoscopic images of diminutive colon polyps that had been diagnosed (study period March-September 2018) using the keywords of diagnostic information, and additionally collected normal colon images. The collected dataset was divided into training and validation sets to build and evaluate the AI system. The detection model was developed using a deep learning algorithm, RetinaNet. RESULTS: The automated system collected endoscopic images (47,391) from colonoscopies (745), and extracted key colon polyp images (1356) with localized annotations. The sensitivity, specificity, and accuracy of our AI model were 97.0%, 97.7%, and 97.3% (n = 300), respectively. CONCLUSION: The automated system enabled the development of a high-performance colon polyp detector using images in an endoscopy reporting system without retrospective annotation work.
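Sensitivity, specificity, and accuracy, as reported for the polyp detector, follow directly from the confusion-matrix counts; a minimal sketch (the counts below are made up for illustration and are not the study's, which were not published at this granularity):

```python
def detection_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # fraction of true polyps detected
    specificity = tn / (tn + fp)          # fraction of normal images passed
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for an n = 300 evaluation set.
sens, spec, acc = detection_metrics(tp=97, fp=2, tn=195, fn=6)
```

Reporting all three together matters because a detector can trade sensitivity for specificity simply by shifting its confidence threshold.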


Subject(s)
Artificial Intelligence, Colonic Polyps, Colon, Colonic Polyps/diagnostic imaging, Colonoscopy/methods, Humans, Retrospective Studies
15.
BMC Gastroenterol ; 21(1): 234, 2021 May 22.
Article in English | MEDLINE | ID: mdl-34022798

ABSTRACT

BACKGROUND: The Cryoballoon focal ablation system (CbFAS) for dysplastic Barrett's esophagus is simple, time-saving, and has high therapeutic efficacy. This study aimed to evaluate the technical feasibility and tissue damage of combination therapy with endoscopic resection (ER) and CbFAS in porcine models. METHODS: Three pigs (A, B, and C) were included, and all ER procedures were performed by cap-assisted endoscopic mucosal resection (EMR). Combination therapy for each pig was performed as follows: (a) CbFAS was performed on a post-EMR mucosal defect for Pig A; (b) CbFAS on a post-EMR scar for Pig B; and (c) EMR on a post-CbFAS scar for Pig C. All pigs were euthanized 32 days after the initial procedure, and the tissue damage was evaluated. RESULTS: All endoscopic procedures were performed as scheduled. None of the subjects experienced anorexia, rapid weight loss, bleeding, or perforation during the observation period. On histological assessment, there was little difference between the tissue treated with CbFAS alone and that treated with CbFAS in combination with ER. CONCLUSION: Combination therapy with ER and CbFAS can be technically feasible, and its outcome was not significantly different from CbFAS alone in terms of tissue damage.


Subject(s)
Barrett Esophagus, Cryosurgery, Endoscopic Mucosal Resection, Esophageal Neoplasms, Animals, Barrett Esophagus/surgery, Esophageal Neoplasms/surgery, Esophagoscopy, Swine, Treatment Outcome
16.
Surg Endosc ; 35(6): 2493-2499, 2021 06.
Article in English | MEDLINE | ID: mdl-32430531

ABSTRACT

BACKGROUND: Urethral injuries (UIs) are significant complications of transanal total mesorectal excision (TaTME). It is important for surgeons to identify the prostate during TaTME to prevent UI occurrence; intraoperative image navigation could be considered useful in this regard. This study aimed to develop a deep learning model for real-time automatic prostate segmentation based on intraoperative video during TaTME and to evaluate the proposed model's performance. METHODS: This was a single-institution retrospective feasibility study. Semantic segmentation of the prostate area was performed using a convolutional neural network (CNN)-based approach. DeepLab v3 plus was utilized as the CNN model for the semantic segmentation task. The Dice coefficient (DC), which is calculated from the overlap between the ground truth and the predicted area, was utilized as an evaluation metric for the proposed model. RESULTS: Five hundred prostate images were randomly extracted from 17 TaTME videos, and the prostate area was manually annotated on each image. Fivefold cross-validation tests were performed, and the average DC value equaled 0.71 ± 0.04, with a maximum value of 0.77. Additionally, the model operated at 11 fps, which provides acceptable real-time performance. CONCLUSIONS: To the best of the authors' knowledge, this is the first effort toward realization of computer-assisted TaTME, and the results obtained in this study suggest that the proposed deep learning model can be utilized for real-time automatic prostate segmentation. In future endeavors, the accuracy and performance of the proposed model will be improved to enable its use in practical applications, and its capability to reduce UI risks during TaTME will be verified.


Subject(s)
Image Processing, Computer-Assisted , Prostate , Computers , Feasibility Studies , Humans , Male , Prostate/diagnostic imaging , Prostate/surgery , Retrospective Studies
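The Dice coefficient used to evaluate the segmentation model above measures the overlap between a predicted mask and the ground-truth mask. A minimal sketch of the metric on binary masks (function and variable names are illustrative, not from the paper):

```python
def dice_coefficient(pred, truth):
    """Dice coefficient for two equal-length binary masks (lists of 0/1).

    Dice = 2 * |pred ∩ truth| / (|pred| + |truth|)
    """
    if len(pred) != len(truth):
        raise ValueError("masks must have the same length")
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Example: 4-pixel predicted vs. ground-truth mask
pred = [1, 1, 0, 0]
truth = [1, 0, 1, 0]
print(dice_coefficient(pred, truth))  # 2*1 / (2+2) = 0.5
```

A DC of 0.71, as reported above, therefore means the predicted and annotated prostate regions share roughly 71% overlap by this measure.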
17.
Surg Endosc; 34(11): 4924-4931, 2020 Nov.
Article in English | MEDLINE | ID: mdl-31797047

ABSTRACT

BACKGROUND: Automatic surgical workflow recognition is a key component of context-aware computer-assisted surgery (CA-CAS) systems. However, automatic surgical phase recognition for colorectal surgery has not been reported. We aimed to develop a deep learning model for automatic, real-time surgical phase recognition from laparoscopic sigmoidectomy (Lap-S) videos and to assess the accuracy of automatic surgical phase and action recognition from visual information alone. METHODS: The dataset comprised 71 Lap-S cases. The video data were split into static-image frames at 1/30 s intervals. Every Lap-S video was manually divided into 11 surgical phases (Phases 0-10), and each frame was manually annotated with the surgical action. The model was generated from the training data and validated on a set of unseen test data, using convolutional neural network (CNN)-based deep learning. RESULTS: The average surgical time was 175 min (± 43 min SD), and the duration of individual surgical phases varied widely between cases. Each surgery started in the first phase (Phase 0) and ended in the last phase (Phase 10), with phase transitions occurring 14 (± 2 SD) times per procedure on average. The accuracy of automatic surgical phase recognition was 91.9%, and the accuracies of automatic action recognition for extracorporeal action and irrigation were 89.4% and 82.5%, respectively. Moreover, the system performed real-time automatic surgical phase recognition at 32 fps. CONCLUSIONS: The CNN-based deep learning approach enabled recognition of surgical phases and actions in 71 Lap-S cases from manually annotated data, with high accuracy for both phase and target-action recognition. This study also demonstrated the feasibility of real-time automatic surgical phase recognition at a high frame rate.


Subject(s)
Colectomy/methods , Colon, Sigmoid/surgery , Deep Learning , Laparoscopy/methods , Surgery, Computer-Assisted/methods , Computer Systems , Humans , Operative Time , Retrospective Studies , Workflow
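The phase-recognition accuracy reported above is, at its core, the fraction of frames whose predicted phase label matches the annotation, and the transition count is the number of label changes along the annotated sequence. A minimal sketch of both quantities (function names and the toy sequences are illustrative, not from the paper):

```python
def frame_accuracy(predicted, annotated):
    """Fraction of frames where the predicted phase equals the annotated phase."""
    if len(predicted) != len(annotated):
        raise ValueError("sequences must be the same length")
    correct = sum(p == a for p, a in zip(predicted, annotated))
    return correct / len(predicted)

def count_transitions(phases):
    """Number of phase transitions in an annotated phase sequence."""
    return sum(a != b for a, b in zip(phases, phases[1:]))

# Toy example: 8 frames spanning phases 0 -> 1 -> 2
annotated = [0, 0, 1, 1, 1, 2, 2, 2]
predicted = [0, 0, 1, 2, 1, 2, 2, 2]
print(frame_accuracy(predicted, annotated))  # 7/8 = 0.875
print(count_transitions(annotated))          # 2
```

At 30 frames per second, a 175 min procedure yields roughly 315,000 frames per case, which is why frame-level annotation and evaluation dominate the effort in studies like this one.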
18.
BMC Surg; 20(1): 250, 2020 Oct 22.
Article in English | MEDLINE | ID: mdl-33092548

ABSTRACT

BACKGROUND: Establishing anastomotic integrity is crucial for avoiding anastomotic complications in colorectal surgery. This study aimed to evaluate the safety and feasibility of assessing anastomotic integrity with a novel oxygen saturation imaging endoscope in a porcine ischemia model. METHODS: In three pigs, the new endoscope system was used to check the mechanical completeness of the anastomosis and to capture tissue oxygen saturation (StO2) images. The technology derives StO2 images from the difference in visible-light absorption coefficients between oxy- and deoxyhemoglobin. Bowel perfusion at the proximal rectum was assessed before and after the anastomosis, and 1 min and 30 min after ligation of the cranial rectal artery (CRA). RESULTS: The completeness of the anastomoses was confirmed by the absence of air leakage. Intraluminal oxygen saturation imaging was successfully performed in all animals. There was no significant difference in the StO2 level before and after the anastomosis (52.6 ± 2.0 vs. 52.0 ± 2.6; p = 0.76). The StO2 level of the intestine on the oral side of the anastomosis 1 min after CRA ligation was significantly lower than immediately after the anastomosis (15.9 ± 6.0 vs. 52.0 ± 2.6; p = 0.006). There was no significant difference in StO2 between 1 min and 30 min after CRA ligation (15.9 ± 6.0 vs. 12.1 ± 5.3; p = 0.41). CONCLUSION: Novel oxygen saturation imaging endoscopy was safe and feasible for assessing anastomotic integrity in this experimental model.


Subject(s)
Anastomosis, Surgical , Endoscopy , Hemoglobins/analysis , Ischemia , Oxygen/analysis , Rectum/blood supply , Anastomosis, Surgical/adverse effects , Anastomotic Leak/diagnostic imaging , Anastomotic Leak/etiology , Animals , Disease Models, Animal , Endoscopy/methods , Feasibility Studies , Female , Ischemia/blood , Ischemia/diagnostic imaging , Oxyhemoglobins/analysis , Rectum/diagnostic imaging , Rectum/metabolism , Rectum/surgery , Swine , Treatment Outcome
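The StO2 imaging above rests on the principle that oxy- and deoxyhemoglobin absorb visible light differently at different wavelengths. Purely as an illustration of that principle (this is not the endoscope's actual algorithm, and the extinction coefficients below are hypothetical numbers chosen for the toy example), a two-wavelength linear unmixing can be sketched as:

```python
def sto2_from_two_wavelengths(mu, e_hbo2, e_hb):
    """Estimate StO2 = [HbO2] / ([HbO2] + [Hb]) by linear unmixing.

    mu:     measured absorption coefficients at two wavelengths
    e_hbo2: extinction coefficients of oxyhemoglobin at those wavelengths
    e_hb:   extinction coefficients of deoxyhemoglobin at those wavelengths

    Solves mu[i] = e_hbo2[i] * C_hbo2 + e_hb[i] * C_hb (a 2x2 linear
    system) for the two chromophore concentrations via Cramer's rule.
    """
    det = e_hbo2[0] * e_hb[1] - e_hbo2[1] * e_hb[0]
    c_hbo2 = (mu[0] * e_hb[1] - mu[1] * e_hb[0]) / det
    c_hb = (e_hbo2[0] * mu[1] - e_hbo2[1] * mu[0]) / det
    return c_hbo2 / (c_hbo2 + c_hb)

# Hypothetical coefficients; tissue that is 80% oxygenated should
# come back with StO2 close to 0.8.
mu = [0.8 * 1.0 + 0.2 * 3.0, 0.8 * 3.0 + 0.2 * 1.0]  # synthetic measurement
print(sto2_from_two_wavelengths(mu, e_hbo2=[1.0, 3.0], e_hb=[3.0, 1.0]))
```

A per-pixel map of such estimates, rendered as a color overlay, is conceptually what an StO2 image displays.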
19.
Br J Surg; 110(10): 1355-1358, 2023 Sep 6.
Article in English | MEDLINE | ID: mdl-37552629

ABSTRACT

To prevent intraoperative organ injury, surgeons strive to identify anatomical structures as early and accurately as possible during surgery. The objective of this prospective observational study was to develop artificial intelligence (AI)-based real-time automatic organ recognition models for laparoscopic surgery and to compare their performance with that of surgeons. The time taken to recognize target anatomy was compared between the AI models and both expert and novice surgeons. The AI models recognized the target anatomy faster than the surgeons, especially the novices. These findings suggest that AI has the potential to compensate for skill and experience gaps among surgeons.


Subject(s)
Colorectal Surgery , Digestive System Surgical Procedures , Laparoscopy , Humans , Artificial Intelligence
20.
J Mater Sci Mater Med; 29(9): 140, 2018 Aug 17.
Article in English | MEDLINE | ID: mdl-30120625

ABSTRACT

Several attempts have been made to fabricate esophageal tissue engineering scaffolds. However, most of these scaffolds possess randomly oriented fibres and uncontrollable pore sizes. To mimic the native esophageal tissue structure, electro-hydrodynamic jetting (e-jetting) was used in this study to fabricate scaffolds with aligned fibres and controlled pore size. A hydrophilic additive, Pluronic F127 (F127), was blended with polycaprolactone (PCL) to improve the wettability of the scaffolds and hence cell adhesion. PCL/F127 composite scaffolds with different weight ratios (0-12%) were fabricated, and their wettability, phase composition, and mechanical properties were investigated. The results show that the e-jetted scaffolds have controllable fibres oriented in two perpendicular directions, similar to the structure of the human esophagus, and a pore size suitable for cell infiltration. In addition, the scaffolds with 8% F127 exhibited better wettability (contact angle of 14°) and an ultimate tensile strength (1.2 MPa) that mimics native esophageal tissue. Furthermore, primary human esophageal fibroblasts were seeded on the e-jetted scaffolds; the PCL/F127 scaffolds showed enhanced cell proliferation and vascular endothelial growth factor (VEGF) expression compared with pristine PCL scaffolds (1.5- and 25.8-fold increases, respectively; P < 0.001). An in vitro wound model made with the PCL/F127 scaffolds showed better cell migration than the PCL scaffolds. In summary, e-jetted PCL/F127 scaffolds offer a promising strategy for esophageal tissue repair.


Subject(s)
Esophagus , Poloxamer/chemistry , Polyesters/chemistry , Tissue Engineering/methods , Tissue Scaffolds , Cell Adhesion , Cell Proliferation , Cell Survival , Fibroblasts/metabolism , Humans , Hydrophobic and Hydrophilic Interactions , Imaging, Three-Dimensional , Materials Testing , Microscopy, Confocal , Porosity , Stress, Mechanical , Tensile Strength , Vascular Endothelial Growth Factor A/metabolism , Wettability , Wound Healing , X-Ray Diffraction