1.
Dig Endosc ; 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-39031797

ABSTRACT

OBJECTIVES: Colonoscopy (CS) is an important screening method for the early detection and removal of precancerous lesions. The stool state during bowel preparation (BP) should be properly evaluated to perform CS with sufficient quality. This study aimed to develop a smartphone application (app) with an artificial intelligence (AI) model for stool state evaluation during BP and to investigate whether the use of the app could maintain an adequate quality of CS. METHODS: First, stool images were collected in our hospital to develop the AI model and were categorized into grade 1 (solid or muddy stools), grade 2 (cloudy watery stools), and grade 3 (clear watery stools). The AI model for stool state evaluation (grades 1-3) was constructed and internally verified using the cross-validation method. Second, a prospective study was conducted on the quality of CS using the app in our hospital. The primary end-point was the proportion of patients who achieved a Boston Bowel Preparation Scale (BBPS) score ≥6 among those who successfully used the app. RESULTS: The AI model showed mean accuracy rates of 90.2%, 65.0%, and 89.3% for grades 1, 2, and 3, respectively. The prospective study enrolled 106 patients and revealed that 99.0% (95% confidence interval 95.3-99.9%) of patients achieved a BBPS score ≥6. CONCLUSION: The proportion of patients with a BBPS score ≥6 during CS using the developed app exceeded the expected value set in advance. This app could contribute to the performance of high-quality CS in clinical practice.
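The primary end-point above is a simple proportion reported with a 95% confidence interval. As a minimal sketch of how such an interval can be computed (the abstract does not state which interval method was used; the Wilson score interval shown here is one standard choice, and the success count of 105/106 is inferred for illustration only):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% confidence interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = p_hat + z**2 / (2 * n)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (centre - margin) / denom, (centre + margin) / denom

# Illustrative counts: 105 of 106 enrolled patients reaching BBPS >= 6
low, high = wilson_ci(105, 106)
print(f"{105/106:.1%} (95% CI {low:.1%}-{high:.1%})")
```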

2.
Langenbecks Arch Surg ; 409(1): 213, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38995411

ABSTRACT

PURPOSE: Laparoscopic distal gastrectomy (LDG) is a difficult procedure for early career surgeons. Artificial intelligence (AI)-based surgical step recognition is crucial for establishing context-aware computer-aided surgery systems. In this study, we aimed to develop an automatic recognition model for LDG using AI and evaluate its performance. METHODS: Patients who underwent LDG at our institution in 2019 were included in this study. Surgical video data were classified into the following nine steps: (1) Port insertion; (2) Lymphadenectomy on the left side of the greater curvature; (3) Lymphadenectomy on the right side of the greater curvature; (4) Division of the duodenum; (5) Lymphadenectomy of the suprapancreatic area; (6) Lymphadenectomy on the lesser curvature; (7) Division of the stomach; (8) Reconstruction; and (9) From reconstruction to completion of surgery. Two gastric surgeons manually assigned all annotation labels. Convolutional neural network (CNN)-based image classification was further employed to identify surgical steps. RESULTS: The dataset comprised 40 LDG videos. Over 1,000,000 frames with annotated labels of the LDG steps were used to train the deep-learning model, with 30 and 10 surgical videos for training and validation, respectively. The classification performance of the developed model was as follows: precision, 0.88; recall, 0.87; F1 score, 0.88; and overall accuracy, 0.89. The inference speed of the proposed model was 32 fps. CONCLUSION: The developed CNN model automatically recognized the LDG surgical process with relatively high accuracy. Adding more data to this model could provide a fundamental technology that could be used in the development of future surgical instruments.
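The precision, recall, F1 score, and overall accuracy reported above are standard frame-level classification metrics. A minimal sketch of how they can be computed from per-frame step predictions, assuming scikit-learn and purely illustrative label arrays (the averaging scheme used in the paper is not stated; macro averaging is shown here):

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# One integer step label (1-9) per video frame; values are illustrative
y_true = [1, 2, 2, 3, 5, 5, 6, 7, 8, 9]
y_pred = [1, 2, 3, 3, 5, 5, 6, 7, 8, 8]

print("precision", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("recall   ", recall_score(y_true, y_pred, average="macro", zero_division=0))
print("F1 score ", f1_score(y_true, y_pred, average="macro", zero_division=0))
print("accuracy ", accuracy_score(y_true, y_pred))
```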


Subject(s)
Artificial Intelligence, Gastrectomy, Laparoscopy, Proof of Concept Study, Stomach Neoplasms, Humans, Gastrectomy/methods, Laparoscopy/methods, Stomach Neoplasms/surgery, Stomach Neoplasms/pathology, Female, Male, Middle Aged, Computer-Assisted Surgery/methods, Aged, Lymph Node Excision
3.
Med Image Anal ; 91: 102985, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37844472

ABSTRACT

This paper introduces the "SurgT: Surgical Tracking" challenge, which was organized in conjunction with the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2022). There were two purposes for the creation of this challenge: (1) the establishment of the first standardized benchmark for the research community to assess soft-tissue trackers; and (2) to encourage the development of unsupervised deep learning methods, given the lack of annotated data in surgery. A dataset of 157 stereo endoscopic videos from 20 clinical cases, along with stereo camera calibration parameters, was provided. Participants were assigned the task of developing algorithms to track the movement of soft tissues, represented by bounding boxes, in stereo endoscopic videos. At the end of the challenge, the developed methods were assessed on a previously hidden test subset. This assessment uses benchmarking metrics that were purposely developed for this challenge to verify the efficacy of unsupervised deep learning algorithms in tracking soft tissue. The metric used for ranking the methods was the Expected Average Overlap (EAO) score, which measures the average overlap between a tracker's and the ground truth bounding boxes. The challenge was won by the deep learning submission from ICVS-2Ai, with the highest EAO score of 0.617. This method employs ARFlow to estimate unsupervised dense optical flow from cropped images, using photometric and regularization losses. The runner-up, Jmees, with an EAO of 0.583, uses deep learning for surgical tool segmentation on top of a non-deep-learning baseline method, CSRT. CSRT by itself scores a similar EAO of 0.563. The results from this challenge show that currently, non-deep learning methods are still competitive. The dataset and benchmarking tool created for this challenge have been made publicly available at https://surgt.grand-challenge.org/. This challenge is expected to contribute to the development of autonomous robotic surgery and other digital surgical technologies.
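The EAO ranking metric is built on per-frame overlap between the predicted and ground-truth bounding boxes. A minimal sketch of the underlying intersection-over-union and a simple per-sequence average overlap, assuming axis-aligned boxes as (x1, y1, x2, y2); the official EAO protocol additionally averages over sequence lengths and handles tracking failures, which is omitted here:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def average_overlap(pred_boxes, gt_boxes):
    """Mean per-frame IoU over one sequence (simplified accuracy term)."""
    return sum(iou(p, g) for p, g in zip(pred_boxes, gt_boxes)) / len(gt_boxes)

# Illustrative two-frame sequence
print(average_overlap([(10, 10, 50, 50), (12, 12, 52, 52)],
                      [(11, 11, 51, 51), (11, 11, 51, 51)]))
```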


Subject(s)
Robotic Surgical Procedures, Humans, Benchmarking, Algorithms, Endoscopy, Computer-Assisted Image Processing/methods
4.
JAMA Surg ; 158(8): e231131, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37285142

ABSTRACT

Importance: Automatic surgical skill assessment with artificial intelligence (AI) is more objective than manual video review-based skill assessment and can reduce human burden. Standardization of surgical field development is an important aspect of this skill assessment. Objective: To develop a deep learning model that can recognize the standardized surgical fields in laparoscopic sigmoid colon resection and to evaluate the feasibility of automatic surgical skill assessment based on the concordance of the standardized surgical field development using the proposed deep learning model. Design, Setting, and Participants: This retrospective diagnostic study used intraoperative videos of laparoscopic colorectal surgery submitted to the Japan Society for Endoscopic Surgery between August 2016 and November 2017. Data were analyzed from April 2020 to September 2022. Interventions: Videos of surgery performed by expert surgeons with Endoscopic Surgical Skill Qualification System (ESSQS) scores higher than 75 were used to construct a deep learning model able to recognize a standardized surgical field and output its similarity to standardized surgical field development as an AI confidence score (AICS). Other videos were extracted as the validation set. Main Outcomes and Measures: Videos with scores less than or greater than 2 SDs from the mean were defined as the low- and high-score groups, respectively. The correlation between AICS and ESSQS score and the screening performance using AICS for low- and high-score groups were analyzed. Results: The sample included 650 intraoperative videos, 60 of which were used for model construction and 60 for validation. The Spearman rank correlation coefficient between the AICS and ESSQS score was 0.81. The receiver operating characteristic (ROC) curves for the screening of the low- and high-score groups were plotted, and the areas under the ROC curve for the low- and high-score group screening were 0.93 and 0.94, respectively. Conclusions and Relevance: The AICS from the developed model strongly correlated with the ESSQS score, demonstrating the model's feasibility for use as a method of automatic surgical skill assessment. The findings also suggest the feasibility of the proposed model for creating an automated screening system for surgical skills and its potential application to other types of endoscopic procedures.
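The two quantities used above, the rank correlation between the AI confidence score (AICS) and the ESSQS score and the area under the ROC curve for screening low- or high-score videos, can be computed as in the following minimal sketch, assuming SciPy and scikit-learn; the score arrays and the percentile-based group labels are purely illustrative, not the study's 2-SD definition:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
essqs = rng.uniform(40, 100, size=60)              # illustrative manual ESSQS scores
aics = essqs / 100 + rng.normal(0, 0.05, size=60)  # illustrative AI confidence scores

rho, p_value = spearmanr(aics, essqs)
print(f"Spearman rho = {rho:.2f} (P = {p_value:.3g})")

# Screening example: label the lowest-scoring videos as the "low-score group"
low_group = (essqs < np.percentile(essqs, 10)).astype(int)
# Lower AICS should flag the low-score group, hence the sign flip on the score
print("AUC (low-score screening) =", roc_auc_score(low_group, -aics))
```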


Subject(s)
Digestive System Surgical Procedures, Laparoscopy, Humans, Artificial Intelligence, Retrospective Studies, Laparoscopy/methods, ROC Curve
5.
Comput Methods Programs Biomed ; 236: 107561, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37119774

ABSTRACT

BACKGROUND AND OBJECTIVE: In order to be context-aware, computer-assisted surgical systems require accurate, real-time automatic surgical workflow recognition. In the past several years, surgical video has been the most commonly-used modality for surgical workflow recognition. But with the democratization of robot-assisted surgery, new modalities, such as kinematics, are now accessible. Some previous methods use these new modalities as input for their models, but their added value has rarely been studied. This paper presents the design and results of the "PEg TRAnsfer Workflow recognition" (PETRAW) challenge with the objective of developing surgical workflow recognition methods based on one or more modalities and studying their added value. METHODS: The PETRAW challenge included a data set of 150 peg transfer sequences performed on a virtual simulator. This data set included videos, kinematic data, semantic segmentation data, and annotations, which described the workflow at three levels of granularity: phase, step, and activity. Five tasks were proposed to the participants: three were related to the recognition at all granularities simultaneously using a single modality, and two addressed the recognition using multiple modalities. The mean application-dependent balanced accuracy (AD-Accuracy) was used as the evaluation metric because it takes class balance into account and is more clinically relevant than a frame-by-frame score. RESULTS: Seven teams participated in at least one task, with four participating in every task. The best results were obtained by combining video and kinematic data (AD-Accuracy between 90% and 93% for the four teams that participated in all tasks). CONCLUSION: The improvement of surgical workflow recognition methods using multiple modalities compared with unimodal methods was significant for all teams. However, the longer execution time required for video/kinematic-based methods (compared with kinematic-only methods) must be considered. Indeed, one must ask if it is wise to increase computing time by 2000 to 20,000% only to increase accuracy by 3%. The PETRAW data set is publicly available at www.synapse.org/PETRAW to encourage further research in surgical workflow recognition.
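The evaluation metric above is a class-balanced accuracy computed at each granularity (phase, step, activity). The exact application-dependent weighting used by the challenge is not given in the abstract; a minimal sketch of plain balanced accuracy (mean per-class recall), which it builds on, assuming scikit-learn and illustrative labels:

```python
from sklearn.metrics import balanced_accuracy_score

# One label per frame for a single granularity level (e.g., "phase"); values are illustrative
y_true = ["idle", "transfer", "transfer", "transfer", "return", "return"]
y_pred = ["idle", "transfer", "transfer", "return",   "return", "return"]

# Mean of per-class recalls, so rare classes weigh as much as frequent ones
print(balanced_accuracy_score(y_true, y_pred))
```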


Subject(s)
Algorithms, Robotic Surgical Procedures, Humans, Workflow, Robotic Surgical Procedures/methods
6.
BJS Open ; 7(2)2023 03 07.
Article in English | MEDLINE | ID: mdl-36882082

ABSTRACT

BACKGROUND: Purse-string suture in transanal total mesorectal excision is a key procedural step. The aims of this study were to develop an automatic skill assessment system for purse-string suture in transanal total mesorectal excision using deep learning and to evaluate the reliability of the score output from the proposed system. METHODS: Purse-string suturing extracted from consecutive transanal total mesorectal excision videos was manually scored using a performance rubric scale and computed into a deep learning model as training data. Deep learning-based image regression analysis was performed, and the purse-string suture skill scores predicted by the trained deep learning model (artificial intelligence score) were output as continuous variables. The outcomes of interest were the correlation, assessed using Spearman's rank correlation coefficient, between the artificial intelligence score and the manual score, purse-string suture time, and surgeon's experience. RESULTS: Forty-five videos obtained from five surgeons were evaluated. The mean(s.d.) total manual score was 9.2(2.7) points, the mean(s.d.) total artificial intelligence score was 10.2(3.9) points, and the mean(s.d.) absolute error between the artificial intelligence and manual scores was 0.42(0.39). Further, the artificial intelligence score significantly correlated with the purse-string suture time (correlation coefficient = -0.728) and surgeon's experience (P < 0.001). CONCLUSION: An automatic purse-string suture skill assessment system using deep learning-based video analysis was shown to be feasible, and the results indicated that the artificial intelligence score was reliable. This application could be expanded to other endoscopic surgeries and procedures.
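The system above predicts a continuous skill score from video frames, i.e., image regression rather than classification. A minimal sketch of that idea in PyTorch, assuming torchvision; the backbone, head size, loss, and score range are illustrative choices, not the architecture used in the paper:

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-18 backbone with a single-output regression head (randomly initialised here;
# in practice an ImageNet-pretrained backbone would usually be fine-tuned)
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.L1Loss()                 # absolute error against the manual rubric score
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

frames = torch.randn(8, 3, 224, 224)    # illustrative batch of suture frames
manual_scores = torch.rand(8, 1) * 16   # illustrative rubric scores

optimizer.zero_grad()
loss = criterion(model(frames), manual_scores)
loss.backward()
optimizer.step()
```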


Subject(s)
Deep Learning, Rectal Neoplasms, Humans, Artificial Intelligence, Reproducibility of Results, Sutures
7.
Urology ; 173: 98-103, 2023 03.
Article in English | MEDLINE | ID: mdl-36572225

ABSTRACT

OBJECTIVE: To develop a convolutional neural network to recognize the seminal vesicle and vas deferens (SV-VD) in the posterior approach of robot-assisted radical prostatectomy (RARP) and assess the performance of the convolutional neural network model under clinically relevant conditions. METHODS: Intraoperative videos of robot-assisted radical prostatectomy performed by the posterior approach from 3 institutions were obtained between 2019 and 2020. Using SV-VD dissection videos, semantic segmentation of the seminal vesicle-vas deferens area was performed using a convolutional neural network-based approach. The dataset was split into training and test data in a 10:3 ratio. The average time required by 6 novice urologists to correctly recognize the SV-VD was compared using intraoperative videos with and without segmentation masks generated by the convolutional neural network model, which was evaluated with the test data using the Dice similarity coefficient. Training and test datasets were compared using the Mann-Whitney U-test and chi-square test. Time required to recognize the SV-VD was evaluated using the Mann-Whitney U-test. RESULTS: From 26 patient videos, 1040 images were created (520 SV-VD annotated images and 520 SV-VD non-displayed images). The convolutional neural network model had a Dice similarity coefficient value of 0.73 in the test data. Compared with original videos, videos with the generated segmentation mask promoted significantly faster seminal vesicle and vas deferens recognition (P < .001). CONCLUSION: The convolutional neural network model provides accurate recognition of the SV-VD in the posterior approach RARP, which may be helpful, especially for novice urologists.
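The Dice similarity coefficient reported above measures the overlap between the predicted and manually annotated SV-VD masks. A minimal sketch of its computation on binary masks, assuming NumPy arrays of the same shape and illustrative toy masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Illustrative 4x4 masks
pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(dice_coefficient(pred, gt))  # 2*3 / (4+3) ≈ 0.857
```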


Subject(s)
Deep Learning, Robotics, Male, Humans, Seminal Vesicles, Vas Deferens, Prostatectomy/methods, Computer-Assisted Image Processing
9.
Int J Surg ; 105: 106856, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36031068

ABSTRACT

BACKGROUND: To perform accurate laparoscopic hepatectomy (LH) without injury, novel intraoperative systems of computer-assisted surgery (CAS) for LH are expected. Automated surgical workflow identification is a key component for developing CAS systems. This study aimed to develop a deep-learning model for automated surgical step identification in LH. MATERIALS AND METHODS: We constructed a dataset comprising 40 cases of pure LH videos; 30 and 10 cases were used for the training and testing datasets, respectively. Each video was divided into 30 frames per second as static images. LH was divided into nine surgical steps (Steps 0-8), and each frame was annotated as being within one of these steps in the training set. After extracorporeal actions (Step 0) were excluded from the video, two deep-learning models for automated surgical step identification, an 8-step model and a 6-step model, were developed using a convolutional neural network (Models 1 and 2). Each frame in the testing dataset was classified in real time using the constructed model. RESULTS: More than 8 million frames were annotated for surgical step identification from the pure LH videos. The overall accuracy of Model 1 was 0.891, which increased to 0.947 in Model 2. The median and average per-case accuracies in Model 2 were 0.927 (range, 0.884-0.997) and 0.937 ± 0.04 (standard deviation), respectively. Real-time automated surgical step identification was performed at 21 frames per second. CONCLUSIONS: We developed a highly accurate deep-learning model for surgical step identification in pure LH. Our model could be applied to intraoperative systems of CAS.
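Building a frame-level dataset like the one above involves splitting each video into still frames and assigning a step label from annotated time intervals. A minimal sketch with OpenCV; the file name, interval format, and step numbers are illustrative assumptions, and a real pipeline would write frames to disk rather than keep millions of them in memory:

```python
import cv2

# Illustrative annotation: (start_s, end_s, step) intervals for one hepatectomy video
STEP_INTERVALS = [(0, 300, 0), (300, 1800, 1), (1800, 4200, 2)]

def step_for_time(t_seconds: float) -> int:
    for start, end, step in STEP_INTERVALS:
        if start <= t_seconds < end:
            return step
    return -1  # outside the annotated range

cap = cv2.VideoCapture("case001.mp4")     # hypothetical video file
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back to 30 fps if metadata is missing
frames, labels = [], []
index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
    labels.append(step_for_time(index / fps))
    index += 1
cap.release()
```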


Subject(s)
Artificial Intelligence, Laparoscopy, Hepatectomy, Humans, Laparoscopy/methods, Neural Networks (Computer), Workflow
10.
Surg Endosc ; 36(8): 6105-6112, 2022 08.
Article in English | MEDLINE | ID: mdl-35764837

ABSTRACT

BACKGROUND: Recognition of the inferior mesenteric artery (IMA) during colorectal cancer surgery is crucial to avoid intraoperative hemorrhage and define the appropriate lymph node dissection line. This retrospective feasibility study aimed to develop an IMA anatomical recognition model for laparoscopic colorectal resection using deep learning, and to evaluate its recognition accuracy and real-time performance. METHODS: A complete multi-institutional surgical video database, LapSig300, was used for this study. Intraoperative videos of 60 patients who underwent laparoscopic sigmoid colon resection or high anterior resection were randomly extracted from the database and included. Deep learning-based semantic segmentation accuracy and real-time performance of the developed IMA recognition model were evaluated using the Dice similarity coefficient (DSC) and frames per second (FPS), respectively. RESULTS: In a fivefold cross-validation conducted using 1200 annotated images for the IMA semantic segmentation task, the mean DSC value was 0.798 (± 0.0161 SD) and the maximum DSC was 0.816. The proposed deep learning model operated at a speed of over 12 FPS. CONCLUSION: To the best of our knowledge, this is the first study to evaluate the feasibility of real-time vascular anatomical navigation during laparoscopic colorectal surgery using a deep learning-based semantic segmentation approach. This experimental study was conducted to confirm the feasibility of our model; therefore, its safety and usefulness were not verified in clinical practice. However, the proposed deep learning model demonstrated a relatively high accuracy in recognizing IMA in intraoperative images. The proposed approach has potential application in image navigation systems for unfixed soft tissues and organs during various laparoscopic surgeries.
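The real-time criterion above (frames per second) is simply inference throughput. A minimal sketch of how it can be measured for any per-frame model, assuming an illustrative stand-in `model(frame)` callable; this is not the timing protocol used in the study:

```python
import time
import numpy as np

def measure_fps(model, frames) -> float:
    """Average inference throughput in frames per second."""
    start = time.perf_counter()
    for frame in frames:
        model(frame)  # one forward pass per frame
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# Illustrative stand-in model and data
dummy_model = lambda x: x.mean()
dummy_frames = [np.random.rand(512, 512, 3) for _ in range(100)]
print(f"{measure_fps(dummy_model, dummy_frames):.1f} FPS")
```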


Subject(s)
Laparoscopy, Inferior Mesenteric Artery, Sigmoid Colon/blood supply, Humans, Computer-Assisted Image Processing, Laparoscopy/methods, Lymph Node Excision/methods, Inferior Mesenteric Artery/surgery, Retrospective Studies
11.
Surg Endosc ; 36(7): 5531-5539, 2022 07.
Article in English | MEDLINE | ID: mdl-35476155

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has been largely investigated in the field of surgery, particularly in quality assurance. However, AI-guided navigation during surgery has not yet been put into practice because a sufficient level of performance has not been reached. We aimed to develop deep learning-based AI image processing software to identify the location of the recurrent laryngeal nerve during thoracoscopic esophagectomy and determine whether the incidence of recurrent laryngeal nerve paralysis is reduced using this software. METHODS: More than 3000 images extracted from 20 thoracoscopic esophagectomy videos and 40 images extracted from 8 thoracoscopic esophagectomy videos were annotated for identification of the recurrent laryngeal nerve. The Dice coefficient was used to assess the detection performance of the model and that of surgeons (specialized esophageal surgeons and certified general gastrointestinal surgeons). The performance was compared using a test set. RESULTS: The average Dice coefficient of the AI model was 0.58. This was not significantly different from the Dice coefficient of the group of specialized esophageal surgeons (P = 0.26); however, it was significantly higher than that of the group of certified general gastrointestinal surgeons (P = 0.019). CONCLUSIONS: Our software's performance in identification of the recurrent laryngeal nerve was superior to that of general surgeons and almost reached that of specialized surgeons. Our software provides real-time identification and will be useful for thoracoscopic esophagectomy after further developments.


Subject(s)
Esophageal Neoplasms, Esophagectomy, Artificial Intelligence, Esophageal Neoplasms/surgery, Esophagectomy/methods, Humans, Lymph Node Excision/methods, Recurrent Laryngeal Nerve/surgery, Retrospective Studies
12.
Surg Endosc ; 36(2): 1143-1151, 2022 02.
Article in English | MEDLINE | ID: mdl-33825016

ABSTRACT

BACKGROUND: Dividing a surgical procedure into a sequence of identifiable and meaningful steps facilitates intraoperative video data acquisition and storage. These efforts are especially valuable for technically challenging procedures that require intraoperative video analysis, such as transanal total mesorectal excision (TaTME); however, manual video indexing is time-consuming. Thus, in this study, we constructed an annotated video dataset for TaTME with surgical step information and evaluated the performance of a deep learning model in recognizing the surgical steps in TaTME. METHODS: This was a single-institutional retrospective feasibility study. All TaTME intraoperative videos were divided into frames. Each frame was manually annotated as one of the following major steps: (1) purse-string closure; (2) full thickness transection of the rectal wall; (3) down-to-up dissection; (4) dissection after rendezvous; and (5) purse-string suture for stapled anastomosis. Steps 3 and 4 were each further classified into four sub-steps, specifically, for dissection of the anterior, posterior, right, and left planes. A convolutional neural network-based deep learning model, Xception, was utilized for the surgical step classification task. RESULTS: Our dataset containing 50 TaTME videos was randomly divided into two subsets for training and testing with 40 and 10 videos, respectively. The overall accuracy obtained for all classification steps was 93.2%. By contrast, when sub-step classification was included in the performance analysis, a mean accuracy (± standard deviation) of 78% (± 5%), with a maximum accuracy of 85%, was obtained. CONCLUSIONS: To the best of our knowledge, this is the first study based on automatic surgical step classification for TaTME. Our deep learning model self-learned and recognized the classification steps in TaTME videos with high accuracy after training. Thus, our model can be applied to a system for intraoperative guidance or for postoperative video indexing and analysis in TaTME procedures.
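The step classifier above is a fine-tuned Xception image classifier. A minimal sketch of that setup in Keras, assuming TensorFlow is available; the five-class head follows the major steps listed in the abstract, while the input size, frozen backbone, and optimizer are illustrative choices rather than the training configuration used in the paper:

```python
import tensorflow as tf

NUM_STEPS = 5  # the five major TaTME steps listed in the abstract

# ImageNet-pretrained Xception backbone with a new classification head
base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3)
)
base.trainable = False  # optionally unfreeze later for full fine-tuning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_STEPS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_frames, train_step_labels, validation_data=..., epochs=...)
```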


Subject(s)
Deep Learning, Laparoscopy, Proctectomy, Rectal Neoplasms, Transanal Endoscopic Surgery, Humans, Laparoscopy/methods, Postoperative Complications/surgery, Proctectomy/education, Rectal Neoplasms/surgery, Rectum/surgery, Retrospective Studies, Transanal Endoscopic Surgery/methods
13.
Dig Endosc ; 34(5): 1021-1029, 2022 Jul.
Article in English | MEDLINE | ID: mdl-34748658

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has made considerable progress in image recognition, especially in the analysis of endoscopic images. The availability of large-scale annotated datasets has contributed to the recent progress in this field. Datasets of high-quality annotated endoscopic images are widely available, particularly in Japan. A system for collecting annotated data reported daily could aid in accumulating a significant number of high-quality annotated datasets. AIM: We assessed the validity of using endoscopic images annotated daily in the constructed reporting system to build a prototype AI model for polyp detection. METHODS: We constructed an automated collection system for daily annotated datasets from an endoscopy reporting system. The key images were selected and annotated for each case only during daily practice, not retrospectively. We automatically extracted annotated endoscopic images of diminutive colon polyps that had been diagnosed (study period March-September 2018) using the keywords of the diagnostic information, and additionally collected normal colon images. The collected dataset was divided into training and validation sets to build and evaluate the AI system. The detection model was developed using a deep learning algorithm, RetinaNet. RESULTS: The automated system collected endoscopic images (47,391) from colonoscopies (745), and extracted key colon polyp images (1356) with localized annotations. The sensitivity, specificity, and accuracy of our AI model were 97.0%, 97.7%, and 97.3% (n = 300), respectively. CONCLUSION: The automated system enabled the development of a high-performance colon polyp detector using images from the endoscopy reporting system without retrospective annotation work.


Subject(s)
Artificial Intelligence, Colonic Polyps, Colon, Colonic Polyps/diagnostic imaging, Colonoscopy/methods, Humans, Retrospective Studies
14.
JAMA Netw Open ; 4(8): e2120786, 2021 08 02.
Article in English | MEDLINE | ID: mdl-34387676

ABSTRACT

Importance: A high level of surgical skill is essential to prevent intraoperative problems. One important aspect of surgical education is surgical skill assessment, with pertinent feedback facilitating efficient skill acquisition by novices. Objectives: To develop a 3-dimensional (3-D) convolutional neural network (CNN) model for automatic surgical skill assessment and to evaluate the performance of the model in classification tasks by using laparoscopic colorectal surgical videos. Design, Setting, and Participants: This prognostic study used surgical videos acquired prior to 2017. In total, 650 laparoscopic colorectal surgical videos were provided for study purposes by the Japan Society for Endoscopic Surgery, and 74 were randomly extracted. Every video had highly reliable scores based on the Endoscopic Surgical Skill Qualification System (ESSQS, range 1-100, with higher scores indicating greater surgical skill) established by the society. Data were analyzed June to December 2020. Main Outcomes and Measures: From the groups with scores less than the mean minus 2 SDs, within 1 SD of the mean, and greater than the mean plus 2 SDs, 17, 26, and 31 videos, respectively, were randomly extracted. In total, 1480 video clips with a length of 40 seconds each were extracted for each surgical step (medial mobilization, lateral mobilization, inferior mesenteric artery transection, and mesorectal transection) and separated into 1184 training sets and 296 test sets. Automatic surgical skill classification was performed based on spatiotemporal video analysis using the fully automated 3-D CNN model, and classification accuracies and screening accuracies for the groups with scores less than the mean minus 2 SDs and greater than the mean plus 2 SDs were calculated. Results: The mean (SD) ESSQS score of all 650 intraoperative videos was 66.2 (8.6) points and that of the 74 videos used in the study was 67.6 (16.1) points. The proposed 3-D CNN model automatically classified video clips into groups with scores less than the mean minus 2 SDs, within 1 SD of the mean, and greater than the mean plus 2 SDs with a mean (SD) accuracy of 75.0% (6.3%). The highest accuracy was 83.8% for the inferior mesenteric artery transection. The model also screened for the group with scores less than the mean minus 2 SDs with 94.1% sensitivity and 96.5% specificity and for the group with scores greater than the mean plus 2 SDs with 87.1% sensitivity and 86.0% specificity. Conclusions and Relevance: The results of this prognostic study showed that the proposed 3-D CNN model classified laparoscopic colorectal surgical videos with sufficient accuracy to be used for screening groups with scores greater than the mean plus 2 SDs and less than the mean minus 2 SDs. The proposed approach was fully automatic and easy to use for various types of surgery, and no special annotations or kinetics data extraction were required, indicating that this approach warrants further development for application to automatic surgical skill assessment.
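The model above is a 3-D CNN that classifies 40-second video clips into three score groups from spatiotemporal features. A minimal sketch of a 3-D CNN clip classifier in PyTorch, assuming torchvision's video models; the r3d_18 backbone and clip shape are illustrative stand-ins, not the architecture used in the study:

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

NUM_GROUPS = 3  # < mean - 2 SD, within 1 SD of the mean, > mean + 2 SD

model = r3d_18(weights=None)                 # 3-D ResNet-18 backbone
model.fc = nn.Linear(model.fc.in_features, NUM_GROUPS)

# Illustrative batch: 2 clips x 3 channels x 16 frames x 112 x 112 pixels
clips = torch.randn(2, 3, 16, 112, 112)
logits = model(clips)
print(logits.shape)  # torch.Size([2, 3])
```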


Subject(s)
Clinical Competence, Colorectal Surgery/standards, Laparoscopy/standards, Neural Networks (Computer), Video Recording, Humans, Japan
15.
Surg Endosc ; 35(6): 2493-2499, 2021 06.
Article in English | MEDLINE | ID: mdl-32430531

ABSTRACT

BACKGROUND: Urethral injuries (UIs) are significant complications pertaining to transanal total mesorectal excision (TaTME). It is important for surgeons to identify the prostate during TaTME to prevent UI occurrence; intraoperative image navigation could be considered useful in this regard. This study aims at developing a deep learning model for real-time automatic prostate segmentation based on intraoperative video during TaTME. The proposed model's performance has been evaluated. METHODS: This was a single-institution retrospective feasibility study. Semantic segmentation of the prostate area was performed using a convolutional neural network (CNN)-based approach. DeepLab v3 plus was utilized as the CNN model for the semantic segmentation task. The Dice coefficient (DC), which is calculated based on the overlapping area between the ground truth and predicted area, was utilized as an evaluation metric for the proposed model. RESULTS: Five hundred prostate images were randomly extracted from 17 TaTME videos, and the prostate area was manually annotated on each image. Fivefold cross-validation tests were performed, and as observed, the average DC value equaled 0.71 ± 0.04, the maximum value being 0.77. Additionally, the model operated at 11 fps, which provides acceptable real-time performance. CONCLUSIONS: To the best of the authors' knowledge, this is the first effort toward realization of computer-assisted TaTME, and results obtained in this study suggest that the proposed deep learning model can be utilized for real-time automatic prostate segmentation. In future endeavors, the accuracy and performance of the proposed model will be improved to enable its use in practical applications, and its capability to reduce UI risks during TaTME will be verified.


Subject(s)
Computer-Assisted Image Processing, Prostate, Computers, Feasibility Studies, Humans, Male, Prostate/diagnostic imaging, Prostate/surgery, Retrospective Studies
16.
Head Neck ; 42(9): 2581-2592, 2020 09.
Article in English | MEDLINE | ID: mdl-32542892

ABSTRACT

BACKGROUND: There are no published reports evaluating the ability of artificial intelligence (AI) in the endoscopic diagnosis of superficial laryngopharyngeal cancer (SLPC). We present our newly developed diagnostic AI model for SLPC detection. METHODS: We used RetinaNet for object detection. SLPC and normal laryngopharyngeal mucosal images obtained from narrow-band imaging were used for the learning and validation data sets. Each independent data set comprised 400 SLPC and 800 normal mucosal images. The diagnostic AI model was constructed stage-wise and evaluated at each learning stage using validation data sets. RESULTS: In the validation data sets (100 SLPC cases), the median tumor size was 13.2 mm; flat/elevated/depressed types were found in 77/21/2 cases. Sensitivity, specificity, and accuracy improved each time learning images were added, reaching 95.5%, 98.4%, and 97.3%, respectively, after learning from all SLPC and normal mucosal images. CONCLUSIONS: The novel AI model is helpful for the detection of laryngopharyngeal cancer at an early stage.
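RetinaNet, the object detector named above, is available off the shelf in several libraries. A minimal inference sketch with torchvision's implementation; the COCO-pretrained weights, random input, and confidence threshold are illustrative, since the study trained its own model on narrow-band images:

```python
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

# COCO-pretrained detector as a stand-in; a real system would be trained on SLPC images
model = retinanet_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 512, 512)          # illustrative endoscopic frame, values in [0, 1]
with torch.no_grad():
    detections = model([image])[0]       # dict with "boxes", "labels", "scores"

keep = detections["scores"] > 0.5        # illustrative confidence threshold
print(detections["boxes"][keep], detections["scores"][keep])
```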


Subject(s)
Deep Learning, Neoplasms, Artificial Intelligence, Humans, Narrow Band Imaging, Sensitivity and Specificity
17.
Int J Surg ; 79: 88-94, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32413503

ABSTRACT

BACKGROUND: Identifying laparoscopic surgical videos using artificial intelligence (AI) facilitates the automation of several currently time-consuming manual processes, including video analysis, indexing, and video-based skill assessment. This study aimed to construct a large annotated dataset comprising laparoscopic colorectal surgery (LCRS) videos from multiple institutions and evaluate the accuracy of automatic recognition for surgical phase, action, and tool by combining this dataset with AI. MATERIALS AND METHODS: A total of 300 intraoperative videos were collected from 19 high-volume centers. A series of surgical workflows were classified into 9 phases and 3 actions, and the areas of 5 tools were annotated by painting. More than 82 million frames were annotated for a phase and action classification task, and 4000 frames were annotated for a tool segmentation task. Of these frames, 80% were used for the training dataset and 20% for the test dataset. A convolutional neural network (CNN) was used to analyze the videos. Intersection over union (IoU) was used as the evaluation metric for tool recognition. RESULTS: The overall accuracies for the automatic surgical phase and action classification tasks were 81.0% and 83.2%, respectively. The mean IoU for the automatic tool segmentation task for 5 tools was 51.2%. CONCLUSIONS: A large annotated dataset of LCRS videos was constructed, and the phase, action, and tool were recognized with high accuracy using AI. Our dataset has potential uses in medical applications such as automatic video indexing and surgical skill assessments. Open research will assist in improving CNN models by making our dataset available in the field of computer vision.


Subject(s)
Colon/surgery, Laparoscopy/methods, Neural Networks (Computer), Rectum/surgery, Workflow, Adult, Aged, Aged 80 and over, Clinical Competence, Female, Humans, Male, Middle Aged
18.
Surg Endosc ; 34(11): 4924-4931, 2020 11.
Article in English | MEDLINE | ID: mdl-31797047

ABSTRACT

BACKGROUND: Automatic surgical workflow recognition is a key component for developing context-aware computer-assisted surgery (CA-CAS) systems. However, automatic surgical phase recognition focused on colorectal surgery has not been reported. We aimed to develop a deep learning model for automatic surgical phase recognition based on laparoscopic sigmoidectomy (Lap-S) videos, which could be used for real-time phase recognition, and to clarify the accuracies of the automatic surgical phase and action recognitions using visual information. METHODS: The dataset used contained 71 cases of Lap-S. The video data were divided into frame units every 1/30 s as static images. Every Lap-S video was manually divided into 11 surgical phases (Phases 0-10) and manually annotated for each surgical action on every frame. The model was generated based on the training data. Validation of the model was performed on a set of unseen test data. Convolutional neural network (CNN)-based deep learning was also used. RESULTS: The average surgical time was 175 min (± 43 min SD), with the individual surgical phases also showing high variations in duration between cases. Each surgery started in the first phase (Phase 0) and ended in the last phase (Phase 10), and phase transitions occurred 14 (± 2 SD) times per procedure on average. The accuracy of the automatic surgical phase recognition was 91.9%, and those for the automatic surgical action recognition of extracorporeal action and irrigation were 89.4% and 82.5%, respectively. Moreover, this system could perform real-time automatic surgical phase recognition at 32 fps. CONCLUSIONS: The CNN-based deep learning approach enabled the recognition of surgical phases and actions in 71 Lap-S cases based on manually annotated data. This system could perform automatic surgical phase recognition and automatic target surgical action recognition with high accuracy. Moreover, this study showed the feasibility of real-time automatic surgical phase recognition with a high frame rate.


Subject(s)
Colectomy/methods, Sigmoid Colon/surgery, Deep Learning, Laparoscopy/methods, Computer-Assisted Surgery/methods, Computer Systems, Humans, Operative Time, Retrospective Studies, Workflow
19.
PLoS One ; 13(8): e0201365, 2018.
Article in English | MEDLINE | ID: mdl-30086162

ABSTRACT

The BRAFV600E mutation is the most prevalent driver mutation of sporadic papillary thyroid cancers (PTC). It was previously shown that prenatal or postnatal expression of BRAFV600E under elevated TSH levels induced thyroid cancers in several genetically engineered mouse models. In contrast, we found that postnatal expression of BRAFV600E under physiologic TSH levels failed to develop thyroid cancers in conditional transgenic Tg(LNL-BrafV600E) mice injected in the thyroid with adenovirus expressing Cre under control of the thyroglobulin promoter (Ad-TgP-Cre). In this study, we first demonstrated that BrafCA/+ mice carrying a Cre-activated allele of BrafV600E exhibited higher transformation efficiency than Tg(LNL-BrafV600E) mice when crossed with TPO-Cre mice. As a result, most BrafCA/+ mice injected with Ad-TgP-Cre developed thyroid cancers in 1 year. Histologic examination showed follicular or cribriform-like structures with positive TG and PAX staining and no colloid formation. Some tumors also had papillary structure component with lower TG expression. Concomitant PTEN haploinsufficiency in injected BrafCA/+;Ptenf/+ mice induced tumors predominantly exhibiting papillary structures and occasionally undifferentiated solid patterns with normal to low PAX expression and low to absent TG expression. Typical nuclear features of human PTC and extrathyroidal invasion were observed primarily in the latter mice. The percentages of pERK-, Ki67- and TUNEL-positive cells were all higher in the latter. In conclusion, we established novel thyroid cancer mouse models in which postnatal expression of BRAFV600E alone under physiologic TSH levels induces PTC. Simultaneous PTEN haploinsufficiency tends to promote tumor growth and de-differentiation.


Subject(s)
Haploinsufficiency, Missense Mutation, Experimental Neoplasms, PTEN Phosphohydrolase, Proto-Oncogene Proteins B-raf, Papillary Thyroid Cancer, Thyroid Neoplasms, Thyrotropin/blood, Amino Acid Substitution, Animals, Mice, Transgenic Mice, Experimental Neoplasms/enzymology, Experimental Neoplasms/genetics, Experimental Neoplasms/pathology, PTEN Phosphohydrolase/genetics, PTEN Phosphohydrolase/metabolism, Proto-Oncogene Proteins B-raf/genetics, Proto-Oncogene Proteins B-raf/metabolism, Papillary Thyroid Cancer/enzymology, Papillary Thyroid Cancer/genetics, Papillary Thyroid Cancer/pathology, Thyroid Neoplasms/enzymology, Thyroid Neoplasms/genetics, Thyroid Neoplasms/pathology
20.
Extremophiles ; 9(2): 127-34, 2005 Apr.
Article in English | MEDLINE | ID: mdl-15538645

ABSTRACT

The group II chaperonin from the hyperthermophilic archaeon Pyrococcus horikoshii OT3 (PhCPN) and its functional cooperation with the cognate prefoldin were investigated. PhCPN existed as a homo-oligomer in a double-ring structure, which protected porcine heart citrate synthase from thermal aggregation at 45 degrees C, and did the same for the isopropylmalate dehydrogenase (IPMDH) of a thermophilic bacterium, Thermus thermophilus HB8, at 90 degrees C. PhCPN also enhanced the refolding of green fluorescent protein (GFP), which had been unfolded by low pH, in an ATP-dependent manner. Unexpectedly, functional cooperation between PhCPN and Pyrococcus prefoldin (PhPFD) in the refolding of GFP was not observed. Instead, cooperation between PhCPN and PhPFD was observed in the refolding of IPMDH unfolded with guanidine hydrochloride. Although PhCPN alone was not effective in the refolding of IPMDH, the refolding efficiency was enhanced by the cooperation of PhCPN with PhPFD.


Subject(s)
Chaperonins/chemistry, Chaperonins/genetics, Pyrococcus horikoshii/metabolism, Adenosine Triphosphatases/chemistry, Adenosine Triphosphate/chemistry, Animals, Molecular Cloning, Polyacrylamide Gel Electrophoresis, Green Fluorescent Proteins/chemistry, Green Fluorescent Proteins/metabolism, Guanidine/chemistry, Heat, Hydrogen-Ion Concentration, Electron Microscopy, Molecular Chaperones/chemistry, Plasmids/metabolism, Polymerase Chain Reaction, Protein Folding, Pyrococcus/metabolism, Fluorescence Spectrometry, Swine, Temperature, Time Factors