Results 1 - 11 of 11
1.
Br J Surg; 111(1), 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-37935636

ABSTRACT

The growing availability of surgical digital data and developments in analytics such as artificial intelligence (AI) are being harnessed to improve surgical care. However, technical and cultural barriers to real-time intraoperative AI assistance exist. This early-stage clinical evaluation shows the technical feasibility of concurrently deploying several AIs in operating rooms for real-time assistance during procedures. In addition, potentially relevant clinical applications of these AI models are explored with a multidisciplinary cohort of key stakeholders.


Subjects
Laparoscopic Cholecystectomy, Humans, Artificial Intelligence
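As a purely illustrative aside on the concurrent-deployment idea described in this abstract, the sketch below runs several stand-in frame-level models in parallel on a single video frame using a thread pool. The model names and the predict() interface are hypothetical assumptions, not taken from the paper.

```python
# Illustrative sketch only: running several independent frame-level models
# concurrently on one frame of an intraoperative video stream.
from concurrent.futures import ThreadPoolExecutor

class DummyModel:
    """Stand-in for any frame-level model (e.g., phase or tool recognition)."""
    def __init__(self, name):
        self.name = name

    def predict(self, frame):
        # A real model would run inference here; we return a placeholder.
        return {"model": self.name, "output": None}

models = [DummyModel("phase"), DummyModel("tools"), DummyModel("cvs")]

def analyze_frame(frame):
    """Run all models on one frame in parallel and collect their outputs."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        return list(pool.map(lambda m: m.predict(frame), models))

# Example: feed a single placeholder frame through the pipeline.
print(analyze_frame(frame=None))
```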
2.
Med Image Anal; 89: 102888, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37451133

ABSTRACT

Formalizing surgical activities as triplets of the instruments used, actions performed, and target anatomies is becoming a gold-standard approach to surgical activity modeling. This formalization yields a more detailed understanding of tool-tissue interaction, which can be used to develop better artificial intelligence assistance for image-guided surgery. Earlier efforts, including the CholecTriplet challenge introduced in 2021, brought together techniques aimed at recognizing these triplets from surgical footage. Also estimating the spatial locations of the triplets would offer more precise intraoperative context-aware decision support for computer-assisted intervention. This paper presents the CholecTriplet2022 challenge, which extends surgical action triplet modeling from recognition to detection. It includes weakly supervised bounding-box localization of every visible surgical instrument (or tool), as the key actors, and the modeling of each tool's activity in the form of a triplet. The paper describes a baseline method and 10 new deep learning algorithms presented at the challenge to solve the task. It also provides a thorough methodological comparison of the methods, an in-depth analysis of the obtained results across multiple metrics and across visual and procedural challenges, a discussion of their significance, and useful insights for future research directions and applications in surgery.


Subjects
Artificial Intelligence, Computer-Assisted Surgery, Humans, Endoscopy, Algorithms, Computer-Assisted Surgery/methods, Surgical Instruments
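A minimal sketch of the detection formulation described above: a ⟨instrument, verb, target⟩ triplet paired with an instrument bounding box, and a simple matching rule (identical labels plus box IoU ≥ 0.5). This is a common convention for detection matching, not necessarily the challenge's official evaluation protocol; all names and coordinates below are invented.

```python
# Sketch of a surgical action triplet detection and a basic matching criterion.
from dataclasses import dataclass

@dataclass
class TripletDetection:
    instrument: str
    verb: str
    target: str
    box: tuple  # (x1, y1, x2, y2) of the instrument, in pixels

def box_iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def matches(pred, gt, iou_thresh=0.5):
    """A prediction counts as correct if labels agree and boxes overlap enough."""
    same_labels = (pred.instrument, pred.verb, pred.target) == (gt.instrument, gt.verb, gt.target)
    return same_labels and box_iou(pred.box, gt.box) >= iou_thresh

pred = TripletDetection("grasper", "retract", "gallbladder", (100, 80, 220, 200))
gt = TripletDetection("grasper", "retract", "gallbladder", (110, 90, 230, 210))
print(matches(pred, gt))  # True
```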
3.
Sci Rep; 13(1): 9235, 2023 Jun 07.
Article in English | MEDLINE | ID: mdl-37286660

ABSTRACT

Surgical video analysis facilitates education and research. However, video recordings of endoscopic surgeries can contain privacy-sensitive information, especially if the endoscopic camera is moved out of the patient's body and out-of-body scenes are recorded. Therefore, the identification of out-of-body scenes in endoscopic videos is of major importance to preserve the privacy of patients and operating room staff. This study developed and validated a deep learning model for the identification of out-of-body images in endoscopic videos. The model was trained and evaluated on an internal dataset of 12 different types of laparoscopic and robotic surgeries and was externally validated on two independent multicentric test datasets of laparoscopic gastric bypass and cholecystectomy surgeries. Model performance was evaluated against human ground-truth annotations using the area under the receiver operating characteristic curve (ROC AUC). The internal dataset, consisting of 356,267 images from 48 videos, and the two multicentric test datasets, consisting of 54,385 and 58,349 images from 10 and 20 videos, respectively, were annotated. The model identified out-of-body images with 99.97% ROC AUC on the internal test dataset. The mean ± standard deviation ROC AUC was 99.94 ± 0.07% on the multicentric gastric bypass dataset and 99.71 ± 0.40% on the multicentric cholecystectomy dataset. The model can reliably identify out-of-body images in endoscopic videos and is publicly shared. This facilitates privacy preservation in surgical video analysis.


Subjects
Deep Learning, Laparoscopy, Humans, Privacy, Video Recording, Cholecystectomy
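A small sketch of the evaluation idea only: frame-level out-of-body probabilities scored against binary ground truth with ROC AUC, followed by a naive thresholding step for censoring. The scores and labels are made up; the paper's model and datasets are not reproduced here.

```python
# Sketch: scoring out-of-body frame probabilities with ROC AUC and flagging
# frames to censor for privacy.
import numpy as np
from sklearn.metrics import roc_auc_score

# 1 = out-of-body frame, 0 = inside the patient.
y_true = np.array([0, 0, 0, 1, 1, 0, 1, 0])
y_score = np.array([0.02, 0.10, 0.07, 0.95, 0.88, 0.15, 0.91, 0.05])

print(f"ROC AUC: {roc_auc_score(y_true, y_score):.4f}")

# Frames above a chosen operating point could be blanked or blurred
# before the video is shared.
print("frames to censor:", np.flatnonzero(y_score >= 0.5))
```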
4.
Med Image Anal; 86: 102803, 2023 May.
Article in English | MEDLINE | ID: mdl-37004378

ABSTRACT

Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps, or events, leaving out the fine-grained interaction details of the surgical activity; yet these are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as ⟨instrument, verb, target⟩ triplets delivers more comprehensive detail about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and the assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms from the competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison and in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery.


Subjects
Benchmarking, Laparoscopy, Humans, Algorithms, Operating Rooms, Workflow, Deep Learning
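A generic sketch of two ingredients mentioned in the abstract: a simple ensemble that averages class probabilities from several models, and mean average precision computed over triplet classes. It uses scikit-learn on toy arrays and is not the challenge's official evaluation toolkit; the class count and model outputs are placeholders.

```python
# Sketch: late-fusion ensemble by probability averaging, scored with mAP.
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
n_frames, n_classes = 200, 10          # toy sizes, not the real triplet class count
y_true = rng.integers(0, 2, size=(n_frames, n_classes))

# Predictions from three hypothetical models.
model_probs = [rng.random((n_frames, n_classes)) for _ in range(3)]
ensemble = np.mean(model_probs, axis=0)  # average per-class probabilities

# mAP = mean of per-class average precision.
ap_per_class = [average_precision_score(y_true[:, c], ensemble[:, c])
                for c in range(n_classes)]
print(f"mAP: {np.mean(ap_per_class):.3f}")
```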
5.
Med Image Anal; 86: 102770, 2023 May.
Article in English | MEDLINE | ID: mdl-36889206

ABSTRACT

PURPOSE: Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance, or improve the training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open-data, single-center video dataset. In this work, we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill. METHODS: To achieve this goal, a dataset with 33 laparoscopic cholecystectomy videos from three surgical centers with a total operation time of 22 h was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision challenge, sub-challenge for surgical workflow and skill analysis. Here, 12 research teams trained and submitted their machine learning algorithms for phase, action, and instrument recognition and/or skill assessment. RESULTS: F1-scores ranged from 23.9% to 67.7% for phase recognition (n = 9 teams) and from 38.5% to 63.8% for instrument presence detection (n = 8 teams), but only from 21.8% to 23.3% for action recognition (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team). CONCLUSION: Surgical workflow and skill analysis are promising technologies to support the surgical team, but there is still room for improvement, as shown by our comparison of machine learning algorithms. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work. In future studies, it is of utmost importance to create more open, high-quality datasets in order to allow the development of artificial intelligence and cognitive robotics in surgery.


Subjects
Artificial Intelligence, Benchmarking, Humans, Workflow, Algorithms, Machine Learning
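A minimal sketch of frame-wise phase recognition scoring with a macro F1-score, the kind of metric reported above. The phase names, labels, and predictions are toy values, not HeiChole annotations.

```python
# Sketch: macro F1-score for frame-wise surgical phase recognition.
from sklearn.metrics import f1_score

PHASES = ["preparation", "calot_triangle", "clipping", "dissection",
          "packaging", "cleaning", "retraction"]  # 7 phases, names illustrative

y_true = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 6]   # annotated phase per frame
y_pred = [0, 1, 1, 1, 2, 2, 2, 3, 3, 4, 5, 6]   # predicted phase per frame

macro_f1 = f1_score(y_true, y_pred, average="macro")
print(f"macro F1 over {len(PHASES)} phases: {macro_f1:.3f}")
```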
6.
Biomedicines; 11(1), 2023 Jan 12.
Article in English | MEDLINE | ID: mdl-36672702

ABSTRACT

The aim of this work was to compare the classification of cardiac MR images of AL versus ATTR amyloidosis by neural networks and by experienced human readers. Cine-MR images and late gadolinium enhancement (LGE) images of 120 patients were studied (70 AL and 50 ATTR). A VGG16 convolutional neural network (CNN) was trained with 5-fold cross-validation, taking care to strictly assign all images of a given patient to either the training set or the test set. The analysis was performed at the patient level by averaging the predictions obtained for each image. The classification accuracy obtained between AL and ATTR amyloidosis was 0.750 for the cine-CNN, 0.611 for the gado-CNN, and between 0.617 and 0.675 for human readers. The corresponding AUC of the ROC curve was 0.839 for the cine-CNN, 0.679 for the gado-CNN (p < 0.004 vs. cine), and 0.714 for the best human reader (p < 0.007 vs. cine). Logistic regression combining the cine-CNN and gado-CNN, as well as analyses focused on specific orientation planes, did not change the overall results. We conclude that the cine-CNN discriminates between AL and ATTR amyloidosis significantly better than the gado-CNN or human readers, but with lower performance than reported in studies where visual diagnosis is easy, and is currently suboptimal for clinical practice.
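A minimal sketch of the two data-handling points stressed in this abstract: all images of a patient stay on the same side of the train/test split (here via scikit-learn's GroupKFold, an assumption about how such a split could be implemented), and patient-level predictions are obtained by averaging per-image probabilities. Arrays and labels are toy placeholders, not the study's data.

```python
# Sketch: patient-disjoint cross-validation folds and patient-level averaging.
import numpy as np
from sklearn.model_selection import GroupKFold

n_images = 12
X = np.random.rand(n_images, 8)                          # stand-in image features
y = np.array([0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1])       # 0 = AL, 1 = ATTR
patient_id = np.array([0, 0, 1, 2, 2, 3, 4, 4, 5, 5, 6, 7])

gkf = GroupKFold(n_splits=4)
for train_idx, test_idx in gkf.split(X, y, groups=patient_id):
    # All images of a patient fall entirely in train or entirely in test.
    assert set(patient_id[train_idx]).isdisjoint(patient_id[test_idx])

def patient_level_prediction(image_probs, patient_ids):
    """Average per-image probabilities over each patient, then threshold."""
    return {int(pid): float(np.mean(image_probs[patient_ids == pid]) >= 0.5)
            for pid in np.unique(patient_ids)}

probs = np.random.rand(n_images)      # stand-in per-image CNN probabilities
print(patient_level_prediction(probs, patient_id))
```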

8.
Surg Endosc; 36(11): 8379-8386, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35171336

ABSTRACT

BACKGROUND: A computer vision (CV) platform named EndoDigest was recently developed to facilitate the use of surgical videos. Specifically, EndoDigest automatically provides short video clips to effectively document the critical view of safety (CVS) in laparoscopic cholecystectomy (LC). The aim of the present study is to validate EndoDigest on a multicentric dataset of LC videos. METHODS: LC videos from 4 centers were manually annotated with the time of the cystic duct division and an assessment of CVS criteria. Incomplete recordings, bailout procedures, and procedures with an intraoperative cholangiogram were excluded. EndoDigest leveraged predictions of deep learning models for workflow analysis in a rule-based inference system designed to estimate the time of the cystic duct division. Performance was assessed by computing the error in estimating the manually annotated time of the cystic duct division. To provide concise video documentation of CVS, EndoDigest extracted video clips showing the 2 min preceding and the 30 s following the predicted cystic duct division. The relevance of the documentation was evaluated by assessing CVS in the automatically extracted 2.5-min-long video clips. RESULTS: 144 of the 174 LC videos from 4 centers were analyzed. EndoDigest located the time of the cystic duct division with a mean error of 124.0 ± 270.6 s despite the use of fluorescent cholangiography in 27 procedures and great variations in surgical workflows across centers. The surgical evaluation found that 108 (75.0%) of the automatically extracted short video clips documented CVS effectively. CONCLUSIONS: EndoDigest was robust enough to reliably locate the time of the cystic duct division and to provide efficient video documentation of CVS despite the highly variable workflows. Training specifically on data from each center could improve results; however, this multicentric validation shows the potential for clinical translation of this surgical data science tool to efficiently document surgical safety.


Subjects
Laparoscopic Cholecystectomy, Humans, Laparoscopic Cholecystectomy/methods, Video Recording, Cholangiography, Documentation, Computers
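A short sketch of the two validation measures described: the error between predicted and annotated cystic duct division times, and the fraction of extracted clips judged to document CVS. All timestamps and judgements below are invented, not the study's data.

```python
# Sketch: error statistics for event localization and CVS documentation rate.
import numpy as np

annotated_s = np.array([1510.0, 2040.0, 980.0, 1725.0])   # ground-truth times (s)
predicted_s = np.array([1495.0, 2110.0, 1010.0, 1700.0])  # model estimates (s)

errors = np.abs(predicted_s - annotated_s)
print(f"mean +/- SD error: {errors.mean():.1f} +/- {errors.std():.1f} s")

cvs_documented = np.array([True, True, False, True])       # surgeon review of clips
print(f"clips documenting CVS: {100 * cvs_documented.mean():.1f}%")
```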
9.
Ann Surg; 275(5): 955-961, 2022 May 01.
Article in English | MEDLINE | ID: mdl-33201104

ABSTRACT

OBJECTIVE: To develop a deep learning model to automatically segment hepatocystic anatomy and assess the criteria defining the critical view of safety (CVS) in laparoscopic cholecystectomy (LC). BACKGROUND: Poor implementation and subjective interpretation of CVS contribute to the stable rates of bile duct injuries in LC. As CVS is assessed visually, this task can be automated by using computer vision, an area of artificial intelligence aimed at interpreting images. METHODS: Still images from LC videos were annotated with CVS criteria and hepatocystic anatomy segmentation. A deep neural network comprising a segmentation model to highlight hepatocystic anatomy and a classification model to predict CVS criteria achievement was trained and tested using 5-fold cross-validation. Intersection over union, average precision, and balanced accuracy were computed to evaluate the model performance versus the annotated ground truth. RESULTS: A total of 2854 images from 201 LC videos were annotated and 402 images were further segmented. Mean intersection over union for segmentation was 66.6%. The model assessed the achievement of CVS criteria with a mean average precision and balanced accuracy of 71.9% and 71.4%, respectively. CONCLUSIONS: Deep learning algorithms can be trained to reliably segment hepatocystic anatomy and assess CVS criteria in still laparoscopic images. Surgical-technical partnerships should be encouraged to develop and evaluate deep learning models to improve surgical safety.


Subjects
Bile Duct Diseases, Laparoscopic Cholecystectomy, Deep Learning, Artificial Intelligence, Laparoscopic Cholecystectomy/methods, Humans, Video Recording
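A minimal sketch of the segmentation metric named in the abstract, intersection over union between a predicted and an annotated binary mask. The masks are tiny toy arrays, not laparoscopic images.

```python
# Sketch: IoU between a predicted and a ground-truth binary segmentation mask.
import numpy as np

def mask_iou(pred, target):
    """IoU of two boolean masks of the same shape."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0  # empty vs empty counts as perfect

pred = np.array([[0, 1, 1],
                 [0, 1, 1],
                 [0, 0, 0]], dtype=bool)
target = np.array([[0, 1, 1],
                   [0, 0, 1],
                   [0, 0, 0]], dtype=bool)

print(f"IoU: {mask_iou(pred, target):.3f}")  # 3 / 4 = 0.750
```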
10.
Ann Surg; 274(1): e93-e95, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-33417329

ABSTRACT

OBJECTIVE: The aim of this study was to develop a computer vision platform to automatically locate critical events in surgical videos and provide short video clips documenting the critical view of safety (CVS) in laparoscopic cholecystectomy (LC). BACKGROUND: Intraoperative events are typically documented through operator-dictated reports that do not always reflect the operative reality. Surgical videos provide complete information on surgical procedures, but the burden associated with storing and manually analyzing full-length videos has so far limited their effective use. METHODS: A computer vision platform named EndoDigest was developed and used to analyze LC videos. The mean absolute error (MAE) of the platform in automatically locating the manually annotated time of the cystic duct division in full-length videos was assessed. The relevance of the automatically extracted short video clips was evaluated by calculating the percentage of video clips in which the CVS was assessable by surgeons. RESULTS: A total of 155 LC videos were analyzed: 55 of these videos were used to develop EndoDigest, whereas the remaining 100 were used to test it. The time of the cystic duct division was automatically located with a MAE of 62.8 ± 130.4 seconds (1.95% of full-length video duration). CVS was assessable in 91% of the 2.5-minute-long video clips automatically extracted from the considered test procedures. CONCLUSIONS: Deep learning models for workflow analysis can be used to reliably locate critical events in surgical videos and document CVS in LC. Further studies are needed to assess the clinical impact of surgical data science solutions for safer laparoscopic cholecystectomy.


Subjects
Laparoscopic Cholecystectomy/standards, Documentation/methods, Computer-Assisted Image Processing/methods, Patient Safety/standards, Quality Improvement, Video Recording, Algorithms, Clinical Competence, Deep Learning, Humans, Workflow
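A sketch of how a predicted cystic duct division time could be turned into a short documentation clip covering the 2 minutes before and 30 seconds after the event, clamped to the video bounds. The ffmpeg call and file names are illustrative assumptions, not EndoDigest's actual implementation.

```python
# Sketch: extracting a 2.5-minute documentation clip around a predicted event.
import subprocess

def clip_window(event_s, video_duration_s, before_s=120.0, after_s=30.0):
    """Return (start, duration) of the documentation clip in seconds."""
    start = max(0.0, event_s - before_s)
    end = min(video_duration_s, event_s + after_s)
    return start, end - start

def extract_clip(video_path, out_path, event_s, video_duration_s):
    """Cut the clip with ffmpeg without re-encoding (illustrative usage)."""
    start, duration = clip_window(event_s, video_duration_s)
    subprocess.run([
        "ffmpeg", "-ss", str(start), "-i", video_path,
        "-t", str(duration), "-c", "copy", out_path,
    ], check=True)

# Example: predicted division at 32 min in a 55-min recording.
print(clip_window(event_s=32 * 60, video_duration_s=55 * 60))  # (1800.0, 150.0)
```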
11.
Diagnostics (Basel); 12(1), 2021 Dec 29.
Article in English | MEDLINE | ID: mdl-35054236

ABSTRACT

BACKGROUND: Diagnosing cardiac amyloidosis (CA) from cine-CMR (cardiac magnetic resonance) alone is not reliable. In this study, we tested whether a convolutional neural network (CNN) could outperform the visual diagnosis of experienced operators. METHOD: 119 patients with cardiac amyloidosis and 122 patients with left ventricular hypertrophy (LVH) of other origins were retrospectively selected. Diastolic and systolic cine-CMR images were preprocessed and labeled. A dual-input visual geometry group (VGG) model was used for binary image classification. All images belonging to the same patient were distributed in the same set. Accuracy and area under the curve (AUC) were calculated per frame and per patient from a 40% held-out test set. Results were compared to a visual analysis assessed by three experienced operators. RESULTS: Frame-based comparisons between humans and the CNN provided an accuracy of 0.605 vs. 0.746 (p < 0.0008) and an AUC of 0.630 vs. 0.824 (p < 0.0001). Patient-based comparisons provided an accuracy of 0.660 vs. 0.825 (p < 0.008) and an AUC of 0.727 vs. 0.895 (p < 0.002). CONCLUSION: Based on cine-CMR images alone, a CNN is able to discriminate cardiac amyloidosis from LVH of other origins better than experienced human operators (15 to 20 points higher in absolute value for accuracy and AUC), demonstrating a unique capability to identify what the eyes cannot see through classical radiological analysis.
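A small sketch of the frame-level versus patient-level comparison described above: per-frame probabilities are averaged within each patient before scoring accuracy and AUC. The data are toy values, not the study's test set.

```python
# Sketch: aggregating frame-level predictions to patient-level scores.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

patient = np.array([0, 0, 0, 1, 1, 2, 2, 2, 3, 3])
y_frame = np.array([1, 1, 1, 0, 0, 1, 1, 1, 0, 0])        # 1 = CA, 0 = other LVH
p_frame = np.array([0.7, 0.4, 0.8, 0.3, 0.6, 0.9, 0.55, 0.8, 0.2, 0.35])

print("frame accuracy:", accuracy_score(y_frame, p_frame >= 0.5))
print("frame AUC:", roc_auc_score(y_frame, p_frame))

# Aggregate to patient level by averaging frame probabilities.
pids = np.unique(patient)
p_patient = np.array([p_frame[patient == pid].mean() for pid in pids])
y_patient = np.array([y_frame[patient == pid][0] for pid in pids])

print("patient accuracy:", accuracy_score(y_patient, p_patient >= 0.5))
print("patient AUC:", roc_auc_score(y_patient, p_patient))
```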
