Results 1-14 of 14
1.
Surg Endosc; 37(11): 8690-8707, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37516693

ABSTRACT

BACKGROUND: Surgery generates a vast amount of data from each procedure. Video data in particular provides significant value for surgical research, clinical outcome assessment, quality control, and education. The data lifecycle is influenced by various factors, including data structure; acquisition, storage, and sharing; data use and exploration; and finally data governance, which encompasses all ethical and legal regulations associated with the data. There is a universal need among stakeholders in surgical data science to establish standardized frameworks that address all aspects of this lifecycle and ensure data quality and purpose. METHODS: Working groups were formed from 48 representatives of academia and industry, including clinicians, computer scientists, and industry representatives. These working groups focused on data use, data structure, data exploration, and data governance. After working group and panel discussions, a modified Delphi process was conducted. RESULTS: The resulting Delphi consensus provides conceptualized and structured recommendations for each domain related to surgical video data. We identified the key stakeholders within the data lifecycle and formulated comprehensive, easily understandable, and widely applicable guidelines for data utilization. Standardization of data structure should encompass format and quality, data sources, documentation, and metadata, and should account for biases within the data. To foster scientific data exploration, datasets should reflect diversity and remain adaptable to future applications. Data governance must be transparent to all stakeholders, addressing legal and ethical considerations surrounding the data. CONCLUSION: This consensus presents essential recommendations for generating standardized and diverse surgical video databanks, accounting for the multiple stakeholders involved in data generation and use throughout the data lifecycle.
Following the SAGES annotation framework, we lay the foundation for standardization of data use, structure, and exploration. A detailed exploration of requirements for adequate data governance will follow.
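The data-structure themes above (format and quality, sources, documentation, metadata, bias) can be pictured as a minimal metadata record. The sketch below is purely illustrative: the class and field names are invented for this example and are not part of the consensus recommendations.

```python
import json
from dataclasses import dataclass, asdict, field

# Hypothetical metadata record reflecting the standardization themes:
# format/quality, data source, documentation, and known biases.
@dataclass
class SurgicalVideoRecord:
    video_id: str
    procedure: str
    resolution: str          # e.g. "1920x1080"
    frame_rate_fps: float
    source_institution: str
    deidentified: bool
    annotation_schema: str   # which labeling framework was used
    known_biases: list = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), sort_keys=True)

record = SurgicalVideoRecord(
    video_id="vid-0001",
    procedure="laparoscopic cholecystectomy",
    resolution="1920x1080",
    frame_rate_fps=30.0,
    source_institution="Hospital A",
    deidentified=True,
    annotation_schema="SAGES-phase-v1",
    known_biases=["single-institution case mix"],
)
restored = json.loads(record.to_json())
print(restored["procedure"])
```

A schema like this makes quality checks and cross-institution pooling mechanical: any record missing required fields can be rejected before it enters a shared databank.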


Subjects
Artificial Intelligence, Quality Improvement, Humans, Consensus, Data Collection
2.
Surg Endosc; 36(9): 6832-6840, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35031869

ABSTRACT

BACKGROUND: Operative courses of laparoscopic cholecystectomies vary widely due to differing pathologies. Efforts to assess intra-operative difficulty include the Parkland grading scale (PGS), which scores inflammation from the initial view of the gallbladder on a 1-5 scale. We investigated the impact of PGS on intra-operative outcomes, including laparoscopic duration, attainment of the critical view of safety (CVS), and gallbladder injury. We additionally trained an artificial intelligence (AI) model to identify PGS. METHODS: One surgeon labeled surgical phases, PGS, CVS attainment, and gallbladder injury in 200 cholecystectomy videos. We used multilevel Bayesian regression models to analyze the effect of PGS on intra-operative outcomes. We trained AI models to identify PGS from an initial view of the gallbladder and compared model performance to annotations by a second surgeon. RESULTS: Slightly inflamed gallbladders (PGS-2) minimally increased duration, adding 2.7 [95% compatibility interval (CI) 0.3-7.0] minutes to an operation. This contrasted with maximally inflamed gallbladders (PGS-5), where on average 16.9 (95% CI 4.4-33.9) minutes were added, with 31.3 (95% CI 8.0-67.5) minutes added for the most affected surgeon. Inadvertent gallbladder injury occurred in 25% of cases, with a minimal increase in gallbladder injury observed with added inflammation. However, up to a 28% (95% CI -2 to 63) increase in the probability of a gallbladder hole during PGS-5 cases was observed for some surgeons. Inflammation had no substantial effect on whether or not a surgeon attained the CVS. An AI model could reliably (Krippendorff's α = 0.71, 95% CI 0.65-0.77) quantify inflammation when compared to a second surgeon (α = 0.82, 95% CI 0.75-0.87). CONCLUSIONS: An AI model can identify the degree of gallbladder inflammation, which is predictive of the intra-operative course of cholecystectomy.
This automated assessment could be useful for operating room workflow optimization and for targeted per-surgeon and per-resident feedback to accelerate acquisition of operative skills.
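The agreement statistic reported above, Krippendorff's alpha, can be computed for two raters with complete data as one minus the ratio of observed to expected disagreement. The sketch below uses the interval metric (squared differences), which is one reasonable choice for ordinal scores like the 1-5 PGS; it is an illustration, not the study's exact computation.

```python
# Minimal Krippendorff's alpha for two raters, complete data, interval metric.
def krippendorff_alpha_interval(rater_a, rater_b):
    n = len(rater_a)
    pooled = list(rater_a) + list(rater_b)
    N = len(pooled)
    # Observed disagreement: mean squared difference within each rated unit.
    d_o = sum((a - b) ** 2 for a, b in zip(rater_a, rater_b)) / n
    # Expected disagreement: mean squared difference over all ordered pairs
    # of pooled values (i != j), i.e. disagreement expected by chance.
    d_e = sum((x - y) ** 2 for i, x in enumerate(pooled)
              for j, y in enumerate(pooled) if i != j) / (N * (N - 1))
    return 1.0 - d_o / d_e

# Perfect agreement yields alpha = 1.
print(krippendorff_alpha_interval([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))  # → 1.0
```

Values near the 0.71-0.82 range reported in the abstract are conventionally read as acceptable-to-good reliability for this kind of ordinal annotation.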


Assuntos
Colecistectomia Laparoscópica , Colecistite , Doenças da Vesícula Biliar , Inteligência Artificial , Teorema de Bayes , Colecistectomia , Colecistectomia Laparoscópica/efeitos adversos , Colecistite/cirurgia , Vesícula Biliar/patologia , Vesícula Biliar/cirurgia , Doenças da Vesícula Biliar/patologia , Doenças da Vesícula Biliar/cirurgia , Humanos , Inflamação/etiologia , Inflamação/patologia
3.
Surg Endosc; 35(7): 4008-4015, 2021 Jul.
Article in English | MEDLINE | ID: mdl-32720177

ABSTRACT

BACKGROUND: Artificial intelligence (AI) and computer vision (CV) have revolutionized image analysis. In surgery, CV applications have focused on surgical phase identification in laparoscopic videos. We proposed to apply CV techniques to identify phases in an endoscopic procedure, peroral endoscopic myotomy (POEM). METHODS: POEM videos were collected from Massachusetts General and Showa University Koto Toyosu Hospitals. Videos were labeled by surgeons with the following ground truth phases: (1) Submucosal injection, (2) Mucosotomy, (3) Submucosal tunnel, (4) Myotomy, and (5) Mucosotomy closure. The deep-learning CV model, a Convolutional Neural Network (CNN) combined with a Long Short-Term Memory (LSTM) network, was trained on 30 videos to create POEMNet. We then used POEMNet to identify operative phases in the remaining 20 videos. The model's performance was compared to surgeon-annotated ground truth. RESULTS: POEMNet's overall phase identification accuracy was 87.6% (95% CI 87.4-87.9%). When evaluated on a per-phase basis, the model performed well, with mean unweighted and prevalence-weighted F1 scores of 0.766 and 0.875, respectively. The model performed best with longer phases: 70.6% accuracy for phases with a duration under 5 min versus 88.3% accuracy for longer phases. DISCUSSION: A deep-learning-based approach to CV, previously successful in laparoscopic video phase identification, translates well to endoscopic procedures. With continued refinements, AI could contribute to intra-operative decision-support systems and post-operative risk prediction.
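The two summary metrics reported for POEMNet differ only in how per-phase F1 scores are averaged: the unweighted mean treats every phase equally, while the prevalence-weighted mean weights each phase by its share of ground-truth frames, so long phases dominate. The sketch below illustrates the distinction with made-up numbers, not data from the study.

```python
# Unweighted vs. prevalence-weighted mean of per-class F1 scores.
def mean_f1(per_phase_f1, per_phase_support, weighted):
    if not weighted:
        return sum(per_phase_f1) / len(per_phase_f1)
    total = sum(per_phase_support)
    return sum(f * s for f, s in zip(per_phase_f1, per_phase_support)) / total

f1 = [0.90, 0.85, 0.60]      # hypothetical per-phase F1 scores
support = [500, 300, 50]     # hypothetical ground-truth frame counts per phase
print(round(mean_f1(f1, support, weighted=False), 3))  # unweighted mean
print(round(mean_f1(f1, support, weighted=True), 3))   # prevalence-weighted mean
```

Because rare phases with low F1 pull the unweighted mean down but barely move the weighted one, the gap between 0.766 and 0.875 in the abstract is consistent with the model struggling mainly on short, infrequent phases.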


Subjects
Esophageal Achalasia, Laparoscopy, Myotomy, Natural Orifice Endoscopic Surgery, Artificial Intelligence, Esophageal Achalasia/surgery, Humans, Neural Networks (Computer)
4.
Surg Endosc; 35(9): 4918-4929, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34231065

ABSTRACT

BACKGROUND: The growing interest in analysis of surgical video through machine learning has led to increased research efforts; however, common methods of annotating video data are lacking. There is a need to establish recommendations on the annotation of surgical video data to enable assessment of algorithms and multi-institutional collaboration. METHODS: Four working groups were formed from a pool of participants that included clinicians, engineers, and data scientists. The working groups were focused on four themes: (1) temporal models, (2) actions and tasks, (3) tissue characteristics and general anatomy, and (4) software and data structure. A modified Delphi process was utilized to create a consensus survey based on suggested recommendations from each of the working groups. RESULTS: After three Delphi rounds, consensus was reached on recommendations for annotation within each of these domains. A hierarchy for annotation of temporal events in surgery was established. CONCLUSIONS: While additional work remains to achieve accepted standards for video annotation in surgery, the consensus recommendations on a general framework for annotation presented here lay the foundation for standardization. This type of framework is critical to enabling diverse datasets, performance benchmarks, and collaboration.
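A hierarchy for temporal events, as established by the consensus, naturally maps to nested time intervals (for example, phases containing steps). The sketch below is a hypothetical data structure in that spirit; the class name, field names, and labels are invented for illustration and are not the published schema.

```python
from dataclasses import dataclass, field

# A labeled time interval that can contain child intervals, giving a
# phase → step hierarchy for temporal annotation of surgical video.
@dataclass
class Segment:
    label: str
    start_s: float
    end_s: float
    children: list = field(default_factory=list)

    def duration(self) -> float:
        return self.end_s - self.start_s

phase = Segment("dissection", 120.0, 600.0, children=[
    Segment("expose-structure", 120.0, 300.0),
    Segment("divide-tissue", 300.0, 600.0),
])

# Consistency check: every child interval must lie within its parent.
assert all(c.start_s >= phase.start_s and c.end_s <= phase.end_s
           for c in phase.children)
print(phase.duration())
```

Encoding annotations this way makes cross-institution comparison mechanical: two datasets annotated against the same hierarchy can be aligned segment by segment.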


Assuntos
Aprendizado de Máquina , Consenso , Técnica Delphi , Humanos , Inquéritos e Questionários
5.
Anesthesiology; 132(2): 379-394, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31939856

ABSTRACT

Artificial intelligence has been advancing in many fields, including anesthesiology. This scoping review of the intersection of artificial intelligence and anesthesia research identified and summarized six themes of application of artificial intelligence in anesthesiology: (1) depth-of-anesthesia monitoring, (2) control of anesthesia, (3) event and risk prediction, (4) ultrasound guidance, (5) pain management, and (6) operating room logistics. Based on papers identified in the review, several topics within artificial intelligence were described and summarized: (1) machine learning (including supervised, unsupervised, and reinforcement learning), (2) techniques in artificial intelligence (e.g., classical machine learning, neural networks and deep learning, Bayesian methods), and (3) major applied fields in artificial intelligence. The implications of artificial intelligence for the practicing anesthesiologist are discussed, as are its limitations and the role of clinicians in further developing artificial intelligence for use in clinical care. Artificial intelligence has the potential to impact the practice of anesthesiology in aspects ranging from perioperative support to critical care delivery to outpatient pain management.


Assuntos
Anestesiologia/métodos , Inteligência Artificial , Monitorização Intraoperatória/métodos , Anestesiologia/tendências , Inteligência Artificial/tendências , Aprendizado Profundo/tendências , Humanos , Aprendizado de Máquina/tendências , Monitorização Intraoperatória/tendências , Redes Neurais de Computação
6.
Ann Surg; 270(3): 414-421, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31274652

ABSTRACT

OBJECTIVE(S): To develop and assess AI algorithms to identify operative steps in laparoscopic sleeve gastrectomy (LSG). BACKGROUND: Computer vision, a form of artificial intelligence (AI), allows for quantitative analysis of video by computers for identification of objects and patterns, as in autonomous driving. METHODS: Intraoperative video of LSG from an academic institution was annotated by 2 fellowship-trained, board-certified bariatric surgeons. Videos were segmented into the following steps: 1) port placement, 2) liver retraction, 3) liver biopsy, 4) gastrocolic ligament dissection, 5) stapling of the stomach, 6) bagging specimen, and 7) final inspection of staple line. Deep neural networks were used to analyze the videos. Accuracy of operative step identification by the AI was determined by comparison to the surgeon annotations. RESULTS: Eighty-eight cases of LSG were analyzed. A random 70% sample of these clips was used to train the AI and 30% to test the AI's performance. The mean concordance correlation coefficient for human annotators was 0.862, suggesting excellent agreement. Mean (± SD) accuracy of the AI in identifying operative steps in the test set was 82% ± 4%, with a maximum of 85.6%. CONCLUSIONS: AI can extract quantitative surgical data from video with 85.6% accuracy. This suggests operative video could be used as a quantitative data source for research in intraoperative clinical decision support, risk prediction, or outcomes studies.
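The annotator-agreement statistic used above, the concordance correlation coefficient (Lin's CCC), combines correlation with agreement in location and scale: it equals 1 only when the two annotators' values coincide exactly. The following is a minimal sketch of the standard formula, not the study's exact computation pipeline.

```python
# Lin's concordance correlation coefficient between two paired series,
# e.g. two annotators' step-boundary timestamps for the same videos.
def ccc(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # Penalizes both low correlation and shifts in mean or scale.
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Identical annotations give perfect concordance.
print(ccc([10.0, 20.0, 30.0], [10.0, 20.0, 30.0]))  # → 1.0
```

Unlike plain Pearson correlation, CCC drops below 1 if one annotator is systematically offset from the other, which is why it suits agreement assessment between raters.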


Assuntos
Inteligência Artificial , Gastrectomia/métodos , Laparoscopia/métodos , Gravação em Vídeo/estatística & dados numéricos , Cirurgia Vídeoassistida/métodos , Centros Médicos Acadêmicos , Adulto , Automação , Bases de Dados Factuais , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Monitorização Intraoperatória/métodos , Variações Dependentes do Observador , Duração da Cirurgia , Estudos Retrospectivos , Sensibilidade e Especificidade
7.
Ann Surg; 268(1): 70-76, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29389679

ABSTRACT

OBJECTIVE: The aim of this review was to summarize major topics in artificial intelligence (AI), including their applications and limitations in surgery. This paper reviews the key capabilities of AI to help surgeons understand and critically evaluate new AI applications and to contribute to new developments. SUMMARY BACKGROUND DATA: AI is composed of various subfields that each provide potential solutions to clinical problems. Each of the core subfields of AI reviewed in this piece has also been used in other industries such as the autonomous car, social networks, and deep learning computers. METHODS: A review of AI papers across computer science, statistics, and medical sources was conducted to identify key concepts and techniques within AI that are driving innovation across industries, including surgery. Limitations and challenges of working with AI were also reviewed. RESULTS: Four main subfields of AI were defined: (1) machine learning, (2) artificial neural networks, (3) natural language processing, and (4) computer vision. Their current and future applications to surgical practice were introduced, including big data analytics and clinical decision support systems. The implications of AI for surgeons and the role of surgeons in advancing the technology to optimize clinical effectiveness were discussed. CONCLUSIONS: Surgeons are well positioned to help integrate AI into modern practice. Surgeons should partner with data scientists to capture data across phases of care and to provide clinical context, for AI has the potential to revolutionize the way surgery is taught and practiced with the promise of a future optimized for the highest quality patient care.


Assuntos
Inteligência Artificial , Procedimentos Cirúrgicos Operatórios/métodos , Humanos , Papel do Médico , Cirurgiões
9.
IEEE Trans Med Imaging; 43(1): 264-274, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37498757

ABSTRACT

Analysis of relations between objects and comprehension of abstract concepts in surgical video is important in AI-augmented surgery. However, building models that integrate our knowledge and understanding of surgery remains a challenging endeavor. In this paper, we propose a novel way to integrate conceptual knowledge into temporal analysis tasks using temporal concept graph networks. In the proposed networks, a knowledge graph is incorporated into the temporal video analysis of surgical notions, learning the meaning of concepts and relations as they apply to the data. We demonstrate results on surgical video data for tasks such as verification of the critical view of safety, estimation of the Parkland grading scale, and recognition of instrument-action-tissue triplets. The results show that our method improves recognition and detection performance on complex benchmarks and enables other analytic applications of interest.
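One of the recognition targets mentioned, instrument-action-tissue triplets, is essentially a small knowledge graph of subject-predicate-object relations. The toy sketch below shows the representation and a simple query; the triplets are invented for illustration and are not from the paper's benchmark.

```python
# Hypothetical surgical triplets stored as (instrument, action, tissue).
triplets = {
    ("grasper", "retract", "gallbladder"),
    ("hook", "dissect", "cystic-duct"),
    ("clipper", "clip", "cystic-artery"),
}

def actions_of(instrument):
    """All (action, tissue) pairs in which a given instrument appears."""
    return sorted((a, t) for i, a, t in triplets if i == instrument)

print(actions_of("grasper"))
```

A graph view like this is what lets a model reason about relations (which actions an instrument plausibly performs on which tissues) rather than classifying each element in isolation.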


Assuntos
Redes Neurais de Computação , Procedimentos Cirúrgicos Operatórios , Gravação em Vídeo
10.
Ann Surg; 268(6): e47-e48, 2018 Dec.
Article in English | MEDLINE | ID: mdl-28837447

Subjects
Big Data
11.
Comput Assist Surg (Abingdon); 26(1): 58-68, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34126014

ABSTRACT

Annotation of surgical video is important for establishing ground truth in surgical data science endeavors that involve computer vision. With the growth of the field over the last decade, several challenges have been identified in annotating spatial, temporal, and clinical elements of surgical video as well as challenges in selecting annotators. In reviewing current challenges, we provide suggestions on opportunities for improvement and possible next steps to enable translation of surgical data science efforts in surgical video analysis to clinical research and practice.

12.
Surgery; 169(5): 1253-1256, 2021 May.
Article in English | MEDLINE | ID: mdl-33272610

ABSTRACT

The fields of computer vision (CV) and artificial intelligence (AI) have undergone rapid advancements in the past decade, many of which have been applied to the analysis of intraoperative video. These advances are driven by the widespread application of deep learning, which leverages multiple layers of neural networks to teach computers complex tasks. Prior to these advances, applications of AI in the operating room were limited by our relative inability to train computers to accurately understand images with traditional machine learning (ML) techniques. The development and refinement of deep neural networks that can accurately identify objects in images and remember past surgical events has sparked a surge in applications of CV to intraoperative video analysis and has allowed for the accurate identification of surgical phases (steps) and instruments across a variety of procedures. In some cases, CV can even identify operative phases with accuracy similar to that of surgeons. Future research will likely expand on this foundation of surgical knowledge, using larger video datasets and improved algorithms with greater accuracy and interpretability to create clinically useful AI models that gain widespread adoption and augment the surgeon's ability to provide safer care for patients everywhere.


Assuntos
Inteligência Artificial , Cirurgia Geral
13.
IEEE Trans Pattern Anal Mach Intell; 40(1): 235-249, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28166490

ABSTRACT

Objects and structures within man-made environments typically exhibit a high degree of organization in the form of orthogonal and parallel planes. Traditional approaches exploit these regularities via the restrictive, and rather local, Manhattan World (MW) assumption, which posits that every plane is perpendicular to one of the axes of a single coordinate system. These regularities are especially evident in the surface normal distribution of a scene, where they manifest as orthogonally coupled clusters. This motivates the introduction of the Manhattan-Frame (MF) model, which captures the notion of an MW in the space of surface normals (the unit sphere), together with two probabilistic MF models over this space. First, for a single MF we propose novel real-time MAP inference algorithms, evaluate their performance, and demonstrate their use in drift-free rotation estimation. Second, to capture the complexity of real-world scenes at a global scale, we extend the MF model to a probabilistic mixture of Manhattan Frames (MMF). For MMF inference we propose a simple MAP inference algorithm and an adaptive Markov chain Monte Carlo sampling algorithm with Metropolis-Hastings split/merge moves that lets us infer the unknown number of mixture components. We demonstrate the versatility of the MMF model and inference algorithm across several scales of man-made environments.
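The core intuition behind the MW assumption is that surface normals cluster around the six signed axes of one orthogonal frame. The toy sketch below shows only that underlying assignment step, hard-assigning each normal to its closest signed axis of the identity frame; the actual MF models perform probabilistic MAP inference on the unit sphere rather than this hard assignment.

```python
# Assign a surface normal to the nearest signed axis of a single
# orthogonal frame (here, the identity frame) by maximum dot product.
def nearest_axis(normal):
    axes = {
        "+x": (1, 0, 0), "-x": (-1, 0, 0),
        "+y": (0, 1, 0), "-y": (0, -1, 0),
        "+z": (0, 0, 1), "-z": (0, 0, -1),
    }
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return max(axes, key=lambda k: dot(axes[k], normal))

print(nearest_axis((0.9, 0.1, 0.0)))    # → +x (roughly a wall facing +x)
print(nearest_axis((0.0, -0.2, -0.8)))  # → -z (roughly a downward-facing surface)
```

The mixture extension (MMF) generalizes this by allowing several such frames in one scene, each with its own rotation, and inferring how many are needed.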

14.
IEEE Trans Pattern Anal Mach Intell; 37(8): 1585-1601, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26352997

ABSTRACT

Segmenting an image into an arbitrary number of coherent regions is at the core of image understanding. Many formulations of the segmentation problem have been suggested over the years. These include, among others, axiomatic functionals, which are hard to implement and analyze, and graph-based alternatives, which impose a non-geometric metric on the problem. We propose a novel method for segmenting an image into an arbitrary number of regions using an axiomatic variational approach. The proposed method allows the incorporation of various generic region appearance models while avoiding metrication errors. In the suggested framework, segmentation is performed by level set evolution. Yet, contrary to most existing methods, multiple regions are here represented by a single non-negative level set function. The level set function's evolution is efficiently executed through the Voronoi Implicit Interface Method for multi-phase interface evolution. The proposed approach is shown to obtain accurate segmentation results for various natural 2D and 3D images, comparable to state-of-the-art image segmentation algorithms.
