Results 1 - 10 of 10
1.
J Med Internet Res; 26: e54375, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38787601

ABSTRACT

BACKGROUND: With the development of emerging technologies, digital behavior change interventions (DBCIs) help to maintain regular physical activity in daily life. OBJECTIVE: To comprehensively understand how habit formation techniques are implemented in current DBCIs, a systematic review was conducted to investigate the implementations of behavior change techniques, the types of habit formation techniques, and the design strategies in current DBCIs. METHODS: The review followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Four databases (Web of Science, Scopus, ACM Digital Library, and PubMed) were systematically searched for studies published from 2012 to 2022. The inclusion criteria encompassed studies that used digital tools for physical activity, examined behavior change intervention techniques, and were written in English. RESULTS: A total of 41 research articles were included in this review. The most frequently applied behavior change techniques were self-monitoring of behavior, goal setting, and prompts and cues. Moreover, habit formation techniques were identified and developed based on intentions, cues, and positive reinforcement. Commonly used methods included automatic monitoring, descriptive feedback, general guidelines, self-set goals, time-based cues, and virtual rewards. CONCLUSIONS: A total of 32 common design strategies for habit formation techniques were summarized and mapped to the proposed conceptual framework, which is categorized into target-mediated (generalization and personalization) and technology-mediated (explicitness and implicitness) interactions. Most of the existing studies use explicit interaction, aligned with personalized habit formation techniques in the design strategies of DBCIs. However, implicit interaction design strategies are lacking in the reviewed studies. The proposed conceptual framework and potential solutions can serve as guidelines for designing habit formation strategies within DBCIs.


Subjects
Habits, Humans, Behavior Therapy/methods, Exercise, Health Behavior
2.
Int J Comput Assist Radiol Surg; 19(5): 821-829, 2024 May.
Article in English | MEDLINE | ID: mdl-38658450

ABSTRACT

PURPOSE: The healthcare industry has a growing need for realistic modeling and efficient simulation of surgical scenes. With effective models of deformable surgical scenes, clinicians can conduct surgical planning and surgical training on scenarios close to real-world cases. However, a significant challenge in achieving this goal is the scarcity of high-quality soft tissue models with accurate shapes and textures. To address this gap, we present a data-driven framework that leverages emerging neural radiance field (NeRF) technology to enable high-quality surgical reconstruction, and we explore its application to surgical simulations. METHOD: We first develop a fast NeRF-based approach for 3D reconstruction of surgical scenes that achieves state-of-the-art performance. It significantly outperforms traditional 3D reconstruction methods, which fail to capture large deformations or produce fine-grained shapes and textures. We then propose an automated pipeline for creating interactive surgical simulation environments through a closed mesh extraction algorithm. RESULTS: Our experiments validate the superior performance and efficiency of the proposed approach for 3D reconstruction of surgical scenes. We further use the reconstructed soft tissues to conduct finite element method (FEM) and material point method (MPM) simulations, showcasing the practical application of our method in data-driven surgical simulations. CONCLUSION: We have proposed a novel NeRF-based reconstruction framework with an emphasis on simulation purposes. The framework facilitates the efficient creation of high-quality 3D models of surgical soft tissue. With multiple soft tissue simulations demonstrated, we show that our work has the potential to benefit downstream clinical tasks, such as surgical education.
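
The abstract does not spell out the paper's rendering equations; as a minimal sketch of the volume rendering at the core of NeRF-style reconstruction (function and variable names are illustrative assumptions, in Python):

    import numpy as np

    def render_ray(densities, colors, deltas):
        """Composite per-sample densities/colors along one ray, as in classic
        NeRF volume rendering (a sketch, not this paper's exact pipeline).
        densities: (N,), colors: (N, 3), deltas: (N,) sample spacings."""
        alphas = 1.0 - np.exp(-densities * deltas)                       # per-segment opacity
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # transmittance to each sample
        weights = alphas * trans                                         # contribution of each sample
        rgb = (weights[:, None] * colors).sum(axis=0)                    # expected color of the ray
        depth = (weights * np.cumsum(deltas)).sum()                      # approximate expected depth
        return rgb, depth, weights

The per-sample weights are what make such a representation useful for simulation: they localize where along each ray the soft tissue surface sits, which is a natural input to mesh extraction.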


Subjects
Computer Simulation, Three-Dimensional Imaging, Humans, Three-Dimensional Imaging/methods, Algorithms, Computer-Assisted Surgery/methods
3.
IEEE Trans Biomed Eng; 70(2): 488-500, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35905063

ABSTRACT

OBJECTIVE: The computation of anatomical information and laparoscope position is a fundamental building block of surgical navigation in minimally invasive surgery (MIS). Recovering a dense 3D structure of the surgical scene from visual cues remains a challenge, and online laparoscope tracking primarily relies on external sensors, which increases system complexity. METHODS: Here, we propose a learning-driven framework that provides image-guided laparoscope localization together with 3D reconstructions of anatomical structures. To reconstruct the whole surgical environment, we first fine-tune a learning-based stereoscopic depth perception method, which is robust to texture-less and varying soft tissues, for depth estimation. We then develop a dense reconstruction algorithm that represents the scene by surfels, estimates the laparoscope poses, and fuses the depth maps into a unified reference coordinate frame for tissue reconstruction. To estimate the poses of new laparoscope views, we devise a coarse-to-fine localization method that incorporates our reconstructed model. RESULTS: We evaluate the reconstruction method and the localization module on three datasets: the stereo correspondence and reconstruction of endoscopic data (SCARED) dataset, ex-vivo data collected with a Universal Robot (UR) and a Karl Storz laparoscope, and the in-vivo DaVinci robotic surgery dataset. The reconstructed structures have rich surface-texture detail with an error under 1.71 mm, and the localization module can accurately track the laparoscope with images alone as input. CONCLUSIONS: Experimental results demonstrate the superior performance of the proposed method in anatomy reconstruction and laparoscope localization. SIGNIFICANCE: The proposed framework can potentially be extended to current surgical navigation systems.
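
The fusion step is not detailed in the abstract; a hedged sketch of one standard ingredient, back-projecting an estimated depth map into a unified reference frame given a laparoscope pose (intrinsics convention and names are assumptions):

    import numpy as np

    def backproject_to_world(depth, K, T_world_cam):
        """Lift a depth map (H, W) to 3D points in a shared reference frame.
        K: 3x3 camera intrinsics; T_world_cam: 4x4 estimated laparoscope pose."""
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
        rays = pix @ np.linalg.inv(K).T                  # camera-frame ray directions
        pts_cam = rays * depth.reshape(-1, 1)            # scale each ray by its depth
        pts_h = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
        return (pts_h @ T_world_cam.T)[:, :3]            # points in the unified frame

Fusing many such point sets across frames into surfels is what yields a single consistent tissue model that new views can then be localized against.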


Subjects
Laparoscopy, Computer-Assisted Surgery, Laparoscopes, Three-Dimensional Imaging/methods, Minimally Invasive Surgical Procedures/methods, Laparoscopy/methods, Algorithms, Computer-Assisted Surgery/methods
4.
Nat Commun; 14(1): 6676, 2023 Oct 21.
Article in English | MEDLINE | ID: mdl-37865629

ABSTRACT

Recent advancements in artificial intelligence have achieved human-level performance on some tasks; however, AI-enabled cognitive assistance for therapeutic procedures has been neither fully explored nor pre-clinically validated. Here we propose AI-Endo, an intelligent surgical workflow recognition suite for endoscopic submucosal dissection (ESD). AI-Endo is trained on high-quality ESD cases from an expert endoscopist, spanning a decade and comprising 201,026 labeled frames. The learned model demonstrates outstanding performance on validation data, including cases from relatively junior endoscopists with various skill levels, procedures conducted with different endoscopy systems and therapeutic techniques, and cohorts from multiple international centers. Furthermore, we integrate AI-Endo with the Olympus endoscopic system and validate the AI-enabled cognitive assistance system in animal studies during live ESD training sessions. Data analysis of the surgical phase recognition results is summarized in an automatically generated report for skill assessment.
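
The abstract does not describe the report's contents; one plausible building block is aggregating per-frame phase predictions into per-phase durations (a hedged sketch; the frame rate and phase labels are illustrative, not AI-Endo's actual report logic):

    from collections import Counter

    def phase_durations(frame_phases, fps=25):
        """Turn a sequence of per-frame phase predictions into per-phase
        durations in seconds -- the kind of summary a skill-assessment
        report could draw on (illustrative assumption)."""
        counts = Counter(frame_phases)
        return {phase: n / fps for phase, n in counts.items()}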


Subjects
Endometriosis, Endoscopic Mucosal Resection, Animals, Female, Humans, Endoscopic Mucosal Resection/education, Endoscopic Mucosal Resection/methods, Artificial Intelligence, Workflow, Endoscopy, Learning
5.
Comput Methods Programs Biomed; 236: 107561, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37119774

ABSTRACT

BACKGROUND AND OBJECTIVE: To be context-aware, computer-assisted surgical systems require accurate, real-time, automatic surgical workflow recognition. In the past several years, surgical video has been the most commonly used modality for surgical workflow recognition. But with the democratization of robot-assisted surgery, new modalities, such as kinematics, are now accessible. Some previous methods use these new modalities as input for their models, but their added value has rarely been studied. This paper presents the design and results of the "PEg TRAnsfer Workflow recognition" (PETRAW) challenge, whose objective was to develop surgical workflow recognition methods based on one or more modalities and to study their added value. METHODS: The PETRAW challenge included a data set of 150 peg transfer sequences performed on a virtual simulator. This data set included videos, kinematic data, semantic segmentation data, and annotations describing the workflow at three levels of granularity: phase, step, and activity. Five tasks were proposed to the participants: three were related to recognition at all granularities simultaneously using a single modality, and two addressed recognition using multiple modalities. The mean application-dependent balanced accuracy (AD-Accuracy) was used as the evaluation metric because it accounts for class imbalance and is more clinically relevant than a frame-by-frame score. RESULTS: Seven teams participated in at least one task, with four participating in every task. The best results were obtained by combining video and kinematic data (AD-Accuracy between 90% and 93% for the four teams that participated in all tasks). CONCLUSION: The improvement of surgical workflow recognition methods using multiple modalities compared with unimodal methods was significant for all teams. However, the longer execution time required for video/kinematic-based methods (compared to kinematic-only methods) must be considered. Indeed, one must ask whether it is wise to increase computing time by 2,000% to 20,000% only to increase accuracy by 3%. The PETRAW data set is publicly available at www.synapse.org/PETRAW to encourage further research in surgical workflow recognition.
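
The exact AD-Accuracy definition is challenge-specific; a hedged sketch of its core idea, balanced accuracy as the mean per-class recall over frame-wise labels (any challenge-specific weighting is omitted here):

    import numpy as np

    def balanced_accuracy(y_true, y_pred, num_classes):
        """Mean per-class recall over frame-wise workflow labels -- the core
        of a balanced accuracy metric; the challenge's AD-Accuracy adds
        application-dependent details not reproduced here."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        recalls = []
        for c in range(num_classes):
            mask = (y_true == c)
            if mask.any():                               # skip classes absent from ground truth
                recalls.append((y_pred[mask] == c).mean())
        return float(np.mean(recalls))

Averaging recall per class prevents long phases from dominating the score, which is why such a metric is more clinically meaningful than raw frame accuracy.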


Subjects
Algorithms, Robotic Surgical Procedures, Humans, Workflow, Robotic Surgical Procedures/methods
6.
Med Image Anal; 86: 102770, 2023 May.
Article in English | MEDLINE | ID: mdl-36889206

ABSTRACT

PURPOSE: Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of an operation through context-sensitive warnings and semi-autonomous robotic assistance, or improve the training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open single-center video dataset. In this work, we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill. METHODS: To achieve this goal, we created a dataset of 33 laparoscopic cholecystectomy videos from three surgical centers with a total operation time of 22 h. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5,514 occurrences of four surgical actions, 6,980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision challenge sub-challenge for surgical workflow and skill analysis, in which 12 research teams trained and submitted their machine learning algorithms for recognition of phase, action, and instrument and/or skill assessment. RESULTS: F1-scores ranged between 23.9% and 67.7% for phase recognition (n = 9 teams) and between 38.5% and 63.8% for instrument presence detection (n = 8 teams), but only between 21.8% and 23.3% for action recognition (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team). CONCLUSION: Surgical workflow and skill analysis are promising technologies to support the surgical team, but as our comparison of machine learning algorithms shows, there is still room for improvement. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work. In future studies, it is of utmost importance to create more open, high-quality datasets in order to allow the development of artificial intelligence and cognitive robotics in surgery.
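
The challenge's exact averaging scheme is not given in the abstract; as a hedged sketch, the per-class frame-wise F1 that such scores are built from (pure Python; names are illustrative):

    def f1_score(y_true, y_pred, positive):
        """Frame-wise F1 for one class (e.g., one surgical phase or one
        instrument); benchmark scores typically average this over classes
        and videos, with details that vary per challenge."""
        tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
        fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
        fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return 2 * precision * recall / (precision + recall) if precision + recall else 0.0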


Subjects
Artificial Intelligence, Benchmarking, Humans, Workflow, Algorithms, Machine Learning
7.
Int J Comput Assist Radiol Surg; 17(12): 2193-2202, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36129573

ABSTRACT

PURPOSE: Real-time surgical workflow analysis has been a key component of computer-assisted intervention systems for improving cognitive assistance. Most existing methods rely solely on conventional temporal models and encode features in a successive spatial-temporal arrangement, so the supportive benefits of intermediate features are partially lost in both the visual and temporal aspects. In this paper, we rethink feature encoding to attend to and preserve the critical information needed for accurate workflow recognition and anticipation. METHODS: We introduce the Transformer to surgical workflow analysis to reconsider the complementary effects of spatial and temporal representations. We propose a hybrid embedding aggregation Transformer, named Trans-SVNet, that effectively interacts with the designed spatial and temporal embeddings by employing the spatial embedding to query the temporal embedding sequence. The model is jointly optimized with loss objectives from both analysis tasks to leverage their high correlation. RESULTS: We extensively evaluate our method on three large surgical video datasets. Our method consistently outperforms the state of the art on the workflow recognition task across all three datasets. When jointly learned with anticipation, recognition results gain a large improvement. Our approach is also effective on anticipation, with promising performance achieved, and the model runs at a real-time inference speed of 0.0134 seconds per frame. CONCLUSION: Experimental results demonstrate the efficacy of our hybrid embedding integration, which rediscovers crucial cues from complementary spatial-temporal embeddings. The better performance obtained with multi-task learning indicates that the anticipation task brings additional knowledge to the recognition task. The effectiveness and efficiency of our method also show its potential for use in the operating room.
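
A hedged sketch of the queries-from-spatial, keys/values-from-temporal idea described above, in PyTorch (dimensions, layer choices, and names are illustrative assumptions, not the paper's exact architecture):

    import torch
    import torch.nn as nn

    class HybridAggregator(nn.Module):
        """Sketch of hybrid embedding aggregation: a per-frame spatial
        embedding queries a sequence of temporal embeddings via multi-head
        attention, then a linear head emits phase logits."""
        def __init__(self, dim=512, heads=8, num_phases=7):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.head = nn.Linear(dim, num_phases)

        def forward(self, spatial_emb, temporal_seq):
            # spatial_emb: (B, 1, D) query; temporal_seq: (B, T, D) keys/values
            fused, _ = self.attn(spatial_emb, temporal_seq, temporal_seq)
            return self.head(fused.squeeze(1))           # per-frame phase logits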


Subjects
Operating Rooms, Humans, Workflow
8.
Front Oncol; 12: 868186, 2022.
Article in English | MEDLINE | ID: mdl-35936706

ABSTRACT

Background: Lung cancer is the leading cause of cancer-related mortality, and accurate prediction of patient survival can aid treatment planning and potentially improve outcomes. In this study, we propose an automated system capable of lung segmentation and survival prediction using a graph convolutional neural network (GCN) with CT data from non-small cell lung cancer (NSCLC) patients. Methods: In this retrospective study, we segmented 10 parts of the lung in CT images and built individual lung graphs as inputs to train a GCN model to predict 5-year overall survival. A Cox proportional-hazards model, a set of machine learning (ML) models, a convolutional neural network based on the tumor (Tumor-CNN), and the current TNM staging system were used as comparisons. Findings: A total of 1,705 patients (main cohort) and 125 patients (external validation cohort) with lung cancer (stages I and II) were included. The GCN model was significantly predictive of 5-year overall survival with an AUC of 0.732 (p < 0.0001). The model stratified patients into low- and high-risk groups, which were associated with overall survival (HR = 5.41; 95% CI: 2.32-10.14; p < 0.0001). On the external validation dataset, our GCN model achieved an AUC of 0.678 (95% CI: 0.564-0.792; p < 0.0001). Interpretation: The proposed GCN model outperformed all ML, Tumor-CNN, and TNM staging models. This study demonstrates the value of utilizing graph-structured medical imaging data, resulting in a robust and effective model for the prediction of survival in early-stage lung cancer.
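
The paper's exact architecture is not given in the abstract; a hedged sketch of a Kipf-style graph convolution layer of the kind such a lung-graph GCN could stack (the normalization scheme and any pooling head are assumptions):

    import torch
    import torch.nn as nn

    class GraphConv(nn.Module):
        """One graph convolution layer: propagate node features over a
        normalized adjacency, then apply a shared linear map.
        A survival head could mean-pool the node features and apply a
        sigmoid for 5-year risk (an assumption, not the paper's head)."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.lin = nn.Linear(in_dim, out_dim)

        def forward(self, x, a_hat):
            # x: (N, in_dim) node features for one lung graph
            # a_hat: (N, N) normalized adjacency, e.g. D^-1/2 (A + I) D^-1/2
            return torch.relu(self.lin(a_hat @ x))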

9.
IEEE Trans Med Imaging; 40(7): 1911-1923, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33780335

ABSTRACT

Automatic surgical workflow recognition is a key component for developing context-aware computer-assisted systems in the operating theatre. Previous works either jointly modeled spatial features with short fixed-range temporal information, or separately learned visual and long-range temporal cues. In this paper, we propose a novel end-to-end temporal memory relation network (TMRNet) that relates long-range and multi-scale temporal patterns to augment the present features. We establish a long-range memory bank that serves as a memory cell storing rich supportive information. Through our designed temporal variation layer, the supportive cues are further enhanced by multi-scale temporal-only convolutions. To effectively incorporate the two types of cues without disturbing the joint learning of spatio-temporal features, we introduce a non-local bank operator that attentively relates the past to the present. In this way, TMRNet enables the current feature to view long-range temporal dependencies and to tolerate complex temporal extents. We have extensively validated our approach on two benchmark surgical video datasets, the M2CAI challenge dataset and the Cholec80 dataset. Experimental results demonstrate the outstanding performance of our method, which consistently exceeds state-of-the-art methods by a large margin (e.g., 67.0% vs. 78.9% Jaccard on the Cholec80 dataset).
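
A hedged sketch of the non-local, attention-style read of a memory bank that the abstract describes (not TMRNet's exact operator; shapes, scaling, and the residual connection are assumptions):

    import torch

    def read_memory(query, bank):
        """Attend from the present feature over stored past features and
        augment the present with the aggregated history.
        query: (B, D) current frame feature; bank: (B, T, D) memory bank."""
        scale = bank.shape[-1] ** 0.5
        attn = torch.softmax(query.unsqueeze(1) @ bank.transpose(1, 2) / scale, dim=-1)
        context = (attn @ bank).squeeze(1)               # (B, D) aggregated past cues
        return query + context                           # present feature, history-augmented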


Subjects
Computer Systems, Workflow
10.
Comput Methods Programs Biomed; 212: 106452, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34688174

ABSTRACT

BACKGROUND AND OBJECTIVE: Automatic surgical workflow recognition is an essential step in developing context-aware computer-assisted surgical systems. Video recordings of surgeries are becoming widely accessible, as the operative field of view is captured during laparoscopic surgeries, and head- and ceiling-mounted cameras are increasingly being used to record videos in open surgeries. This makes video a common choice for surgical workflow recognition. Additional modalities, such as kinematic data captured during robot-assisted surgeries, could also improve workflow recognition. This paper presents the design and results of the MIcro-Surgical Anastomose Workflow recognition on training sessions (MISAW) challenge, whose objective was to develop workflow recognition models based on kinematic data and/or videos. METHODS: The MISAW challenge provided a data set of 27 sequences of micro-surgical anastomosis on artificial blood vessels, composed of videos, kinematics, and workflow annotations. The annotations described the sequences at three levels of granularity: phase, step, and activity. Four tasks were proposed to the participants: three were related to the recognition of surgical workflow at one of the three granularity levels, while the last addressed the recognition of all granularity levels in the same model. We used the average application-dependent balanced accuracy (AD-Accuracy) as the evaluation metric, which takes unbalanced classes into account and is more clinically relevant than a frame-by-frame score. RESULTS: Six teams participated in at least one task. All teams employed deep learning models, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), or a combination of both. The best models achieved accuracy above 95%, 80%, 60%, and 75% for recognition of phases, steps, activities, and multi-granularity, respectively. RNN-based models outperformed CNN-based ones, and models dedicated to a single granularity outperformed the multi-granularity model, except for activity recognition. CONCLUSION: For the higher levels of granularity, the best models achieved recognition rates that may be sufficient for applications such as prediction of remaining surgical time. For activities, however, the recognition rate was still too low for clinical applications. The MISAW data set is publicly available at http://www.synapse.org/MISAW to encourage further research in surgical workflow recognition.
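
A hedged sketch of the CNN + RNN pattern the abstract mentions: per-frame features from a CNN backbone feed a GRU that emits per-frame workflow logits (the backbone is omitted; layer sizes and names are illustrative, not any team's submission):

    import torch
    import torch.nn as nn

    class RnnWorkflowHead(nn.Module):
        """Temporal head over precomputed per-frame features (e.g., from a
        CNN applied to video, or from kinematic encoders); a GRU models the
        sequence and a linear layer classifies every frame."""
        def __init__(self, feat_dim=512, hidden=256, num_classes=7):
            super().__init__()
            self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, num_classes)

        def forward(self, feats):                        # feats: (B, T, feat_dim)
            out, _ = self.rnn(feats)
            return self.head(out)                        # (B, T, num_classes) logits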


Subjects
Laparoscopy, Robotic Surgical Procedures, Surgical Anastomosis, Humans, Neural Networks (Computer), Workflow