Results 1 - 20 of 33
1.
Med Biol Eng Comput ; 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38848031

ABSTRACT

Although artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency must improve before this success can be transferred into clinical practice. The reliability of machine learning decisions must be explained and interpreted, especially when they support medical diagnosis. To this end, the black-box nature of deep learning techniques must be opened up to clarify their promising results. Hence, we investigate the impact of the ResNet-50 deep convolutional design on Barrett's esophagus and adenocarcinoma classification. Aiming at a two-step learning technique, the output of each convolutional layer composing the ResNet-50 architecture was trained and classified to identify the layers that contribute most to the architecture. We showed that local information and high-dimensional features are essential to improve classification for our task. Moreover, we observed a significant improvement when the most discriminative layers were given more weight in the training and classification of ResNet-50 for Barrett's esophagus and adenocarcinoma, demonstrating that both human knowledge and computational processing may influence the correct learning of such a problem.

2.
Endoscopy ; 56(9): 641-649, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38547927

ABSTRACT

BACKGROUND: This study evaluated the effect of an artificial intelligence (AI)-based clinical decision support system on the performance and diagnostic confidence of endoscopists in their assessment of Barrett's esophagus (BE). METHODS: 96 standardized endoscopy videos were assessed by 22 endoscopists with varying degrees of BE experience from 12 centers. Assessment was randomized into two video sets: group A (review first without AI and second with AI) and group B (review first with AI and second without AI). Endoscopists were required to evaluate each video for the presence of Barrett's esophagus-related neoplasia (BERN) and then decide on a spot for a targeted biopsy. After the second assessment, they were allowed to change their clinical decision and confidence level. RESULTS: AI had a stand-alone sensitivity, specificity, and accuracy of 92.2%, 68.9%, and 81.3%, respectively. Without AI, BE experts had an overall sensitivity, specificity, and accuracy of 83.3%, 58.1%, and 71.5%, respectively. With AI, BE nonexperts showed a significant improvement in sensitivity and specificity when videos were assessed a second time with AI (sensitivity 69.8% [95%CI 65.2%-74.2%] to 78.0% [95%CI 74.0%-82.0%]; specificity 67.3% [95%CI 62.5%-72.2%] to 72.7% [95%CI 68.2%-77.3%]). In addition, the diagnostic confidence of BE nonexperts improved significantly with AI. CONCLUSION: BE nonexperts benefitted significantly from additional AI. BE experts and nonexperts remained significantly below the stand-alone performance of AI, suggesting that there may be other factors influencing endoscopists' decisions to follow or discard AI advice.
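The stand-alone and reader sensitivity, specificity, and accuracy figures above derive directly from confusion-matrix counts; a minimal sketch of that computation (illustrative only, not the study's code):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts.

    tp/fn: diseased cases called correctly/incorrectly;
    tn/fp: non-diseased cases called correctly/incorrectly."""
    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall fraction correct
    return sensitivity, specificity, accuracy
```

For example, calling 9 of 10 neoplastic and 9 of 10 non-neoplastic videos correctly would yield 90% on all three measures.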


Subjects
Artificial Intelligence, Barrett Esophagus, Decision Support Systems, Clinical, Esophageal Neoplasms, Esophagoscopy, Humans, Barrett Esophagus/diagnosis, Biopsy, Clinical Competence, Esophageal Neoplasms/diagnosis, Esophagoscopy/methods, Sensitivity and Specificity, Video Recording
3.
JPRAS Open ; 39: 330-343, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38390355

ABSTRACT

Background: The utilization of three-dimensional (3D) surface imaging for facial anthropometry is a significant asset for patients undergoing maxillofacial surgery. Notably, there have been recent advancements in smartphone technology that enable 3D surface imaging. In this study, anthropometric assessments of the face were performed using a smartphone and a sophisticated 3D surface imaging system. Methods: 30 healthy volunteers (15 females and 15 males) were included in the study. An iPhone 14 Pro (Apple Inc., USA) using the application 3D Scanner App (Laan Consulting Corp., USA) and the Vectra M5 (Canfield Scientific, USA) were employed to create 3D surface models. For each participant, 19 anthropometric measurements were conducted on the 3D surface models. Subsequently, the anthropometric measurements generated by the two approaches were compared. The statistical techniques employed included the paired t-test, paired Wilcoxon signed-rank test, Bland-Altman analysis, and calculation of the intraclass correlation coefficient (ICC). Results: All measurements showed excellent agreement between smartphone-based and Vectra M5-based measurements (ICC between 0.85 and 0.97). Statistical analysis revealed no statistically significant differences in the central tendencies for 17 of the 19 linear measurements. Despite the excellent agreement found, Bland-Altman analysis revealed that the 95% limits of agreement between the two methods exceeded ±3 mm for the majority of measurements. Conclusion: Digital facial anthropometry using smartphones can serve as a valuable supplementary tool for surgeons, enhancing their communication with patients. However, the proposed data suggest that digital facial anthropometry using smartphones may not yet be suitable for certain diagnostic purposes that require high accuracy.
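The agreement analysis above rests on the standard Bland-Altman formulas: the bias is the mean of the paired differences, and the 95% limits of agreement are the bias ± 1.96 standard deviations. A minimal sketch (an illustrative re-implementation, not the study's analysis code):

```python
import statistics

def bland_altman_limits(method_a, method_b):
    """Bias (mean difference) and 95% limits of agreement (bias ± 1.96·SD)
    between paired measurements from two methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Limits wider than a clinically acceptable margin (here, ±3 mm) indicate the two methods are not interchangeable for that purpose, even when the ICC is high.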

4.
Comput Biol Med ; 169: 107929, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38184862

ABSTRACT

In the field of computer- and robot-assisted minimally invasive surgery, enormous progress has been made in recent years based on the recognition of surgical instruments in endoscopic images and videos. In particular, the determination of the position and type of instruments is of great interest. Current work involves both spatial and temporal information, with the idea that predicting the movement of surgical tools over time may improve the quality of the final segmentations. The provision of publicly available datasets has recently encouraged the development of new methods, mainly based on deep learning. In this review, we identify and characterize datasets used for method development and evaluation and quantify their frequency of use in the literature. We further present an overview of the current state of research regarding the segmentation and tracking of minimally invasive surgical instruments in endoscopic images and videos. The paper focuses on methods that work purely visually, without markers of any kind attached to the instruments, considering both single-frame semantic and instance segmentation approaches, as well as those that incorporate temporal information. The publications analyzed were identified through the platforms Google Scholar, Web of Science, and PubMed. The search terms used were "instrument segmentation", "instrument tracking", "surgical tool segmentation", and "surgical tool tracking", resulting in a total of 741 articles published between 01/2015 and 07/2023, of which 123 were included using systematic selection criteria. A discussion of the reviewed literature is provided, highlighting existing shortcomings and emphasizing the available potential for future developments.


Subjects
Robotic Surgical Procedures, Surgery, Computer-Assisted, Endoscopy, Minimally Invasive Surgical Procedures, Robotic Surgical Procedures/methods, Surgery, Computer-Assisted/methods, Surgical Instruments, Image Processing, Computer-Assisted/methods
5.
Plast Reconstr Surg ; 152(4): 670e-674e, 2023 10 01.
Article in English | MEDLINE | ID: mdl-36952590

ABSTRACT

SUMMARY: Digital-nerve lesions result in a loss of tactile sensation reflected by an anesthetic area (AA) at the radial or ulnar aspect of the respective digit. Available tools to monitor the recovery of tactile sense have been criticized for their lack of validity. Precise quantification of AA dynamics by three-dimensional (3D) imaging could serve as an accurate surrogate to monitor recovery after digital-nerve repair. For validation, AAs were marked on digits of healthy volunteers to simulate the AA of an impaired cutaneous innervation. The 3D models were composed from raw images that had been acquired with a 3D camera to precisely quantify relative AA for each digit (3D models, n = 80). Operator properties varied with regard to individual experience in 3D imaging and image processing. In addition, the concept was applied in a clinical case study. Results showed that images taken by experienced photographers were rated as better quality ( P < 0.001) and needed less processing time ( P = 0.020). Quantification of the relative AA was not altered significantly, regardless of experience level of the photographer ( P = 0.425) or image assembler ( P = 0.749). The proposed concept allows precise and reliable surface quantification of digits and can be performed consistently without relevant distortion by lack of examiner experience. Routine 3D imaging of the AA has the great potential to provide visual evidence of various returning states of sensation and to convert sensory nerve recovery into a metric variable with high responsiveness to temporal progress.


Subjects
Sensation, Touch Perception, Humans, Touch, Image Processing, Computer-Assisted, Skin, Imaging, Three-Dimensional/methods
6.
Gastrointest Endosc ; 97(5): 911-916, 2023 05.
Article in English | MEDLINE | ID: mdl-36646146

ABSTRACT

BACKGROUND AND AIMS: Celiac disease with its endoscopic manifestation of villous atrophy (VA) is underdiagnosed worldwide. The application of artificial intelligence (AI) for the macroscopic detection of VA at routine EGD may improve diagnostic performance. METHODS: A dataset of 858 endoscopic images of 182 patients with VA and 846 images from 323 patients with normal duodenal mucosa was collected and used to train a ResNet18 deep learning model to detect VA. An external dataset was used to test the algorithm, in addition to 6 fellows and 4 board-certified gastroenterologists. Fellows could consult the AI algorithm's result during the test. From their consultation distribution, a stratification of test images into "easy" and "difficult" was performed and used for classified performance measurement. RESULTS: External validation of the AI algorithm yielded values of 90%, 76%, and 84% for sensitivity, specificity, and accuracy, respectively. Fellows scored corresponding values of 63%, 72%, and 67% and experts scored 72%, 69%, and 71%, respectively. AI consultation significantly improved all trainee performance statistics. Although fellows and experts showed significantly lower performance for difficult images, the performance of the AI algorithm was stable. CONCLUSIONS: In this study, an AI algorithm outperformed endoscopy fellows and experts in the detection of VA on endoscopic still images. AI decision support significantly improved the performance of nonexpert endoscopists. The stable performance on difficult images suggests a further positive add-on effect in challenging cases.


Subjects
Artificial Intelligence, Deep Learning, Humans, Endoscopy, Gastrointestinal, Algorithms, Atrophy
7.
J Clin Med ; 11(17)2022 Aug 25.
Article in English | MEDLINE | ID: mdl-36078928

ABSTRACT

BACKGROUND: Reliable, time- and cost-effective, and clinician-friendly diagnostic tools are cornerstones in facial palsy (FP) patient management. Different automated FP grading systems have been developed but revealed persisting downsides such as insufficient accuracy and cost-intensive hardware. We aimed to overcome these barriers and programmed an automated grading system for FP patients utilizing the House and Brackmann scale (HBS). METHODS: Image datasets of 86 patients seen at the Department of Plastic, Hand, and Reconstructive Surgery at the University Hospital Regensburg, Germany, between June 2017 and May 2021, were used to train the neural network and evaluate its accuracy. Nine facial poses per patient were analyzed by the algorithm. RESULTS: The algorithm showed an accuracy of 100%. Oversampling did not result in altered outcomes, while the direct form displayed superior accuracy levels when compared to the modular classification form (n = 86; 100% vs. 99%). The Early Fusion technique was linked to improved accuracy outcomes in comparison to the Late Fusion and sequential method (n = 86; 100% vs. 96% vs. 97%). CONCLUSIONS: Our automated FP grading system combines high-level accuracy with cost- and time-effectiveness. Our algorithm may accelerate the grading process in FP patients and facilitate the FP surgeon's workflow.

8.
Gut ; 71(12): 2388-2390, 2022 12.
Article in English | MEDLINE | ID: mdl-36109151

ABSTRACT

In this study, we aimed to develop an artificial intelligence clinical decision support solution to mitigate operator-dependent limitations during complex endoscopic procedures such as endoscopic submucosal dissection and peroral endoscopic myotomy, for example, bleeding and perforation. A DeepLabv3-based model was trained to delineate vessels, tissue structures and instruments on endoscopic still images from such procedures. The mean cross-validated Intersection over Union and Dice Score were 63% and 76%, respectively. Applied to standardised video clips from third-space endoscopic procedures, the algorithm showed a mean vessel detection rate of 85% with a false-positive rate of 0.75/min. These performance statistics suggest a potential clinical benefit for procedure safety, time and also training.
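The Intersection over Union and Dice statistics quoted above compare a predicted segmentation mask against a reference mask; a minimal sketch for binary masks (illustrative, not the study's pipeline):

```python
def iou_and_dice(pred, target):
    """IoU and Dice score for two flat binary masks (sequences of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, target))  # overlapping foreground
    union = sum(p | t for p, t in zip(pred, target))  # combined foreground
    total = sum(pred) + sum(target)
    iou = inter / union if union else 1.0   # two empty masks agree perfectly
    dice = 2 * inter / total if total else 1.0
    return iou, dice
```

For any single mask pair, Dice = 2·IoU/(1 + IoU), so Dice is always at least as large as IoU, consistent with the 76% vs. 63% reported above.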


Subjects
Deep Learning, Endoscopic Mucosal Resection, Humans, Artificial Intelligence, Endoscopy, Gastrointestinal
9.
Sci Rep ; 12(1): 11115, 2022 07 01.
Article in English | MEDLINE | ID: mdl-35778456

ABSTRACT

The endoscopic features associated with eosinophilic esophagitis (EoE) may be missed during routine endoscopy. We aimed to develop and evaluate an Artificial Intelligence (AI) algorithm for detecting and quantifying the endoscopic features of EoE in white light images, supplemented by the EoE Endoscopic Reference Score (EREFS). An AI algorithm (AI-EoE) was constructed and trained to differentiate between EoE and normal esophagus using endoscopic white light images extracted from the database of the University Hospital Augsburg. In addition to binary classification, a second algorithm was trained with specific auxiliary branches for each EREFS feature (AI-EoE-EREFS). The AI algorithms were evaluated on an external data set from the University of North Carolina, Chapel Hill (UNC), and compared with the performance of human endoscopists with varying levels of experience. The overall sensitivity, specificity, and accuracy of AI-EoE were 0.93 for all measures, while the AUC was 0.986. With additional auxiliary branches for the EREFS categories, the AI algorithm (AI-EoE-EREFS) performance improved to 0.96, 0.94, 0.95, and 0.992 for sensitivity, specificity, accuracy, and AUC, respectively. AI-EoE and AI-EoE-EREFS performed significantly better than endoscopy beginners and senior fellows on the same set of images. An AI algorithm can be trained to detect and quantify endoscopic features of EoE with excellent performance scores. The addition of the EREFS criteria improved the performance of the AI algorithm, which performed significantly better than endoscopists with a lower or medium experience level.
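The AUC values reported above can be read as a rank statistic: the probability that a randomly chosen diseased image receives a higher score than a randomly chosen normal one. A minimal sketch via the Mann-Whitney formulation (illustrative, not the study's evaluation code):

```python
def auc(pos_scores, neg_scores):
    """AUC as the probability that a positive case outscores a negative one
    (Mann-Whitney U statistic); ties count as one half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 0.986 therefore means the model ranks an EoE image above a normal image in about 98.6% of random pairings.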


Subjects
Eosinophilic Esophagitis, Artificial Intelligence, Eosinophilic Esophagitis/diagnosis, Esophagoscopy/methods, Humans, Severity of Illness Index
12.
Article in English | MEDLINE | ID: mdl-34172253

ABSTRACT

The evaluation and assessment of Barrett's esophagus is challenging for both expert and nonexpert endoscopists. However, the early diagnosis of cancer in Barrett's esophagus is crucial for its prognosis, and could save costs. Pre-clinical and clinical studies on the application of Artificial Intelligence (AI) in Barrett's esophagus have shown promising results. In this review, we focus on the current challenges and future perspectives of implementing AI systems in the management of patients with Barrett's esophagus.


Subjects
Artificial Intelligence/standards, Barrett Esophagus/diagnosis, Deep Learning/standards, Endoscopy/methods, Humans, Prognosis
13.
Comput Biol Med ; 135: 104578, 2021 08.
Article in English | MEDLINE | ID: mdl-34171639

ABSTRACT

Although artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency must be established in such evaluations. The reliability of machine learning predictions must be explained and interpreted, especially when diagnosis support is addressed. To this end, the black-box nature of deep learning techniques must be opened up to transfer their promising results into clinical practice. Hence, we investigate the use of explainable artificial intelligence techniques to quantitatively highlight discriminative regions during the classification of early-cancerous tissues in patients diagnosed with Barrett's esophagus. Four Convolutional Neural Network models (AlexNet, SqueezeNet, ResNet50, and VGG16) were analyzed using five interpretation techniques (saliency, guided backpropagation, integrated gradients, input × gradients, and DeepLIFT) to compare their agreement with experts' previous annotations of cancerous tissue. We show that saliency attributes match the manual expert delineations best. Moreover, there is a moderate to high correlation between a model's sensitivity and the human-computer agreement: the higher the model's sensitivity, the stronger the agreement between human and computational segmentation. We observed a relevant relation between computational learning and experts' insights, demonstrating how human knowledge may influence correct computational learning.


Subjects
Barrett Esophagus, Artificial Intelligence, Barrett Esophagus/diagnostic imaging, Humans, Machine Learning, Neural Networks, Computer, Reproducibility of Results
14.
Arch Gynecol Obstet ; 303(3): 721-728, 2021 03.
Article in English | MEDLINE | ID: mdl-33184690

ABSTRACT

PURPOSE: In this trial, we used a previously developed prototype software to assess aesthetic results after reconstructive surgery for congenital breast asymmetry using automated anthropometry. To prove the consensus between the manual and automatic digital measurements, we evaluated the software by comparing the manual and automatic measurements of 46 breasts. METHODS: Twenty-three patients who underwent reconstructive surgery for congenital breast asymmetry at our institution were examined and underwent 3D surface imaging. Per patient, 14 manual and 14 computer-based anthropometric measurements were obtained according to a standardized protocol. Manual and automatic measurements, as well as the previously proposed Symmetry Index (SI), were compared. RESULTS: The Wilcoxon signed-rank test revealed no significant differences in six of the seven measurements between the automatic and manual assessments. The SI showed robust agreement between the automatic and manual methods. CONCLUSION: The present trial validates our method for digital anthropometry. Despite the discrepancy in one measurement, all remaining measurements, including the SI, showed high agreement between the manual and automatic methods. The proposed data bring us one step closer to the long-term goal of establishing robust instruments to evaluate the results of breast surgery. LEVEL OF EVIDENCE: IV.


Subjects
Breast/anatomy & histology, Imaging, Three-Dimensional/methods, Magnetic Resonance Imaging/methods, Adult, Anthropometry/methods, Esthetics, Female, Humans, Mastectomy, Reproducibility of Results, Software
15.
Endoscopy ; 53(9): 878-883, 2021 09.
Article in English | MEDLINE | ID: mdl-33197942

ABSTRACT

BACKGROUND: The accurate differentiation between T1a and T1b Barrett's-related cancer has both therapeutic and prognostic implications but is challenging even for experienced physicians. We trained an artificial intelligence (AI) system on the basis of deep artificial neural networks (deep learning) to differentiate between T1a and T1b Barrett's cancer on white-light images. METHODS: Endoscopic images from three tertiary care centers in Germany were collected retrospectively. A deep learning system was trained and tested using the principles of cross validation. A total of 230 white-light endoscopic images (108 T1a and 122 T1b) were evaluated using the AI system. For comparison, the images were also classified by experts specialized in endoscopic diagnosis and treatment of Barrett's cancer. RESULTS: The sensitivity, specificity, F1 score, and accuracy of the AI system in the differentiation between T1a and T1b cancer lesions was 0.77, 0.64, 0.74, and 0.71, respectively. There was no statistically significant difference between the performance of the AI system and that of experts, who showed sensitivity, specificity, F1, and accuracy of 0.63, 0.78, 0.67, and 0.70, respectively. CONCLUSION: This pilot study demonstrates the first multicenter application of an AI-based system in the prediction of submucosal invasion in endoscopic images of Barrett's cancer. AI scored equally to international experts in the field, but more work is necessary to improve the system and apply it to video sequences and real-life settings. Nevertheless, the correct prediction of submucosal invasion in Barrett's cancer remains challenging for both experts and AI.


Subjects
Adenocarcinoma, Barrett Esophagus, Esophageal Neoplasms, Adenocarcinoma/diagnostic imaging, Artificial Intelligence, Barrett Esophagus/diagnostic imaging, Esophageal Neoplasms/diagnostic imaging, Esophagoscopy, Humans, Pilot Projects, Retrospective Studies
16.
Comput Biol Med ; 126: 104029, 2020 11.
Article in English | MEDLINE | ID: mdl-33059236

ABSTRACT

Barrett's esophagus has seen a swift rise in the number of cases in recent years. Although traditional diagnostic methods play a vital role in early-stage treatment, they are generally time- and resource-consuming. In this context, computer-aided approaches for automatic diagnosis have emerged in the literature, since early detection is intrinsically related to remission probabilities. However, they still suffer from low recognition rates caused by the lack of data available for machine learning purposes. This work introduces Generative Adversarial Networks to generate high-quality endoscopic images, thereby identifying Barrett's esophagus and adenocarcinoma more precisely. Convolutional Neural Networks are then used for feature extraction and classification. The proposed approach is validated on two datasets of endoscopic images, with experiments conducted on both full and patch-split images. Applying Deep Convolutional Generative Adversarial Networks for the data augmentation step and LeNet-5 and AlexNet for the classification step allowed us to validate the methodology over an extensive set of datasets (original and augmented), reaching 90% accuracy for the patch-based approach and 85% for the image-based approach. Both results were obtained on augmented datasets and are statistically different from those obtained on the original datasets of the same kind. Moreover, the impact of data augmentation was evaluated in the context of image description and classification, and the results obtained using synthetic images outperformed those on the original datasets, as well as other recent approaches from the literature. These results underline the importance of proper data for accurate, computer-assisted detection of Barrett's esophagus and adenocarcinoma.


Subjects
Adenocarcinoma, Barrett Esophagus, Esophageal Neoplasms, Adenocarcinoma/diagnostic imaging, Barrett Esophagus/diagnostic imaging, Endoscopy, Esophageal Neoplasms/diagnostic imaging, Humans, Machine Learning, Neural Networks, Computer
17.
Gut ; 2020 Oct 30.
Article in English | MEDLINE | ID: mdl-33127833

ABSTRACT

OBJECTIVE: Artificial intelligence (AI) may reduce the underdiagnosis of upper GI (UGI) neoplastic and preneoplastic conditions, which are often overlooked due to their subtle appearance and low disease prevalence. Only disease-specific AI performances have been reported, generating uncertainty on its clinical value. DESIGN: We searched PubMed, Embase and Scopus until July 2020, for studies on the diagnostic performance of AI in detection and characterisation of UGI lesions. Primary outcomes were pooled diagnostic accuracy, sensitivity and specificity of AI. Secondary outcomes were pooled positive (PPV) and negative (NPV) predictive values. We calculated pooled proportion rates (%), designed summary receiver operating characteristic curves with respective area under the curves (AUCs) and performed metaregression and sensitivity analysis. RESULTS: Overall, 19 studies on detection of oesophageal squamous cell neoplasia (ESCN) or Barrett's esophagus-related neoplasia (BERN) or gastric adenocarcinoma (GCA) were included with 218, 445, 453 patients and 7976, 2340, 13 562 images, respectively. AI-sensitivity/specificity/PPV/NPV/positive likelihood ratio/negative likelihood ratio for UGI neoplasia detection were 90% (CI 85% to 94%)/89% (CI 85% to 92%)/87% (CI 83% to 91%)/91% (CI 87% to 94%)/8.2 (CI 5.7 to 11.7)/0.111 (CI 0.071 to 0.175), respectively, with an overall AUC of 0.95 (CI 0.93 to 0.97). No difference in AI performance across ESCN, BERN and GCA was found, AUC being 0.94 (CI 0.52 to 0.99), 0.96 (CI 0.95 to 0.98), 0.93 (CI 0.83 to 0.99), respectively. Overall, study quality was low, with high risk of selection bias. No significant publication bias was found. CONCLUSION: We found a high overall AI accuracy for the diagnosis of any neoplastic lesion of the UGI tract that was independent of the underlying condition. This may be expected to substantially reduce the miss rate of precancerous lesions and early cancer when implemented in clinical practice.
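The pooled likelihood ratios above follow directly from sensitivity and specificity via their standard definitions; a quick check (illustrative, not the meta-analysis code):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from sensitivity and specificity."""
    lr_pos = sensitivity / (1 - specificity)  # how much a positive result raises the odds of disease
    lr_neg = (1 - sensitivity) / specificity  # how much a negative result lowers them
    return lr_pos, lr_neg
```

Plugging in the pooled 90% sensitivity and 89% specificity reproduces the reported LR+ of about 8.2 and LR− of about 0.11.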

18.
Aesthetic Plast Surg ; 44(6): 1980-1987, 2020 12.
Article in English | MEDLINE | ID: mdl-32405724

ABSTRACT

BACKGROUND: Breast reconstruction is an important coping tool for patients undergoing a mastectomy. There are numerous surgical techniques in breast reconstruction surgery (BRS). Regardless of the technique used, creating a symmetric outcome is crucial for patients and plastic surgeons. Three-dimensional surface imaging enables surgeons and patients to assess the outcome's symmetry in BRS. To discriminate between autologous and alloplastic techniques, we analyzed both techniques using objective optical computerized symmetry analysis. Software was developed that enables clinicians to assess optical breast symmetry using three-dimensional surface imaging. METHODS: Twenty-seven patients who had undergone autologous (n = 12) or alloplastic (n = 15) BRS received three-dimensional surface imaging. Anthropomorphic data were collected digitally using semiautomatic measurements and automatic measurements. Automatic measurements were taken using the newly developed software. To quantify symmetry, a Symmetry Index is proposed. RESULTS: Statistical analysis revealed that there is no difference in the outcome symmetry between the two groups (t test for independent samples; p = 0.48, two-tailed). CONCLUSION: This study's findings provide a foundation for qualitative symmetry assessment in BRS using automatized digital anthropometry. In the present trial, no difference in the outcomes' optical symmetry was detected between autologous and alloplastic approaches. Level of evidence Level IV. LEVEL OF EVIDENCE IV: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .


Subjects
Breast Neoplasms, Mammaplasty, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/surgery, Cohort Studies, Esthetics, Humans, Mastectomy, Retrospective Studies, Risk Assessment, Treatment Outcome
19.
Quant Imaging Med Surg ; 10(2): 340-355, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32190561

ABSTRACT

BACKGROUND: For surgical fixation of bone fractures of the human hand, so-called Kirschner-wires (K-wires) are drilled through bone fragments. Due to the minimally invasive drilling procedures without a view of risk structures like vessels and nerves, a thorough training of young surgeons is necessary. For the development of a virtual reality (VR) based training system, a three-dimensional (3D) printed phantom hand is required. To ensure an intuitive operation, this phantom hand has to be realistic in both, its position relative to the driller as well as in its haptic features. The softest 3D printing material available on the market, however, is too hard to imitate human soft tissue. Therefore, a support-material (SUP) filled metamaterial is used to soften the raw material. Realistic haptic features are important to palpate protrusions of the bone to determine the drilling starting point and angle. An optical real-time tracking is used to transfer position and rotation to the training system. METHODS: A metamaterial already developed in previous work is further improved by use of a new unit cell. Thus, the amount of SUP within the volume can be increased and the tissue is softened further. In addition, the human anatomy is transferred to the entire hand model. A subcutaneous fat layer and penetration of air through pores into the volume simulate shiftability of skin layers. For optical tracking, a rotationally symmetrical marker attached to the phantom hand with corresponding reference marker is developed. In order to ensure trouble-free position transmission, various types of marker point applications are tested. RESULTS: Several cuboid and forearm sample prints lead to a final 30 centimeter long hand model. The whole haptic phantom could be printed faultless within about 17 hours. The metamaterial consisting of the new unit cell results in an increased SUP share of 4.32%. 
Validated by an expert surgeon study, this, in combination with a displacement of the uppermost skin layer, allows good palpability of the bones. Tracking of the hand marker in the dodecahedron design works trouble-free in conjunction with a reference marker attached to the worktop of the training system. CONCLUSIONS: In this work, an optically tracked and haptically correct phantom hand was developed using dual-material 3D printing, which can be easily integrated into a surgical training system.
