Results 1 - 20 of 53
1.
Comput Methods Programs Biomed; 252: 108215, 2024 May 11.
Article in English | MEDLINE | ID: mdl-38781811

ABSTRACT

BACKGROUND AND OBJECTIVE: Cell segmentation in bright-field histological slides is a crucial topic in medical image analysis. Access to accurate segmentation allows researchers to examine the relationship between cellular morphology and clinical observations. Unfortunately, most segmentation methods known today are limited to nuclei and cannot segment the cytoplasm. METHODS: We present a new network architecture, Cyto R-CNN, that can accurately segment whole cells (both nucleus and cytoplasm) in bright-field images. We also present a new dataset, CytoNuke, consisting of several thousand manual annotations of head and neck squamous cell carcinoma cells. Using this dataset, we compared the performance of Cyto R-CNN to other popular cell segmentation algorithms, including QuPath's built-in algorithm, StarDist, Cellpose and a multi-scale Attention Deeplabv3+. To evaluate segmentation performance, we calculated AP50 and AP75 and measured 17 morphological and staining-related features for all detected cells. We compared these measurements to the gold standard of manual segmentation using the Kolmogorov-Smirnov test. RESULTS: Cyto R-CNN achieved an AP50 of 58.65% and an AP75 of 11.56% in whole-cell segmentation, outperforming all other methods (QuPath 19.46/0.91%; StarDist 45.33/2.32%; Cellpose 31.85/5.61%; Deeplabv3+ 3.97/1.01%). Cell features derived from Cyto R-CNN showed the best agreement with the gold standard (D̄ = 0.15), outperforming QuPath (D̄ = 0.22), StarDist (D̄ = 0.25), Cellpose (D̄ = 0.23) and Deeplabv3+ (D̄ = 0.33). CONCLUSION: Our newly proposed Cyto R-CNN architecture outperforms current algorithms in whole-cell segmentation while providing more reliable cell measurements than any other model. This could improve digital pathology workflows, potentially leading to improved diagnosis. Moreover, our published dataset can be used to develop further models in the future.
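The feature-agreement analysis above amounts to comparing, per feature, the distribution measured on automatically detected cells against the distribution from manual segmentation with a two-sample Kolmogorov-Smirnov test, then averaging the resulting D statistics. A minimal sketch of that evaluation, with invented feature names and random placeholder data (not the paper's code):

```python
import numpy as np
from scipy.stats import ks_2samp

def mean_ks_distance(features_pred: dict, features_gold: dict) -> float:
    """Average two-sample KS statistic D over all measured cell features."""
    d_values = []
    for name, gold_values in features_gold.items():
        result = ks_2samp(features_pred[name], gold_values)
        d_values.append(result.statistic)
    return float(np.mean(d_values))

# Toy example with two hypothetical features (e.g. cell area, perimeter)
rng = np.random.default_rng(0)
gold = {"area": rng.normal(300, 40, 500), "perimeter": rng.normal(90, 10, 500)}
pred = {"area": rng.normal(310, 45, 480), "perimeter": rng.normal(88, 12, 480)}
print(f"mean D = {mean_ks_distance(pred, gold):.3f}")
```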

2.
J Med Syst; 48(1): 55, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38780820

ABSTRACT

Designing implants for large and complex cranial defects is a challenging task, even for professional designers. Current efforts to automate the design process have focused mainly on convolutional neural networks (CNNs), which have produced state-of-the-art results on reconstructing synthetic defects. However, existing CNN-based methods have been difficult to translate to clinical practice in cranioplasty, as their performance on large and complex cranial defects remains unsatisfactory. In this paper, we present a statistical shape model (SSM) built directly on the segmentation masks of skulls represented as binary voxel occupancy grids and evaluate it on several cranial implant design datasets. Results show that, while CNN-based approaches outperform the SSM on synthetic defects, they are inferior to the SSM when it comes to large, complex and real-world defects. Experienced neurosurgeons evaluated the implants generated by the SSM as feasible for clinical use after minor manual corrections. Datasets and the SSM model are publicly available at https://github.com/Jianningli/ssm.
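In its simplest form, a statistical shape model over binary voxel occupancy grids is a PCA of the flattened masks; completing a defective skull then means projecting it into the learned shape space and reconstructing. The sketch below only illustrates this idea with random placeholder data; it is not the authors' implementation, which is available at the linked repository:

```python
import numpy as np
from sklearn.decomposition import PCA

# Training set: N binary skull voxel grids, flattened to vectors
N, shape = 40, (32, 32, 32)
rng = np.random.default_rng(1)
skulls = (rng.random((N, int(np.prod(shape)))) > 0.7).astype(np.float32)

ssm = PCA(n_components=10)  # the shape model: mean shape + principal modes
ssm.fit(skulls)

# "Repair" a defective skull: project onto the shape space and reconstruct
defective = skulls[0].copy()
defective[:2000] = 0.0  # simulate a defect region
completed = ssm.inverse_transform(ssm.transform(defective[None]))[0] > 0.5
implant = completed & ~(defective > 0.5)  # implant = reconstruction minus remaining skull
print("implant voxels:", int(implant.sum()))
```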


Subjects
Neural Networks, Computer; Skull; Humans; Skull/surgery; Skull/anatomy & histology; Skull/diagnostic imaging; Models, Statistical; Image Processing, Computer-Assisted/methods; Plastic Surgery Procedures/methods; Prostheses and Implants
3.
Med Image Anal; 95: 103199, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38759258

ABSTRACT

Accurate diagnosis of pathological subtypes of lung cancer is of significant importance for follow-up treatment and prognosis management. In this paper, we propose a self-generating hybrid feature network (SGHF-Net) for accurately classifying lung cancer subtypes on computed tomography (CT) images. Inspired by studies stating that cross-scale associations exist in the image patterns between the same case's CT images and its pathological images, we innovatively developed a pathological feature synthetic module (PFSM), which quantitatively maps cross-modality associations through deep neural networks, to derive the "gold standard" information contained in the corresponding pathological images from CT images. Additionally, we designed a radiological feature extraction module (RFEM) to directly acquire CT image information and integrated it with the pathological priors under an effective feature fusion framework, enabling the entire classification model to generate more indicative and specific pathologically related features and eventually output more accurate predictions. The superiority of the proposed model lies in its ability to self-generate hybrid features that contain multi-modality image information based on a single-modality input. To evaluate the effectiveness, adaptability, and generalization ability of our model, we performed extensive experiments on a large-scale multi-center dataset (i.e., 829 cases from three hospitals) to compare our model with a series of state-of-the-art (SOTA) classification models. The experimental results demonstrated the superiority of our model for lung cancer subtype classification, with significant improvements in accuracy (ACC), area under the curve (AUC), positive predictive value (PPV) and F1-score.
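The hybrid feature idea can be pictured as two parallel branches whose outputs are fused before classification: one encodes CT features directly (the RFEM role), while the other is trained to synthesize pathology-like features from the same CT input (the PFSM role, which during training would be supervised with features from paired pathology images). The following PyTorch sketch is a strongly simplified assumption about that structure, not the published architecture:

```python
import torch
import torch.nn as nn

class HybridFeatureClassifier(nn.Module):
    """Sketch: fuse CT-derived features with synthesized pathology-like features."""
    def __init__(self, in_dim: int = 512, n_classes: int = 3):
        super().__init__()
        self.rfem = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())  # radiological branch
        self.pfsm = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())  # pathology-feature synthesis branch
        self.head = nn.Linear(512, n_classes)                         # classifier on fused features

    def forward(self, ct_feats: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.rfem(ct_feats), self.pfsm(ct_feats)], dim=1)
        return self.head(fused)

logits = HybridFeatureClassifier()(torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 3])
```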


Subjects
Lung Neoplasms; Tomography, X-Ray Computed; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/classification; Tomography, X-Ray Computed/methods; Neural Networks, Computer; Radiographic Image Interpretation, Computer-Assisted/methods; Algorithms
4.
BMC Med Educ; 24(1): 250, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38500112

ABSTRACT

OBJECTIVE: The gold standard of oral cancer (OC) treatment is diagnostic confirmation by biopsy followed by surgical treatment. However, studies have shown that dentists have difficulty performing biopsies, dental students lack knowledge about OC, and surgeons do not always maintain a safe margin during tumor resection. To address this, biopsies and resections could be trained under realistic conditions outside the patient. The aim of this study was to develop and validate a porcine pseudotumor model of the tongue. METHODS: An interdisciplinary team reflecting the various specialties involved in the treatment of head and neck cancer developed a porcine pseudotumor model of the tongue on which biopsies and resections can be practiced. The refined model was validated in a final trial of 10 participants, each of whom resected four pseudotumors on a tongue, resulting in a total of 40 resected pseudotumors. The participants (7 residents and 3 specialists) had experience in OC treatment ranging from 0.5 to 27 years. Resection margins (minimum and maximum) were assessed macroscopically and compared, alongside self-assessed margins and resection time, between residents and specialists. Furthermore, the model was evaluated using Likert-type questions on haptic and radiological fidelity, its usefulness as a training model, and its imageability using CT and ultrasound. RESULTS: The model haptically resembles OC (3.0 ± 0.5; 4-point Likert scale), can be visualized with medical imaging, and can be evaluated macroscopically immediately after resection, providing feedback. Although participants (3.2 ± 0.4) tended to agree that they had resected the pseudotumor with an ideal safety margin (10 mm), the mean minimum resection margin was insufficient at 4.2 ± 1.2 mm (mean ± SD), comparable to margins reported in the literature. Simultaneously, a maximum resection margin of 18.4 ± 6.1 mm was measured, indicating partial over-resection. Although specialists were faster at resection (p < 0.001), this had no effect on margins (p = 0.114). Overall, the model was well received by the participants, who could see it being implemented in training (3.7 ± 0.5). CONCLUSION: The model, which is cost-effective, cryopreservable, and provides a risk-free training environment, is ideal for training in OC biopsy and resection and could be incorporated into dental, medical, or oncologic surgery curricula. Future studies should evaluate the long-term training effects of this model and its potential impact on improving patient outcomes.


Subjects
Margins of Excision; Mouth Neoplasms; Animals; Humans; Biopsy; Cadaver; Head; Mouth Neoplasms/surgery; Mouth Neoplasms/pathology; Swine
5.
Article in English | MEDLINE | ID: mdl-38526613

ABSTRACT

PURPOSE: Efficient and precise surgical skills are essential in ensuring positive patient outcomes. By continuously providing real-time, data-driven, and objective evaluation of surgical performance, automated skill assessment has the potential to greatly improve surgical skill training. Whereas machine learning-based surgical skill assessment is gaining traction for minimally invasive techniques, the same cannot be said for open surgery skills. Open surgery generally has more degrees of freedom than minimally invasive surgery, making it more difficult to interpret. In this paper, we present novel approaches to skill assessment for open surgery. METHODS: We analyzed a novel video dataset for open suturing training. We provide a detailed analysis of the dataset and define evaluation guidelines using state-of-the-art deep learning models. Furthermore, we present novel benchmarking results for surgical skill assessment in open suturing. The models are trained to classify a video into three skill levels based on the global rating score. To obtain initial results for video-based surgical skill classification, we benchmarked a temporal segment network with both an I3D and a Video Swin backbone on this dataset. RESULTS: The dataset is composed of 314 videos of approximately five minutes each. Model benchmarking results are an accuracy and F1 score of up to 75% and 72%, respectively. This is similar to the performance achieved by the individual raters, regarding inter-rater agreement and rater variability. We present the first end-to-end trained approach for skill assessment for open surgery training. CONCLUSION: We provide a thorough analysis of a new dataset as well as novel benchmarking results for surgical skill assessment. This opens the door to new advances in skill assessment by enabling video-based skill assessment for classic surgical techniques, with the potential to improve the surgical outcomes of patients.
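A temporal segment network classifies a long video by scoring a handful of sampled snippets and averaging their predictions into a consensus. A minimal sketch of that scheme with an interchangeable backbone; the dummy backbone and all dimensions below are assumptions standing in for an I3D or Video Swin feature extractor:

```python
import torch
import torch.nn as nn

class TSNSkillClassifier(nn.Module):
    """Temporal-segment-style classifier: score K snippets, average the logits."""
    def __init__(self, backbone: nn.Module, feat_dim: int = 1024, n_classes: int = 3):
        super().__init__()
        self.backbone = backbone              # e.g. an I3D or Video Swin feature extractor
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, snippets: torch.Tensor) -> torch.Tensor:
        b, k = snippets.shape[:2]             # snippets: (B, K, C, T, H, W)
        feats = self.backbone(snippets.flatten(0, 1))
        logits = self.head(feats).view(b, k, -1)
        return logits.mean(dim=1)             # consensus over the K segments

dummy_backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(1024))
model = TSNSkillClassifier(dummy_backbone)
out = model(torch.randn(2, 4, 3, 8, 32, 32))  # 2 videos, 4 snippets each
print(out.shape)  # torch.Size([2, 3]) -> three skill levels
```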

6.
Sci Data; 10(1): 796, 2023 Nov 11.
Article in English | MEDLINE | ID: mdl-37951957

ABSTRACT

The availability of computational hardware and developments in (medical) machine learning (MML) increase the clinical usability of medical mixed realities (MMR). Medical instruments have played a vital role in surgery for ages. To further accelerate the implementation of MML and MMR, three-dimensional (3D) datasets of instruments should be publicly available. The proposed data collection consists of 103 3D-scanned medical instruments from the clinical routine, scanned with structured light scanners. The collection includes, for example, retractors, forceps, and clamps. It can be augmented by generating similar models using 3D software, resulting in an enlarged dataset for analysis. The collection can be used for general instrument detection and tracking in operating room settings, for freeform marker-less instrument registration for tool tracking in augmented reality, for medical simulation or training scenarios in virtual reality, and for medical diminishing reality in mixed reality. We hope to ease research in the field of MMR and MML, but also to motivate the release of a wider variety of needed surgical instrument datasets.
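Enlarging such a collection by generating similar models can be approximated with simple mesh transformations. A sketch using the trimesh library; the file name is hypothetical, and the dataset's actual augmentation tooling is not specified in the abstract:

```python
import numpy as np
import trimesh

def augment_instrument(path: str, n_variants: int = 5) -> list:
    """Generate slightly varied copies of a scanned instrument mesh."""
    mesh = trimesh.load(path)
    rng = np.random.default_rng(42)
    variants = []
    for _ in range(n_variants):
        m = mesh.copy()
        angle = rng.uniform(0.0, 2.0 * np.pi)
        m.apply_transform(trimesh.transformations.rotation_matrix(angle, [0, 0, 1]))
        m.apply_scale(rng.uniform(0.95, 1.05))  # small plausible size variation
        variants.append(m)
    return variants

# variants = augment_instrument("retractor_scan.stl")  # hypothetical file name
```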


Subjects
Imaging, Three-Dimensional; Surgical Instruments; Virtual Reality; Computer Simulation; Software
7.
BMC Med Imaging; 23(1): 174, 2023 Oct 31.
Article in English | MEDLINE | ID: mdl-37907876

ABSTRACT

BACKGROUND: With the rise in importance of personalized medicine and deep learning, we combine the two to create personalized neural networks. The aim of the study is to provide a proof of concept that data from just one patient can be used to train deep neural networks to detect tumor progression in longitudinal datasets. METHODS: Two datasets with 64 scans from 32 patients with glioblastoma multiforme (GBM) were evaluated in this study. The contrast-enhanced T1w sequences of brain magnetic resonance imaging (MRI) images were used. We trained a neural network for each patient using just two scans from different timepoints to map the difference between the images. The change in tumor volume can be calculated from this map. The neural networks were a form of Wasserstein GAN (generative adversarial network), an unsupervised learning architecture. The combination of data augmentation and the network architecture allowed us to skip the co-registration of the images. Furthermore, no additional training data, pre-training of the networks, or any (manual) annotations are necessary. RESULTS: The model achieved an AUC score of 0.87 for tumor change. We also introduced modified RANO criteria, for which an accuracy of 66% can be achieved. CONCLUSIONS: We show a novel approach to deep learning, using data from just one patient to train deep neural networks to monitor tumor change. Using two different datasets to evaluate the results shows the potential of the method to generalize.
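For reference, a Wasserstein GAN trains a critic to estimate the Wasserstein distance between real and generated samples and a generator to minimize it. A generic sketch of the two objectives; the paper's specific difference-map architecture, as well as regularization such as weight clipping or a gradient penalty, is omitted:

```python
import torch
import torch.nn as nn

def wgan_step(critic: nn.Module, generator: nn.Module,
              real: torch.Tensor, z: torch.Tensor):
    """Standard WGAN objectives; both losses are to be minimized."""
    fake = generator(z)
    loss_critic = critic(fake.detach()).mean() - critic(real).mean()
    loss_generator = -critic(fake).mean()
    return loss_critic, loss_generator

# Toy modules standing in for the image-to-image networks of the paper
critic = nn.Sequential(nn.Linear(16, 1))
generator = nn.Sequential(nn.Linear(8, 16))
print(wgan_step(critic, generator, torch.randn(4, 16), torch.randn(4, 8)))
```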


Subjects
Glioblastoma; Neural Networks, Computer; Humans; Magnetic Resonance Imaging; Brain; Glioblastoma/diagnostic imaging; Image Processing, Computer-Assisted/methods
8.
JCO Clin Cancer Inform; 7: e2300038, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37527475

ABSTRACT

PURPOSE: Quantifying treatment response to gastroesophageal junction (GEJ) adenocarcinomas is crucial to providing an optimal therapeutic strategy. Routinely taken tissue samples provide an opportunity to enhance existing positron emission tomography-computed tomography (PET/CT)-based therapy response evaluation. Our objective was to investigate whether deep learning (DL) algorithms are capable of predicting the therapy response of patients with GEJ adenocarcinoma to neoadjuvant chemotherapy on the basis of histologic tissue samples. METHODS: This diagnostic study recruited 67 patients with stage I-III GEJ adenocarcinoma from the multicentric nonrandomized MEMORI trial, including three German university hospitals: TUM (University Hospital Rechts der Isar, Munich), LMU (Hospital of the Ludwig-Maximilians-University, Munich), and UME (University Hospital Essen, Essen). All patients underwent baseline PET/CT scans and esophageal biopsy before and 14-21 days after treatment initiation. Treatment response was defined as a ≥35% decrease in SUVmax from baseline. Several DL algorithms were developed to predict PET/CT-based responders and nonresponders to neoadjuvant chemotherapy using digitized histopathologic whole slide images (WSIs). RESULTS: The resulting models were trained on TUM patients (n = 25 pretherapy, n = 47 on-therapy) and evaluated on our internal validation cohort from LMU and UME (n = 17 pretherapy, n = 15 on-therapy). Compared with multiple architectures, the best pretherapy network achieves an area under the receiver operating characteristic curve (AUROC) of 0.81 (95% CI, 0.61 to 1.00), an area under the precision-recall curve (AUPRC) of 0.82 (95% CI, 0.61 to 1.00), a balanced accuracy of 0.78 (95% CI, 0.60 to 0.94), and a Matthews correlation coefficient (MCC) of 0.55 (95% CI, 0.18 to 0.88). The best on-therapy network achieves an AUROC of 0.84 (95% CI, 0.64 to 1.00), an AUPRC of 0.82 (95% CI, 0.56 to 1.00), a balanced accuracy of 0.80 (95% CI, 0.65 to 1.00), and an MCC of 0.71 (95% CI, 0.38 to 1.00). CONCLUSION: Our results show that DL algorithms can predict treatment response to neoadjuvant chemotherapy using WSIs with high accuracy even before therapy initiation, suggesting the presence of predictive morphologic tissue biomarkers.
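The response label underlying the whole study is a simple ratio on SUVmax, exactly as defined above (a ≥35% decrease from baseline). A direct encoding of that definition:

```python
def is_responder(suvmax_baseline: float, suvmax_followup: float,
                 threshold: float = 0.35) -> bool:
    """Treatment response as defined in the study: >=35% decrease in SUVmax."""
    decrease = (suvmax_baseline - suvmax_followup) / suvmax_baseline
    return decrease >= threshold

print(is_responder(12.0, 7.0))  # True: ~41.7% decrease
print(is_responder(12.0, 9.0))  # False: 25% decrease
```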


Subjects
Adenocarcinoma; Deep Learning; Humans; Neoadjuvant Therapy; Positron Emission Tomography Computed Tomography; Adenocarcinoma/pathology; Esophagogastric Junction/pathology
9.
Comput Med Imaging Graph; 107: 102238, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37207396

ABSTRACT

The segmentation of histopathological whole slide images into tumourous and non-tumourous types of tissue is a challenging task that requires the consideration of both local and global spatial contexts to classify tumourous regions precisely. The identification of subtypes of tumour tissue complicates the issue, as the sharpness of separation decreases and the pathologist's reasoning is even more guided by spatial context. However, the identification of detailed tissue types is crucial for providing personalized cancer therapies. Due to the high resolution of whole slide images, existing semantic segmentation methods, restricted to isolated image sections, are incapable of processing context information beyond them. To take a step towards better context comprehension, we propose a patch neighbour attention mechanism that queries the neighbouring tissue context from a patch embedding memory bank and infuses context embeddings into bottleneck hidden feature maps. Our memory attention framework (MAF) mimics a pathologist's annotation procedure: zooming out and considering surrounding tissue context. The framework can be integrated into any encoder-decoder segmentation method. We evaluate the MAF on two public breast cancer and liver cancer data sets and an internal kidney cancer data set using well-established segmentation models (U-Net, DeeplabV3) and demonstrate its superiority over other context-integrating algorithms, achieving a substantial improvement of up to 17% in Dice score. The code is publicly available at https://github.com/tio-ikim/valuing-vicinity.
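Conceptually, the patch neighbour attention queries a memory bank of surrounding patch embeddings and adds the retrieved context back onto the bottleneck features. A minimal cross-attention sketch of that mechanism; dimensions and the residual form are assumptions, and the authors' actual code is at the linked repository:

```python
import torch
import torch.nn as nn

class NeighbourAttention(nn.Module):
    """Sketch: infuse neighbouring-patch context into bottleneck features."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, bottleneck: torch.Tensor, neighbour_bank: torch.Tensor):
        # bottleneck:     (B, H*W, dim) flattened feature map of the current patch
        # neighbour_bank: (B, N, dim) embeddings of surrounding patches
        context, _ = self.attn(query=bottleneck, key=neighbour_bank, value=neighbour_bank)
        return bottleneck + context  # residual infusion of spatial context

maf = NeighbourAttention()
out = maf(torch.randn(2, 64, 256), torch.randn(2, 8, 256))
print(out.shape)  # torch.Size([2, 64, 256])
```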


Subjects
Kidney Neoplasms; Liver Neoplasms; Humans; Semantics; Algorithms; Image Processing, Computer-Assisted
10.
Eur J Nucl Med Mol Imaging; 50(7): 2196-2209, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36859618

ABSTRACT

PURPOSE: The aim of this study was to systematically evaluate the effect of thresholding algorithms used in computer vision on the quantification of prostate-specific membrane antigen positron emission tomography (PET)-derived tumor volume (PSMA-TV) in patients with advanced prostate cancer. The results were validated with respect to the prognostication of overall survival in patients with advanced-stage prostate cancer. MATERIALS AND METHODS: A total of 78 patients who underwent [177Lu]Lu-PSMA-617 radionuclide therapy from January 2018 to December 2020 were retrospectively included in this study. [68Ga]Ga-PSMA-11 PET images, acquired prior to radionuclide therapy, were used for the analysis of thresholding algorithms. All PET images were first analyzed semi-automatically using a pre-evaluated, proprietary software solution as the baseline method. Subsequently, five histogram-based thresholding methods and two local adaptive thresholding methods that are well established in computer vision were applied to quantify molecular tumor volume. The resulting whole-body molecular tumor volumes were validated with respect to the prognostication of overall patient survival, their statistical correlation with the baseline method, and their performance on standardized phantom scans. RESULTS: The whole-body PSMA-TVs, quantified using the different thresholding methods, demonstrate a high positive correlation with the baseline method. We observed the highest correlation with generalized histogram thresholding (GHT) (Pearson's r = 0.977, p < 0.001) and Sauvola thresholding (r = 0.974, p < 0.001) and the lowest correlation with the Multiotsu (r = 0.877, p < 0.001) and Yen (r = 0.878, p < 0.001) thresholding methods. The median survival time of all patients was 9.87 months (95% CI, 9.3 to 10.13). Stratification by median whole-body PSMA-TV resulted in a median survival time of 11.8 to 13.5 months for the patient group with lower tumor burden and 6.5 to 6.6 months for the patient group with higher tumor burden. The patient group with lower tumor burden had a significantly higher probability of survival (p < 0.00625) for eight out of nine thresholding methods (Fig. 2); those methods were SUVmax50 (p = 0.0038), SUV ≥3 (p = 0.0034), Multiotsu (p = 0.0015), Yen (p = 0.0015), Niblack (p = 0.001), Sauvola (p = 0.0001), Otsu (p = 0.0053), and Li thresholding (p = 0.0053). CONCLUSION: Thresholding methods commonly used in computer vision are promising tools for the semiautomatic quantification of whole-body PSMA-TV in [68Ga]Ga-PSMA-11 PET. The proposed algorithm-driven thresholding strategy is less arbitrary and less prone to bias than thresholding with predefined values, potentially improving the application of whole-body PSMA-TV as an imaging biomarker.
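Several of the evaluated histogram-based thresholds are available directly in scikit-image, so the core of such a pipeline is short. A sketch computing PSMA-TV under a few global schemes; the local adaptive methods (Niblack, Sauvola) operate on the full volume and are omitted here, and the VOI masking and voxel size below are assumptions:

```python
import numpy as np
from skimage.filters import (threshold_li, threshold_multiotsu,
                             threshold_otsu, threshold_yen)

def psma_tv_ml(suv: np.ndarray, voi: np.ndarray, voxel_ml: float) -> dict:
    """Whole-body tumor volume (ml) under different thresholding schemes."""
    vals = suv[voi]  # restrict to a volume of interest to stabilize histograms
    voxels = {
        "otsu": np.sum(vals >= threshold_otsu(vals)),
        "yen": np.sum(vals >= threshold_yen(vals)),
        "li": np.sum(vals >= threshold_li(vals)),
        "multiotsu": np.sum(vals >= threshold_multiotsu(vals, classes=3)[-1]),
        "suv3": np.sum(vals >= 3.0),                    # fixed threshold SUV >= 3
        "suvmax50": np.sum(vals >= 0.5 * vals.max()),   # 50% of SUVmax
    }
    return {name: int(n) * voxel_ml for name, n in voxels.items()}

rng = np.random.default_rng(0)
suv = rng.gamma(2.0, 1.0, (64, 64, 64))                 # placeholder PET volume
print(psma_tv_ml(suv, suv > 0.5, voxel_ml=0.02))
```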


Subjects
Prostatic Neoplasms, Castration-Resistant; Prostatic Neoplasms; Humans; Male; Gallium Radioisotopes; Positron Emission Tomography Computed Tomography/methods; Positron-Emission Tomography; Prostate-Specific Antigen; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/radiotherapy; Prostatic Neoplasms/pathology; Prostatic Neoplasms, Castration-Resistant/pathology; Retrospective Studies; Tumor Burden
11.
Med Image Anal; 85: 102757, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36706637

ABSTRACT

The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main player in the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 through 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy including use case, technical methodology for registration and tracking, data sources, visualization, and validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, where AR-enhanced medical simulators emerge as a promising technology. While concerns about human-computer interactions, usability and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications, yet only few of them propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature, paving the way for novel, innovative directions and translation into the medical routine.


Subjects
Augmented Reality; Humans; Reproducibility of Results
12.
Diagnostics (Basel); 12(11), 2022 Nov 08.
Article in English | MEDLINE | ID: mdl-36359576

ABSTRACT

Head and neck cancer has great regional anatomical complexity, as it can develop in different structures, exhibiting diverse tumour manifestations and high intratumoural heterogeneity, which is highly related to treatment resistance, progression, the appearance of metastases, and tumour recurrences. Radiomics has the potential to address these obstacles by extracting quantitative, measurable features from the region of interest in medical images. Medical imaging is a common source of information in clinical practice and presents a potential alternative to biopsy, as it allows the extraction of a large number of features that, although not visible to the naked eye, may be relevant for tumour characterisation. Taking advantage of machine learning techniques, the set of extracted features, when associated with biological parameters, can be used for diagnosis, prognosis, and predictions valuable for clinical decision-making. Therefore, the main goal of this contribution was to determine to what extent features extracted from Computed Tomography (CT) are related to cancer prognosis, namely Locoregional Recurrences (LRs), the development of Distant Metastases (DMs), and Overall Survival (OS). Using machine learning techniques, predictive models were developed from the set of tumour characteristics. The tumour was described by radiomic features extracted from images and by the clinical data of the patient. The performance of the models demonstrated that the most successful algorithm was XGBoost, and the inclusion of the patients' clinical data was an asset for cancer prognosis. Under these conditions, models were created that can reliably predict LR, DM, and OS status, with areas under the ROC curve (AUC) of 0.74, 0.84, and 0.91, respectively. In summary, the promising results obtained show the potential of radiomics, since the cancer prognoses considered can, in fact, be expressed through CT scans.
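The modelling step described here, radiomic plus clinical features fed to a gradient-boosted classifier and evaluated by AUC, can be sketched as follows; all data are random placeholders and the feature composition is an assumption based on the abstract:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = np.hstack([rng.normal(size=(200, 100)),                   # radiomic features from CT
               rng.integers(0, 2, (200, 5)).astype(float)])   # clinical covariates
y = rng.integers(0, 2, 200)                                   # endpoint, e.g. distant metastasis

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3)
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```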

13.
Eur Radiol; 32(12): 8769-8776, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35788757

ABSTRACT

OBJECTIVES: Over the course of their treatment, patients often switch hospitals, requiring staff at the new hospital to import external imaging studies into their local database. In this study, the authors present MOdality Mapping and Orchestration (MOMO), a deep learning-based approach to automate this mapping process by combining metadata analysis and a neural network ensemble. METHODS: A set of 11,934 imaging series with existing anatomical labels was retrieved from the PACS database of the local hospital to train an ensemble of neural networks (DenseNet-161 and ResNet-152), which process radiological images and predict the type of study they belong to. We developed an algorithm that automatically extracts relevant metadata from imaging studies, regardless of their structure, and combines it with the neural network ensemble, forming a powerful classifier. A set of 843 anonymized external studies from 321 hospitals was hand-labeled to assess performance. We tested several variations of this algorithm. RESULTS: MOMO achieves 92.71% accuracy and 2.63% minor errors (at 99.29% predictive power) on the external study classification task, outperforming both a commercial product (82.86% accuracy, 1.36% minor errors, 96.20% predictive power) and a pure neural network ensemble (72.69% accuracy, 10.3% minor errors, 99.05% predictive power) performing the same task. We find that the highest performance is achieved by an algorithm that combines all information into one vote-based classifier. CONCLUSION: Deep learning combined with metadata matching is a promising and flexible approach for the automated classification of external DICOM studies for PACS archiving. KEY POINTS: • The algorithm can successfully identify 76 medical study types across seven modalities (CT, X-ray angiography, radiographs, MRI, PET (+CT/MRI), ultrasound, and mammograms). • The algorithm outperforms a commercial product performing the same task by a significant margin (> 9% accuracy gain). • The performance of the algorithm increases through the application of deep learning techniques.
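The final design, all information combined into one vote-based classifier, suggests pooling the metadata-derived prediction with the per-series network predictions and taking a weighted majority. A toy sketch of such a vote; the weighting scheme is an assumption, since the abstract does not specify it:

```python
from collections import Counter
from typing import List, Optional

def classify_study(metadata_pred: Optional[str], network_preds: List[str]) -> str:
    """Pool a metadata-based prediction with per-series network predictions."""
    votes = Counter(network_preds)
    if metadata_pred is not None:
        votes[metadata_pred] += len(network_preds)  # assumed: metadata outweighs the ensemble
    return votes.most_common(1)[0][0]

print(classify_study("CT Abdomen", ["CT Abdomen", "CT Thorax", "CT Abdomen"]))
# -> 'CT Abdomen'
```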


Subjects
Deep Learning; Humans; Neural Networks, Computer; Algorithms; Databases, Factual; Magnetic Resonance Imaging/methods
14.
Phys Med Biol; 67(17), 2022 Aug 18.
Article in English | MEDLINE | ID: mdl-35878613

ABSTRACT

Head and neck surgery is a delicate surgical discipline involving a complex anatomical space, difficult operations and high risk. Medical image computing (MIC), which enables accurate and reliable preoperative planning, is often needed to reduce the operational difficulty of surgery and to improve patient survival. At present, artificial intelligence, especially deep learning, has become an intense focus of research in MIC. In this study, the application of deep learning-based MIC in head and neck surgery is reviewed. Relevant literature was retrieved from the Web of Science database from January 2015 to May 2022, and papers were selected for review from mainstream journals and conferences, such as IEEE Transactions on Medical Imaging, Medical Image Analysis, Physics in Medicine and Biology, Medical Physics and MICCAI. Among them, 65 references concern automatic segmentation, 15 automatic landmark detection, and eight automatic registration. In the review, first, an overview of deep learning in MIC is presented. Then, the application of deep learning methods is systematically summarized according to clinical needs and grouped into segmentation, landmark detection and registration of head and neck medical images. In segmentation, the focus is mainly on the automatic segmentation of high-risk organs, head and neck tumors, skull structures and teeth, including analysis of their advantages, differences and shortcomings. In landmark detection, the focus is mainly on landmark detection in cephalometric and craniomaxillofacial images, with analysis of advantages and disadvantages. In registration, deep learning networks for multimodal image registration of the head and neck are presented. Finally, shortcomings and future development directions are systematically discussed. The study aims to serve as a reference and guide for researchers, engineers and doctors engaged in medical image analysis for head and neck surgery.


Subjects
Head and Neck Neoplasms; Image Processing, Computer-Assisted; Artificial Intelligence; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/surgery; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Tomography, X-Ray Computed
15.
Cancers (Basel); 13(16), 2021 Aug 04.
Article in English | MEDLINE | ID: mdl-34439092

ABSTRACT

OBJECTIVE: This study aimed to investigate the effect of certain pre-operative parameters on post-operative intensive care unit (ICU) length of stay (LOS), in order to identify at-risk patients who are expected to need prolonged intensive care management post-operatively. MATERIAL AND METHODS: Patients managed in an ICU after undergoing major oral and maxillofacial surgery were retrospectively analyzed. Inclusion criteria were: age 18-90 years, major primary oral cancer surgery including tumor resection, neck dissection and microvascular free flap reconstruction, and a minimum operation time of 8 h. Exclusion criteria were: benign/borderline tumors, primary radiation, defect reconstruction other than microvascular, and treatment at other centers. Individual parameters used in clinical routine were correlated with ICU-LOS by applying single-test calculations (t-tests, analysis of variance, correlation coefficients, effect sizes) and a univariate linear regression model. The primary outcome of interest was ICU-LOS. RESULTS: This study included a homogeneous cohort of 122 patients. Mean surgery time was 11.4 (±2.2) h; mean ICU-LOS was 3.6 (±2.6) days. Patients with pre-operative renal dysfunction (p < 0.001), peripheral vascular disease (PVD, p = 0.01), higher heart failure (NYHA) stage categories (p = 0.009) and higher-grade categories of post-operative complications (p = 0.023) were identified as at-risk patients for a significantly prolonged post-operative ICU-LOS. CONCLUSIONS: At-risk patients are prone to need a significantly longer ICU-LOS than others. These are patients with severe pre-operative renal dysfunction, PVD and/or high NYHA stage categories. Confounding parameters that contribute to a prolonged ICU-LOS in combination with other variables were identified as higher age, prolonged operative time, chronic obstructive pulmonary disease, and intra-operatively transfused blood.
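The statistical workhorse here is a univariate linear regression of ICU-LOS on a single pre-operative parameter. A sketch with statsmodels; the data are synthetic placeholders merely mimicking the cohort size of 122:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
nyha_stage = rng.integers(1, 5, 122).astype(float)          # hypothetical NYHA categories
icu_los = 2.0 + 0.6 * nyha_stage + rng.normal(0, 1.5, 122)  # synthetic ICU-LOS in days

X = sm.add_constant(nyha_stage)          # univariate model: ICU-LOS ~ NYHA stage
fit = sm.OLS(icu_los, X).fit()
print(fit.params, fit.pvalues)           # slope and its significance
```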

16.
Sci Data; 8(1): 36, 2021 Jan 29.
Article in English | MEDLINE | ID: mdl-33514740

ABSTRACT

Patient-specific craniofacial implants are used to repair skull bone defects after trauma or surgery. Currently, cranial implants are designed and produced by third-party suppliers, which is usually time-consuming and expensive. Recent advances in additive manufacturing have made the in-hospital or in-operating-room fabrication of personalized implants feasible. However, the implants are still manufactured by external companies. To facilitate an optimized workflow, fast and automatic implant manufacturing is highly desirable. Data-driven approaches, such as deep learning, currently show great potential towards automatic implant design. However, a considerable amount of data is needed to train such algorithms, which is often a bottleneck, especially in the medical domain. Therefore, we present CT imaging data of the craniofacial complex from 24 patients, into which we injected various artificial cranial defects, resulting in 240 data pairs and 240 corresponding implants. Based on this work, automatic implant design and manufacturing processes can be trained. Additionally, the data from this work build a solid basis for researchers working on automatic cranial implant design.
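Injecting an artificial defect into a binary skull mask immediately yields a training pair: the defective skull as network input and the removed bone as the target implant. A minimal sketch of that construction; the actual defect shapes and injection procedure of the dataset may differ:

```python
import numpy as np

def inject_cubic_defect(skull: np.ndarray, center: tuple, size: int):
    """Create a (defective skull, implant) pair by zeroing a cubic region."""
    z, y, x = center
    h = size // 2
    region = (slice(z - h, z + h), slice(y - h, y + h), slice(x - h, x + h))
    defective = skull.copy()
    defective[region] = False
    implant = skull & ~defective          # ground-truth implant = removed bone
    return defective, implant

skull = np.zeros((128, 128, 128), dtype=bool)
skull[40:90, 30:100, 30:100] = True       # toy stand-in for a segmented skull
defective, implant = inject_cubic_defect(skull, (60, 60, 60), 24)
print(int(implant.sum()), "voxels removed")
```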


Subjects
Prostheses and Implants; Prosthesis Design; Skull/anatomy & histology; Skull/pathology; Algorithms; Computer-Aided Design; Humans; Imaging, Three-Dimensional; Skull/diagnostic imaging; Tomography, X-Ray Computed
17.
Comput Methods Programs Biomed; 200: 105854, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33261944

ABSTRACT

BACKGROUND AND OBJECTIVE: Augmented reality (AR) can help to overcome current limitations in computer-assisted head and neck surgery by granting "X-ray vision" to physicians. Still, the acceptance of AR in clinical applications is limited by technical and clinical challenges. We aim to demonstrate the benefit of a marker-free, instant-calibration AR system for head and neck cancer imaging, which we hypothesize to be acceptable and practical for clinical use. METHODS: We implemented a novel AR system for visualization of medical image data registered with the head or face of the patient prior to intervention. Our system allows the localization of head and neck carcinoma in relation to the outer anatomy. It does not require markers or stationary infrastructure, provides instant calibration, and allows 2D and 3D multimodal visualization for head and neck surgery planning via an AR head-mounted display. We evaluated the system in a pre-clinical user study with eleven medical experts. RESULTS: Medical experts rated our application with a System Usability Scale score of 74.8 ± 15.9, which signifies above-average, good usability and clinical acceptance. An average of 12.7 ± 6.6 minutes of training time was needed by physicians before they were able to navigate the application without assistance. CONCLUSIONS: Our AR system is characterized by a slim and easy setup, short training time and high usability and acceptance. It therefore presents a promising, novel tool for visualizing head and neck cancer imaging and pre-surgical localization of target structures.
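For context, the usability figure reported above uses the standard System Usability Scale: ten items rated 1-5, odd-numbered items scored as (rating - 1), even-numbered items as (5 - rating), and the sum scaled by 2.5. A direct encoding of that standard formula; the example responses are invented:

```python
from typing import List

def sus_score(responses: List[int]) -> float:
    """Standard System Usability Scale score from 10 items rated 1-5."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,...: r-1; items 2,4,...: 5-r
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # -> 80.0
```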


Subjects
Augmented Reality; Head and Neck Neoplasms; Surgery, Computer-Assisted; Calibration; Feasibility Studies; Head and Neck Neoplasms/diagnostic imaging; Humans; Imaging, Three-Dimensional
18.
Expert Rev Med Devices; 18(1): 47-62, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33283563

ABSTRACT

Background: Research indicates that the apprenticeship model, which is the gold standard for training surgical residents, is obsolete. For that reason, there is a continuing effort toward the development of high-fidelity surgical simulators to replace the apprenticeship model. Applying Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) in surgical simulators increases their fidelity, level of immersion and overall experience. Areas covered: The objective of this review is to provide a comprehensive overview of the application of VR, AR and MR in distinct surgical disciplines, including maxillofacial surgery and neurosurgery. The current developments in these areas, as well as potential future directions, are discussed. Expert opinion: The key components for incorporating VR into surgical simulators are visual and haptic rendering. These components ensure that the user is completely immersed in the virtual environment and can interact in the same way as in the physical world. The key components for the application of AR and MR in surgical simulators include the tracking system as well as the visual rendering. The advantages of these surgical simulators are the ability to perform user evaluations and to increase the training frequency of surgical residents.


Subjects
Augmented Reality; Surgery, Computer-Assisted; Virtual Reality; Humans; Surgical Procedures, Operative; Touch Perception; Visual Perception
19.
Sci Data; 6(1): 310, 2019 Dec 09.
Article in English | MEDLINE | ID: mdl-31819060

ABSTRACT

Medical augmented reality (AR) is an increasingly important topic in many medical fields. AR enables "x-ray vision", seeing through real-world objects. In medicine, this offers pre-, intra- or post-interventional visualization of "hidden" structures. In contrast to a classical monitor view, AR applications provide visualization not only on, but also in relation to, the patient. However, research and development of medical AR applications are challenging because of unique patient-specific anatomies and pathologies. Working with several patients during development for weeks or even months is not feasible. One alternative is commercial patient phantoms, which are very expensive. Hence, this data set provides a unique collection of head and neck cancer patient PET-CT scans with corresponding 3D models, provided as stereolithography (STL) files. The 3D models are optimized for effective 3D printing at low cost. This data can be used in the development and evaluation of AR applications for head and neck surgery.
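Before printing such a model, a quick sanity check of the STL (watertightness, physical size) helps avoid failed prints. A sketch with the trimesh library; the file name is hypothetical, and this generic check is not part of the dataset's own pipeline:

```python
import trimesh

def check_printability(path: str) -> trimesh.Trimesh:
    """Load an STL and run basic checks before low-cost 3D printing."""
    mesh = trimesh.load(path)
    print("watertight:", mesh.is_watertight)
    print("bounding box:", mesh.extents)  # physical extents in the file's units
    if not mesh.is_watertight:
        mesh.fill_holes()                 # attempt a simple repair
    return mesh

# mesh = check_printability("patient_head_model.stl")  # hypothetical file name
```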


Subjects
Augmented Reality; Face/anatomy & histology; Models, Anatomic; Surgery, Oral; Head and Neck Neoplasms; Humans; Phantoms, Imaging; Positron Emission Tomography Computed Tomography; Printing, Three-Dimensional
20.
Comput Methods Programs Biomed; 182: 105102, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31610359

ABSTRACT

BACKGROUND AND OBJECTIVES: Computer-assisted technologies, such as image-based segmentation, play an important role in diagnosis and treatment support in cranio-maxillofacial surgery. However, although many segmentation software packages exist, their clinical in-house use is often challenging due to constrained technical, human or financial resources. In particular, technological solutions and systematic evaluations of open-source-based segmentation approaches are lacking. The aim of this contribution is to assess and review the segmentation quality and potential clinical use of multiple commonly available, license-free segmentation methods on different medical platforms. METHODS: The quality and accuracy of open-source segmentation methods were assessed on different platforms using patient-specific clinical CT data and reviewed against the literature. The image-based segmentation algorithms GrowCut, Robust Statistics Segmenter, Region Growing 3D, Otsu & Picking, Canny Segmentation and Geodesic Segmenter were investigated on the mandible using the platforms 3D Slicer, MITK and MeVisLab. Comparisons were made between the segmentation algorithms and ground truth segmentations of the same anatomy performed by two clinical experts (n = 20). Assessment parameters were the Dice Score Coefficient (DSC), the Hausdorff Distance (HD), and Pearson's correlation coefficient (r). RESULTS: Segmentation accuracy was highest with the GrowCut (DSC 85.6%, HD 33.5 voxel) and Canny (DSC 82.1%, HD 8.5 voxel) algorithms. Statistical differences between the assessment parameters were not significant (p > 0.05), and correlation coefficients were close to one (r > 0.94) for all comparisons made between the segmentation methods and the ground truth. Functionally stable and time-saving segmentations were observed. CONCLUSION: High-quality image-based semi-automatic segmentation was provided by the GrowCut and Canny segmentation methods. In the cranio-maxillofacial complex, these segmentation methods provide algorithmic alternatives for image-based segmentation in clinical practice, e.g. for surgical planning or visualization of treatment results, and offer advantages through their open-source availability. This is the first systematic multi-platform comparison that evaluates multiple license-free, open-source segmentation methods based on clinical data for the improvement of algorithms and potential clinical use in patient-individualized medicine. The results presented are reproducible by others and can be used for clinical and research purposes.
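The two headline metrics, the Dice Score Coefficient and the Hausdorff Distance, are easy to state precisely in code. A sketch for binary masks, shown in 2D for brevity; the same functions work on 3D volumes:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice Score Coefficient between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff Distance (in voxels) between two binary masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

seg = np.zeros((50, 50), dtype=bool)
seg[10:30, 10:30] = True
gt = np.zeros((50, 50), dtype=bool)
gt[12:32, 11:31] = True
print(f"DSC = {dice(seg, gt):.3f}, HD = {hausdorff(seg, gt):.1f} voxels")
```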


Subjects
Image Processing, Computer-Assisted/methods; Skull/diagnostic imaging; Skull/surgery; Surgery, Oral/methods; Algorithms; Automation; Software