Results 1 - 20 of 48

1.
Endoscopy ; 2024 May 02.
Article in English | MEDLINE | ID: mdl-38547927

ABSTRACT

BACKGROUND: This study evaluated the effect of an artificial intelligence (AI)-based clinical decision support system on the performance and diagnostic confidence of endoscopists in their assessment of Barrett's esophagus (BE). METHODS: 96 standardized endoscopy videos were assessed by 22 endoscopists with varying degrees of BE experience from 12 centers. Assessments were randomized into two video sets: group A (review first without AI and second with AI) and group B (review first with AI and second without AI). Endoscopists were required to evaluate each video for the presence of Barrett's esophagus-related neoplasia (BERN) and then decide on a spot for a targeted biopsy. After the second assessment, they were allowed to change their clinical decision and confidence level. RESULTS: AI had a stand-alone sensitivity, specificity, and accuracy of 92.2%, 68.9%, and 81.3%, respectively. Without AI, BE experts had an overall sensitivity, specificity, and accuracy of 83.3%, 58.1%, and 71.5%, respectively. BE nonexperts showed a significant improvement in sensitivity and specificity when videos were assessed a second time with AI (sensitivity 69.8% [95%CI 65.2%-74.2%] to 78.0% [95%CI 74.0%-82.0%]; specificity 67.3% [95%CI 62.5%-72.2%] to 72.7% [95%CI 68.2%-77.3%]). In addition, the diagnostic confidence of BE nonexperts improved significantly with AI. CONCLUSION: BE nonexperts benefited significantly from AI assistance. Both BE experts and nonexperts remained significantly below the stand-alone performance of the AI, suggesting that other factors may influence endoscopists' decisions to follow or discard AI advice.
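
For readers who want to reproduce the arithmetic behind such figures, a minimal sketch follows; the confusion-matrix counts are invented for illustration and are not taken from the study.

```python
# Illustrative only: the study reports rates, not raw counts, so these
# confusion-matrix counts (tp, fp, tn, fn) are invented for the sketch.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),                # true-positive rate
        "specificity": tn / (tn + fp),                # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

print(diagnostic_metrics(tp=47, fp=14, tn=31, fn=4))
```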

2.
Gastrointest Endosc ; 97(5): 911-916, 2023 05.
Article in English | MEDLINE | ID: mdl-36646146

ABSTRACT

BACKGROUND AND AIMS: Celiac disease, with its endoscopic manifestation of villous atrophy (VA), is underdiagnosed worldwide. The application of artificial intelligence (AI) for the macroscopic detection of VA at routine EGD may improve diagnostic performance. METHODS: A dataset of 858 endoscopic images of 182 patients with VA and 846 images from 323 patients with normal duodenal mucosa was collected and used to train a ResNet18 deep learning model to detect VA. The algorithm was tested on an external dataset, alongside 6 fellows and 4 board-certified gastroenterologists. Fellows could consult the AI algorithm's result during the test. Based on the distribution of these consultations, test images were stratified into "easy" and "difficult" and performance was measured separately for each stratum. RESULTS: External validation of the AI algorithm yielded values of 90%, 76%, and 84% for sensitivity, specificity, and accuracy, respectively. Fellows scored corresponding values of 63%, 72%, and 67%, and experts scored 72%, 69%, and 71%, respectively. AI consultation significantly improved all trainee performance statistics. Although fellows and experts showed significantly lower performance on difficult images, the performance of the AI algorithm remained stable. CONCLUSIONS: In this study, an AI algorithm outperformed endoscopy fellows and experts in the detection of VA on endoscopic still images. AI decision support significantly improved the performance of nonexpert endoscopists. The stable performance on difficult images suggests an additional benefit in challenging cases.
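
As a hedged sketch of the kind of model the study describes (a ResNet18 binary classifier), the following shows one plausible fine-tuning setup; the pretrained weights, hyperparameters, and training step are our assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision import models

# Sketch of a binary villous-atrophy classifier on a ResNet18 backbone.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # VA vs. normal mucosa

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random stand-in data (real endoscopic images not included).
dummy_images = torch.randn(4, 3, 224, 224)
dummy_labels = torch.randint(0, 2, (4,))
print(train_step(dummy_images, dummy_labels))
```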


Subject(s)
Artificial Intelligence , Deep Learning , Humans , Endoscopy, Gastrointestinal , Algorithms , Atrophy
3.
Gut ; 71(12): 2388-2390, 2022 12.
Article in English | MEDLINE | ID: mdl-36109151

ABSTRACT

In this study, we aimed to develop an artificial intelligence clinical decision support solution to mitigate operator-dependent risks, such as bleeding and perforation, during complex endoscopic procedures such as endoscopic submucosal dissection and peroral endoscopic myotomy. A DeepLabv3-based model was trained to delineate vessels, tissue structures and instruments on endoscopic still images from such procedures. The mean cross-validated Intersection over Union and Dice score were 63% and 76%, respectively. Applied to standardised video clips from third-space endoscopic procedures, the algorithm showed a mean vessel detection rate of 85% with a false-positive rate of 0.75/min. These performance statistics suggest a potential clinical benefit for procedure safety, procedure time and training.
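
The two overlap metrics reported above are standard; a minimal sketch of their computation on boolean masks follows, with invented example masks.

```python
import numpy as np

# Intersection over Union and Dice score for single-class boolean masks.
def iou_and_dice(pred: np.ndarray, target: np.ndarray) -> tuple[float, float]:
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)

pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), dtype=bool); gt[15:45, 15:45] = True
print(iou_and_dice(pred, gt))
```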


Subject(s)
Deep Learning , Endoscopic Mucosal Resection , Humans , Artificial Intelligence , Endoscopy, Gastrointestinal
4.
Endoscopy ; 53(9): 878-883, 2021 09.
Article in English | MEDLINE | ID: mdl-33197942

ABSTRACT

BACKGROUND: The accurate differentiation between T1a and T1b Barrett's-related cancer has both therapeutic and prognostic implications but is challenging even for experienced physicians. We trained an artificial intelligence (AI) system on the basis of deep artificial neural networks (deep learning) to differentiate between T1a and T1b Barrett's cancer on white-light images. METHODS: Endoscopic images from three tertiary care centers in Germany were collected retrospectively. A deep learning system was trained and tested using the principles of cross validation. A total of 230 white-light endoscopic images (108 T1a and 122 T1b) were evaluated using the AI system. For comparison, the images were also classified by experts specialized in endoscopic diagnosis and treatment of Barrett's cancer. RESULTS: The sensitivity, specificity, F1 score, and accuracy of the AI system in the differentiation between T1a and T1b cancer lesions were 0.77, 0.64, 0.74, and 0.71, respectively. There was no statistically significant difference between the performance of the AI system and that of the experts, who showed a sensitivity, specificity, F1 score, and accuracy of 0.63, 0.78, 0.67, and 0.70, respectively. CONCLUSION: This pilot study demonstrates the first multicenter application of an AI-based system for the prediction of submucosal invasion in endoscopic images of Barrett's cancer. The AI system scored on par with international experts in the field, but more work is necessary to improve the system and to apply it to video sequences and real-life settings. Nevertheless, the correct prediction of submucosal invasion in Barrett's cancer remains challenging for both experts and AI.
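
The cross-validation principle mentioned in the methods can be sketched as follows; the fold count, stratification, and placeholder data are our assumptions, not details from the paper.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Sketch of cross-validation on the 230-image cohort (108 T1a, 122 T1b).
labels = np.array([0] * 108 + [1] * 122)   # 0 = T1a, 1 = T1b
images = np.arange(len(labels))            # placeholder image indices

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(images, labels)):
    # Train on train_idx, evaluate on test_idx (model omitted in this sketch).
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")
```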


Subject(s)
Adenocarcinoma , Barrett Esophagus , Esophageal Neoplasms , Adenocarcinoma/diagnostic imaging , Artificial Intelligence , Barrett Esophagus/diagnostic imaging , Esophageal Neoplasms/diagnostic imaging , Esophagoscopy , Humans , Pilot Projects , Retrospective Studies
5.
Arch Gynecol Obstet ; 303(3): 721-728, 2021 03.
Article in English | MEDLINE | ID: mdl-33184690

ABSTRACT

PURPOSE: In this trial, we used a previously developed prototype software to assess aesthetic results after reconstructive surgery for congenital breast asymmetry using automated anthropometry. To assess agreement between manual and automatic digital measurements, we evaluated the software by comparing manual and automatic measurements of 46 breasts. METHODS: Twenty-three patients who underwent reconstructive surgery for congenital breast asymmetry at our institution were examined and underwent 3D surface imaging. For each patient, 14 manual and 14 computer-based anthropometric measurements were obtained according to a standardized protocol. Manual and automatic measurements, as well as the previously proposed Symmetry Index (SI), were compared. RESULTS: The Wilcoxon signed-rank test revealed no significant differences between the automatic and manual assessments in six of the seven measurements. The SI showed robust agreement between the automatic and manual methods. CONCLUSION: The present trial validates our method for digital anthropometry. Despite the discrepancy in one measurement, all remaining measurements, including the SI, showed high agreement between the manual and automatic methods. These data bring us one step closer to the long-term goal of establishing robust instruments for evaluating the results of breast surgery. LEVEL OF EVIDENCE: IV.
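
A hedged sketch of the paired Wilcoxon signed-rank comparison used here, with invented measurement values, might look as follows.

```python
from scipy.stats import wilcoxon

# Paired comparison of one manual vs. automatic measurement (values invented).
manual    = [19.2, 21.0, 18.7, 20.3, 22.1, 19.9]
automatic = [19.0, 21.3, 18.5, 20.6, 22.0, 20.1]

stat, p = wilcoxon(manual, automatic)
print(f"W={stat:.1f}, p={p:.3f}")  # p > 0.05 -> no significant difference
```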


Subject(s)
Breast/anatomy & histology , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Adult , Anthropometry/methods , Esthetics , Female , Humans , Mastectomy , Reproducibility of Results , Software
6.
Gut ; 2020 Oct 30.
Article in English | MEDLINE | ID: mdl-33127833

ABSTRACT

OBJECTIVE: Artificial intelligence (AI) may reduce the number of underdiagnosed or overlooked upper GI (UGI) neoplastic and preneoplastic conditions, which are easily missed owing to their subtle appearance and low disease prevalence. To date, only disease-specific AI performances have been reported, generating uncertainty about AI's clinical value. DESIGN: We searched PubMed, Embase and Scopus until July 2020 for studies on the diagnostic performance of AI in the detection and characterisation of UGI lesions. Primary outcomes were pooled diagnostic accuracy, sensitivity and specificity of AI. Secondary outcomes were pooled positive (PPV) and negative (NPV) predictive values. We calculated pooled proportion rates (%), constructed summary receiver operating characteristic curves with their respective areas under the curve (AUCs), and performed metaregression and sensitivity analysis. RESULTS: Overall, 19 studies on the detection of oesophageal squamous cell neoplasia (ESCN), Barrett's esophagus-related neoplasia (BERN) or gastric adenocarcinoma (GCA) were included, with 218, 445 and 453 patients and 7976, 2340 and 13 562 images, respectively. AI sensitivity/specificity/PPV/NPV/positive likelihood ratio/negative likelihood ratio for UGI neoplasia detection were 90% (CI 85% to 94%)/89% (CI 85% to 92%)/87% (CI 83% to 91%)/91% (CI 87% to 94%)/8.2 (CI 5.7 to 11.7)/0.111 (CI 0.071 to 0.175), respectively, with an overall AUC of 0.95 (CI 0.93 to 0.97). No difference in AI performance across ESCN, BERN and GCA was found, the AUCs being 0.94 (CI 0.52 to 0.99), 0.96 (CI 0.95 to 0.98) and 0.93 (CI 0.83 to 0.99), respectively. Overall, study quality was low, with a high risk of selection bias. No significant publication bias was found. CONCLUSION: We found a high overall AI accuracy for the diagnosis of any neoplastic lesion of the UGI tract that was independent of the underlying condition. This may be expected to substantially reduce the miss rate of precancerous lesions and early cancer when implemented in clinical practice.
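
The pooled likelihood ratios follow directly from the pooled sensitivity and specificity; a short sketch of that arithmetic:

```python
# LR+ = sens / (1 - spec); LR- = (1 - sens) / spec, using the pooled values.
sens, spec = 0.90, 0.89

lr_pos = sens / (1 - spec)   # ~8.2, matching the reported figure
lr_neg = (1 - sens) / spec   # ~0.11, matching the reported figure
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.3f}")
```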

7.
Aesthetic Plast Surg ; 44(6): 1980-1987, 2020 12.
Article in English | MEDLINE | ID: mdl-32405724

ABSTRACT

BACKGROUND: Breast reconstruction is an important coping tool for patients undergoing a mastectomy. There are numerous surgical techniques in breast reconstruction surgery (BRS). Regardless of the technique used, creating a symmetric outcome is crucial for patients and plastic surgeons. Three-dimensional surface imaging enables surgeons and patients to assess the symmetry of the outcome in BRS. To compare autologous and alloplastic techniques, we analyzed both using objective, computerized optical symmetry analysis. Software was developed that enables clinicians to assess optical breast symmetry using three-dimensional surface imaging. METHODS: Twenty-seven patients who had undergone autologous (n = 12) or alloplastic (n = 15) BRS received three-dimensional surface imaging. Anthropomorphic data were collected digitally using semiautomatic and automatic measurements. Automatic measurements were taken using the newly developed software. To quantify symmetry, a Symmetry Index is proposed. RESULTS: Statistical analysis revealed no difference in outcome symmetry between the two groups (t test for independent samples; p = 0.48, two-tailed). CONCLUSION: This study's findings provide a foundation for qualitative symmetry assessment in BRS using automated digital anthropometry. In the present trial, no difference in the optical symmetry of the outcomes was detected between autologous and alloplastic approaches. LEVEL OF EVIDENCE: IV.
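
A minimal sketch of the study's t test for independent samples follows, with invented symmetry scores for the two groups.

```python
from scipy.stats import ttest_ind

# Two-sample comparison of a symmetry score between groups (values invented).
autologous  = [0.91, 0.88, 0.93, 0.90, 0.87, 0.92]
alloplastic = [0.89, 0.92, 0.90, 0.88, 0.91, 0.93]

t, p = ttest_ind(autologous, alloplastic)
print(f"t={t:.2f}, p={p:.2f}")  # p > 0.05 -> symmetry comparable
```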


Subject(s)
Breast Neoplasms , Mammaplasty , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/surgery , Cohort Studies , Esthetics , Humans , Mastectomy , Retrospective Studies , Risk Assessment , Treatment Outcome
8.
Ophthalmology ; 125(9): 1410-1420, 2018 09.
Article in English | MEDLINE | ID: mdl-29653860

ABSTRACT

PURPOSE: Age-related macular degeneration (AMD) is a common threat to vision. Classification of disease stages is critical to understanding disease risk and progression, and several grading systems based on color fundus photographs exist, most of which require in-depth and time-consuming analysis of fundus images. Herein, we present an automated computer-based classification algorithm. DESIGN: Algorithm development for AMD classification based on a large collection of color fundus images, with validation performed on a cross-sectional, population-based study. PARTICIPANTS: We included 120 656 manually graded color fundus images from 3654 Age-Related Eye Disease Study (AREDS) participants. AREDS participants were >55 years of age, and non-AMD sight-threatening diseases were excluded at recruitment. In addition, the performance of our algorithm was evaluated on 5555 fundus images from the population-based Kooperative Gesundheitsforschung in der Region Augsburg (KORA; Cooperative Health Research in the Region of Augsburg) study. METHODS: We defined 13 classes (9 AREDS steps, 3 late AMD stages, and 1 for ungradable images) and trained several convolutional deep learning architectures. An ensemble of network architectures improved prediction accuracy. An independent dataset was used to evaluate the performance of our algorithm in a population-based study. MAIN OUTCOME MEASURES: κ statistics and accuracy to evaluate the concordance between predicted and expert human grader classification. RESULTS: A network ensemble of 6 different neural net architectures predicted the 13 classes in the AREDS test set with a quadratic weighted κ of 92% (95% confidence interval, 89%-92%) and an overall accuracy of 63.3%. In the independent KORA dataset, images wrongly classified as AMD were mainly the result of a macular reflex observed in young individuals. By restricting the KORA analysis to individuals >55 years of age and excluding other retinopathies beforehand, the weighted and unweighted κ increased to 50% and 63%, respectively. Importantly, the algorithm detected 84.2% of all fundus images with definite signs of early or late AMD. Overall, 94.3% of healthy fundus images were classified correctly. CONCLUSIONS: Our deep learning algorithm achieved a weighted κ outperforming human graders in the AREDS study and is suitable for classifying AMD fundus images in other datasets of individuals >55 years of age.
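
The agreement metric used above, the quadratic weighted κ, can be computed as sketched below; the grade vectors are invented for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Quadratic weighted kappa between human grades and model predictions on a
# 13-class ordinal scale (labels 0-12); example labels are invented.
human = [0, 1, 2, 2, 5, 9, 10, 12, 3, 4]
model = [0, 1, 2, 3, 5, 9, 10, 12, 3, 5]

kappa = cohen_kappa_score(human, model, weights="quadratic")
print(f"quadratic weighted kappa = {kappa:.2f}")
```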


Subject(s)
Algorithms , Deep Learning , Diagnostic Techniques, Ophthalmological , Macula Lutea/pathology , Macular Degeneration/diagnosis , Aged , Cross-Sectional Studies , Female , Fundus Oculi , Humans , Male , Middle Aged , Photography , Reproducibility of Results , Severity of Illness Index
9.
BMC Musculoskelet Disord ; 19(1): 52, 2018 02 13.
Article in English | MEDLINE | ID: mdl-29439687

ABSTRACT

BACKGROUND: Scaphoidectomy and midcarpal fusion can be performed using traditional fixation methods such as K-wires, staples, screws, or various dorsal (non)locking arthrodesis systems. The aim of this study was to test the Aptus four-corner locking plate and to compare the clinical findings with the data revealed by CT scans and semi-automated segmentation. METHODS: This is a retrospective review of eleven patients suffering from scapholunate advanced collapse (SLAC) or scaphoid non-union advanced collapse (SNAC) wrist who received a four-corner fusion between August 2011 and July 2014. The clinical evaluation consisted of measuring the range of motion (ROM), strength, and pain on a visual analogue scale (VAS). Additionally, the Disabilities of the Arm, Shoulder and Hand (QuickDASH) and the Mayo Wrist Score were assessed. A computerized tomography (CT) scan of the wrist was obtained six weeks postoperatively. After semi-automated segmentation of the CT scans, the models were post-processed and surveyed. RESULTS: At the six-month follow-up, mean ROM of the operated wrist was 60°, consisting of 30° extension and 30° flexion. While pain levels decreased significantly, 54% of grip strength and 89% of pinch strength were preserved compared with the contralateral healthy wrist. Union was detected in all CT scans of the wrist. While X-ray images obtained postoperatively revealed no pathology, two user-related technical complications were found through the 3D analysis, which correlated with the clinical outcome. CONCLUSION: Semi-automated segmentation and 3D analysis confirmed that the plate design lives up to the manufacturer's claims. Overall, this case series confirmed that the plate can compete with existing techniques in terms of clinical outcome, union, and complication rate.


Subject(s)
Bone Plates , Fracture Fixation, Internal/methods , Imaging, Three-Dimensional/methods , Scaphoid Bone/diagnostic imaging , Scaphoid Bone/surgery , Tomography, X-Ray Computed/methods , Adult , Aged , Female , Humans , Male , Middle Aged , Retrospective Studies , Scaphoid Bone/injuries
14.
Comput Biol Med ; 169: 107929, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38184862

ABSTRACT

In the field of computer- and robot-assisted minimally invasive surgery, enormous progress has been made in recent years based on the recognition of surgical instruments in endoscopic images and videos. In particular, the determination of the position and type of instruments is of great interest. Current work involves both spatial and temporal information, with the idea that predicting the movement of surgical tools over time may improve the quality of the final segmentations. The provision of publicly available datasets has recently encouraged the development of new methods, mainly based on deep learning. In this review, we identify and characterize datasets used for method development and evaluation and quantify their frequency of use in the literature. We further present an overview of the current state of research regarding the segmentation and tracking of minimally invasive surgical instruments in endoscopic images and videos. The paper focuses on methods that work purely visually, without markers of any kind attached to the instruments, considering both single-frame semantic and instance segmentation approaches, as well as those that incorporate temporal information. The publications analyzed were identified through the platforms Google Scholar, Web of Science, and PubMed. The search terms used were "instrument segmentation", "instrument tracking", "surgical tool segmentation", and "surgical tool tracking", resulting in a total of 741 articles published between 01/2015 and 07/2023, of which 123 were included using systematic selection criteria. A discussion of the reviewed literature is provided, highlighting existing shortcomings and emphasizing the available potential for future developments.


Subject(s)
Robotic Surgical Procedures , Surgery, Computer-Assisted , Endoscopy , Minimally Invasive Surgical Procedures , Robotic Surgical Procedures/methods , Surgery, Computer-Assisted/methods , Surgical Instruments , Image Processing, Computer-Assisted/methods
15.
Med Biol Eng Comput ; 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38848031

ABSTRACT

Even though artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency must be improved to transfer this success into clinical practice. The reliability of machine learning decisions must be explained and interpreted, especially when supporting medical diagnosis. For this task, the black-box nature of deep learning techniques must be illuminated to clarify their promising results. Hence, we aimed to investigate the impact of the ResNet-50 deep convolutional design on Barrett's esophagus and adenocarcinoma classification. To this end, and to propose a two-step learning technique, classifiers were trained on the output of each convolutional layer composing the ResNet-50 architecture, to identify the layers that contribute most to the result. We showed that local information and high-dimensional features are essential to improve classification for our task. Moreover, we observed a significant improvement when the most discriminative layers were given more weight in the training and classification of ResNet-50 for Barrett's esophagus and adenocarcinoma classification, demonstrating that both human knowledge and computational processing may influence the correct learning of such a problem.
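
A hedged sketch of the layer-wise probing idea described above follows; the torchvision node names, average pooling, and linear probes are our assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models
from torchvision.models.feature_extraction import create_feature_extractor

# Extract each ResNet-50 stage, pool it, and attach a small linear classifier
# per stage to compare the stages' discriminative power.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
extractor = create_feature_extractor(
    backbone,
    return_nodes={"layer1": "l1", "layer2": "l2", "layer3": "l3", "layer4": "l4"},
)

x = torch.randn(1, 3, 224, 224)  # placeholder endoscopic image
features = extractor(x)
for name, fmap in features.items():
    vec = F.adaptive_avg_pool2d(fmap, 1).flatten(1)  # (1, C) per stage
    probe = nn.Linear(vec.shape[1], 2)               # BE vs. adenocarcinoma
    print(name, tuple(fmap.shape), probe(vec).shape)
```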

16.
JPRAS Open ; 39: 330-343, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38390355

ABSTRACT

Background: The utilization of three-dimensional (3D) surface imaging for facial anthropometry is a significant asset for patients undergoing maxillofacial surgery. Notably, recent advancements in smartphone technology enable 3D surface imaging. In this study, anthropometric assessments of the face were performed using a smartphone and a sophisticated 3D surface imaging system. Methods: 30 healthy volunteers (15 females and 15 males) were included in the study. An iPhone 14 Pro (Apple Inc., USA) running the application 3D Scanner App (Laan Consulting Corp., USA) and the Vectra M5 (Canfield Scientific, USA) were employed to create 3D surface models. For each participant, 19 anthropometric measurements were conducted on the 3D surface models. Subsequently, the anthropometric measurements generated by the two approaches were compared. The statistical techniques employed included the paired t-test, paired Wilcoxon signed-rank test, Bland-Altman analysis, and calculation of the intraclass correlation coefficient (ICC). Results: All measurements showed excellent agreement between the smartphone-based and Vectra M5-based measurements (ICC between 0.85 and 0.97). Statistical analysis revealed no statistically significant differences in the central tendencies for 17 of the 19 linear measurements. Despite the excellent agreement found, Bland-Altman analysis revealed that the 95% limits of agreement between the two methods exceeded ±3 mm for the majority of measurements. Conclusion: Digital facial anthropometry using smartphones can serve as a valuable supplementary tool for surgeons, enhancing their communication with patients. However, these data suggest that smartphone-based digital facial anthropometry may not yet be suitable for diagnostic purposes that require high accuracy.
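
A minimal sketch of the Bland-Altman limits-of-agreement computation reported above, with invented paired measurements in millimeters:

```python
import numpy as np

# Bias and 95% limits of agreement between two measurement methods.
def limits_of_agreement(a: np.ndarray, b: np.ndarray):
    diff = a - b
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias, bias - spread, bias + spread

iphone = np.array([32.1, 45.0, 60.2, 38.7, 51.3])  # invented values
vectra = np.array([31.5, 45.8, 59.0, 39.9, 50.1])
bias, lo, hi = limits_of_agreement(iphone, vectra)
print(f"bias={bias:.2f} mm, 95% LoA=[{lo:.2f}, {hi:.2f}] mm")
```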

17.
Clin Hemorheol Microcirc ; 87(2): 221-235, 2024.
Article in English | MEDLINE | ID: mdl-38306026

ABSTRACT

BACKGROUND: Differentiation of high-flow from low-flow vascular malformations (VMs) is crucial for the therapeutic management of this orphan disease. OBJECTIVE: A convolutional neural network (CNN) was evaluated for the differentiation of peripheral VMs on T2-weighted short tau inversion recovery (STIR) MRI. METHODS: 527 MRIs (386 low-flow and 141 high-flow VMs) were randomly divided into training, validation and test sets for this single-center study. 1) The CNN's diagnostic performance was compared with that of two expert and four junior radiologists. 2) The influence of the CNN's predictions on the radiologists' performance and diagnostic certainty was evaluated. 3) The junior radiologists' performance after self-training was compared with that of the CNN. RESULTS: Compared with the expert radiologists, the CNN achieved similar accuracy (92% vs. 97%, p = 0.11), sensitivity (80% vs. 93%, p = 0.16) and specificity (97% vs. 100%, p = 0.50). In comparison with the junior radiologists, the CNN had higher specificity and accuracy (97% vs. 80%, p < 0.001; 92% vs. 77%, p < 0.001). CNN assistance had no significant influence on the junior radiologists' diagnostic performance or certainty. After self-training, the junior radiologists' specificity and accuracy improved and became comparable to those of the CNN. CONCLUSIONS: The diagnostic performance of the CNN in differentiating high-flow from low-flow VMs was comparable to that of expert radiologists. The CNN did not significantly improve the simulated daily practice of junior radiologists; self-training was more effective.


Subject(s)
Deep Learning , Magnetic Resonance Imaging , Vascular Malformations , Humans , Vascular Malformations/diagnostic imaging , Magnetic Resonance Imaging/methods , Female , Male , Adult , Middle Aged , Adolescent , Child , Aged , Child, Preschool
18.
Comput Biol Med ; 154: 106585, 2023 03.
Article in English | MEDLINE | ID: mdl-36731360

ABSTRACT

Semantic segmentation is an essential task in medical imaging research. Many powerful deep-learning-based approaches can be employed for this problem, but they depend on the availability of an expansive labeled dataset. In this work, we augment such supervised segmentation models to be suitable for learning from unlabeled data. Our semi-supervised approach, termed Error-Correcting Mean-Teacher, uses an exponential moving average model like the original Mean Teacher but introduces a new paradigm of error correction. The original segmentation network is augmented to handle this secondary correction task. Both tasks build upon the core feature extraction layers of the model. For the correction task, features detected in the input image are fused with features detected in the predicted segmentation and further processed with task-specific decoder layers. The combination of image and segmentation features allows the model to correct mistakes present in the given input pair. The correction task is trained jointly on the labeled data. On unlabeled data, the exponential moving average of the original network corrects the student's prediction. The combined output of the student's prediction and the teacher's correction forms the basis for the semi-supervised update. We evaluate our method on the 2017 and 2018 Robotic Scene Segmentation data, the ISIC 2017 and BraTS 2020 challenges, a proprietary Endoscopic Submucosal Dissection dataset, Cityscapes, and Pascal VOC 2012. Additionally, we analyze the impact of the individual components and examine the behavior when the amount of labeled data varies, with experiments performed on two distinct segmentation architectures. Our method shows improvements in mean Intersection over Union over the supervised baseline and competing methods. Code is available at https://github.com/CloneRob/ECMT.
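
The exponential-moving-average teacher at the core of the method can be sketched as below; the decay value and the stand-in network are our assumptions, and the error-correction head is omitted.

```python
import copy
import torch

# EMA update: teacher parameters track a moving average of the student's.
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               decay: float = 0.999) -> None:
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1.0 - decay)

student = torch.nn.Linear(16, 4)   # stand-in for the segmentation network
teacher = copy.deepcopy(student)   # EMA copy, never updated by backprop
for p in teacher.parameters():
    p.requires_grad_(False)

ema_update(teacher, student)       # called once per training step
```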


Subject(s)
Biomedical Research , Robotics , Humans , Semantics , Image Processing, Computer-Assisted
19.
Plast Reconstr Surg ; 152(4): 670e-674e, 2023 10 01.
Article in English | MEDLINE | ID: mdl-36952590

ABSTRACT

SUMMARY: Digital-nerve lesions result in a loss of tactile sensation reflected by an anesthetic area (AA) at the radial or ulnar aspect of the respective digit. Available tools to monitor the recovery of tactile sense have been criticized for their lack of validity. Precise quantification of AA dynamics by three-dimensional (3D) imaging could serve as an accurate surrogate to monitor recovery after digital-nerve repair. For validation, AAs were marked on digits of healthy volunteers to simulate the AA of an impaired cutaneous innervation. The 3D models were composed from raw images that had been acquired with a 3D camera to precisely quantify the relative AA for each digit (3D models, n = 80). Operator properties varied with regard to individual experience in 3D imaging and image processing. In addition, the concept was applied in a clinical case study. Results showed that images taken by experienced photographers were rated as better quality (P < 0.001) and needed less processing time (P = 0.020). Quantification of the relative AA was not altered significantly, regardless of the experience level of the photographer (P = 0.425) or image assembler (P = 0.749). The proposed concept allows precise and reliable surface quantification of digits and can be performed consistently without relevant distortion by lack of examiner experience. Routine 3D imaging of the AA has great potential to provide visual evidence of various returning states of sensation and to convert sensory nerve recovery into a metric variable with high responsiveness to temporal progress.
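
As an illustration of the relative-AA idea (marked surface area divided by the digit's total surface area), a hedged sketch on a placeholder mesh follows; the trimesh-based workflow and the face selection are our assumptions, not the authors' pipeline.

```python
import numpy as np
import trimesh

# Placeholder mesh standing in for a scanned digit surface.
mesh = trimesh.creation.icosphere()

# Boolean mask over mesh faces marking the anesthetic area (AA);
# here an arbitrary subset is selected purely for illustration.
marked = np.zeros(len(mesh.faces), dtype=bool)
marked[:500] = True

relative_aa = mesh.area_faces[marked].sum() / mesh.area
print(f"relative AA = {relative_aa:.1%}")
```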


Subject(s)
Sensation , Touch Perception , Humans , Touch , Image Processing, Computer-Assisted , Skin , Imaging, Three-Dimensional/methods
20.
J Clin Med ; 11(17)2022 Aug 25.
Article in English | MEDLINE | ID: mdl-36078928

ABSTRACT

BACKGROUND: Reliable, time- and cost-effective, and clinician-friendly diagnostic tools are cornerstones in facial palsy (FP) patient management. Different automated FP grading systems have been developed but revealed persistent downsides such as insufficient accuracy and cost-intensive hardware. We aimed to overcome these barriers and programmed an automated grading system for FP patients utilizing the House-Brackmann scale (HBS). METHODS: Image datasets of 86 patients seen at the Department of Plastic, Hand, and Reconstructive Surgery at the University Hospital Regensburg, Germany, between June 2017 and May 2021 were used to train the neural network and evaluate its accuracy. Nine facial poses per patient were analyzed by the algorithm. RESULTS: The algorithm showed an accuracy of 100%. Oversampling did not alter the outcomes, while the direct classification form displayed superior accuracy compared with the modular classification form (n = 86; 100% vs. 99%). The Early Fusion technique was linked to improved accuracy compared with the Late Fusion and sequential methods (n = 86; 100% vs. 96% vs. 97%). CONCLUSIONS: Our automated FP grading system combines high accuracy with cost- and time-effectiveness. Our algorithm may accelerate the grading process in FP patients and facilitate the FP surgeon's workflow.
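
A hedged sketch contrasting early and late fusion over nine facial poses follows; the encoder, feature sizes, and six-class output are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Shared per-pose encoder (placeholder: flatten + linear projection).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 32))

poses = [torch.randn(1, 3, 64, 64) for _ in range(9)]  # nine facial poses

# Early fusion: concatenate pose features, classify once.
early_head = nn.Linear(9 * 32, 6)  # e.g., six HBS grades
early_logits = early_head(torch.cat([encoder(p) for p in poses], dim=1))

# Late fusion: classify each pose separately, then average the logits.
late_head = nn.Linear(32, 6)
late_logits = torch.stack([late_head(encoder(p)) for p in poses]).mean(dim=0)

print(early_logits.shape, late_logits.shape)
```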
