1.
Comput Biol Med ; 175: 108455, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38663350

ABSTRACT

The available reference data for the mandible and mandibular growth consist primarily of two-dimensional linear or angular measurements. The aim of this study was to create the first open-source, three-dimensional statistical shape model of the mandible that spans the complete growth period. Computed tomography scans of 678 mandibles from children and young adults between 0 and 22 years old were included in the model. The mandibles were segmented using a semi-automatic or automatic (artificial intelligence-based) segmentation method. Point correspondence among the samples was achieved by rigid registration, followed by non-rigid registration of a symmetrical template onto each sample. The registration process was validated with adequate results. Principal component analysis was used to gain insight into the variation within the dataset and to investigate age-related changes and sexual dimorphism. The presented growth model is accessible globally and free of charge for scientists, physicians, and forensic investigators for any kind of purpose deemed suitable. The versatility of the model opens up new possibilities in the fields of oral and maxillofacial surgery, forensic sciences, and biological anthropology. In clinical settings, the model may aid diagnostic decision-making, treatment planning, and treatment evaluation.
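
Below is a minimal, hedged sketch of how a PCA-based statistical shape model can be built once point correspondence has been established; the vertex data and array sizes are random placeholders, not the published model's data.

```python
# Sketch: PCA statistical shape model from corresponded meshes (illustrative data only)
import numpy as np

n_samples, n_vertices = 678, 1000                     # illustrative sizes
shapes = np.random.rand(n_samples, n_vertices * 3)    # each row: flattened (x, y, z) vertices

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# Principal components (modes of shape variation) via SVD of the centered data
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained_variance = S**2 / (n_samples - 1)

# Synthesize a new shape from the first k modes with coefficients b
k = 10
b = np.zeros(k)                                        # zero coefficients reproduce the mean shape
new_shape = mean_shape + b @ Vt[:k]
```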


Subject(s)
Imaging, Three-Dimensional , Mandible , Humans , Mandible/diagnostic imaging , Mandible/growth & development , Female , Male , Adolescent , Child , Child, Preschool , Infant , Imaging, Three-Dimensional/methods , Young Adult , Tomography, X-Ray Computed , Infant, Newborn , Adult , Models, Biological , Models, Anatomic
2.
Sci Rep ; 14(1): 6463, 2024 03 18.
Article in English | MEDLINE | ID: mdl-38499700

ABSTRACT

Three-dimensional facial stereophotogrammetry provides a detailed representation of craniofacial soft tissue without the use of ionizing radiation. While manual annotation of landmarks serves as the current gold standard for cephalometric analysis, it is a time-consuming process and is prone to human error. The aim of this study was to develop and evaluate an automated cephalometric annotation method using a deep learning-based approach. Ten landmarks were manually annotated on 2897 3D facial photographs. The automated landmarking workflow involved two successive DiffusionNet models. The dataset was randomly divided into a training and test dataset. The precision of the workflow was evaluated by calculating the Euclidean distances between the automated and manual landmarks and compared to the intra-observer and inter-observer variability of manual annotation and a semi-automated landmarking method. The workflow was successful in 98.6% of all test cases. The deep learning-based landmarking method achieved precise and consistent landmark annotation. The mean precision of 1.69 ± 1.15 mm was comparable to the inter-observer variability (1.31 ± 0.91 mm) of manual annotation. Automated landmark annotation on 3D photographs was achieved with the DiffusionNet-based approach. The proposed method allows quantitative analysis of large datasets and may be used in diagnosis, follow-up, and virtual surgical planning.
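
As an illustration of the precision metric described above, here is a minimal sketch that computes per-landmark Euclidean errors and reports them as mean ± SD; the landmark arrays and test-set size are random placeholders, not the study's annotations.

```python
# Sketch: landmark precision as Euclidean distance between automated and manual 3D landmarks
import numpy as np

n_photos, n_landmarks = 100, 10                             # illustrative test-set size, 10 landmarks
auto = np.random.rand(n_photos, n_landmarks, 3) * 100       # automated landmarks (mm)
manual = auto + np.random.normal(scale=1.2, size=auto.shape)  # manual reference (mm)

dist = np.linalg.norm(auto - manual, axis=-1)               # per-landmark Euclidean error
print(f"precision: {dist.mean():.2f} ± {dist.std():.2f} mm")
```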


Subject(s)
Anatomic Landmarks , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Reproducibility of Results , Face/diagnostic imaging , Cephalometry/methods
3.
J Imaging ; 9(10)2023 Oct 16.
Article in English | MEDLINE | ID: mdl-37888333

ABSTRACT

Computer-assisted technologies have made significant progress in fetoscopic laser surgery, including placental vessel segmentation. However, the intra- and inter-procedure variabilities in the state-of-the-art segmentation methods remain a significant hurdle. To address this, we investigated the use of conditional generative adversarial networks (cGANs) for fetoscopic image segmentation and compared their performance with the benchmark U-Net technique for placental vessel segmentation. Two deep-learning models, U-Net and pix2pix (a popular cGAN model), were trained and evaluated using a publicly available dataset and an internal validation set. The overall results showed that the pix2pix model outperformed the U-Net model, with a Dice score of 0.80 [0.70; 0.86] versus 0.75 [0.60; 0.84] (p-value < 0.01) and an Intersection over Union (IoU) score of 0.70 [0.61; 0.77] compared to 0.66 [0.53; 0.75] (p-value < 0.01), respectively. The internal validation dataset further validated the superiority of the pix2pix model, achieving Dice and IoU scores of 0.68 [0.53; 0.79] and 0.59 [0.49; 0.69] (p-value < 0.01), respectively, while the U-Net model obtained scores of 0.53 [0.49; 0.64] and 0.49 [0.17; 0.56], respectively. This study successfully compared U-Net and pix2pix models for placental vessel segmentation in fetoscopic images, demonstrating improved results with the cGAN-based approach. However, the challenge of achieving generalizability still needs to be addressed.
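
For reference, a minimal sketch of the Dice and Intersection over Union scores used above to compare a predicted vessel mask with its manual reference; the binary masks below are random placeholders.

```python
# Sketch: Dice and IoU overlap metrics for binary segmentation masks
import numpy as np

def dice(pred, ref):
    inter = np.logical_and(pred, ref).sum()
    return 2 * inter / (pred.sum() + ref.sum())

def iou(pred, ref):
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union

pred = np.random.rand(448, 448) > 0.5   # stand-in predicted vessel mask
ref = np.random.rand(448, 448) > 0.5    # stand-in manual annotation
print(f"Dice={dice(pred, ref):.3f}  IoU={iou(pred, ref):.3f}")
```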

5.
J Digit Imaging ; 36(4): 1930-1939, 2023 08.
Article in English | MEDLINE | ID: mdl-37162654

ABSTRACT

Although increased usage and development of 3D technologies has been observed in healthcare over the last decades, full integration of these technologies remains challenging. The goal of this project is to qualitatively explore the challenges, pearls, and pitfalls of AR/VR/3D printing usage in the medical field of a university medical center. Two rounds of face-to-face interviews were conducted using a semi-structured protocol. First, an explorative round was held, interviewing medical specialists (8), PhD students (7), 3D technology specialists (5), and university teachers (3). In the second round, twenty employees in senior executive positions of relevant departments were interviewed on seven statements that resulted from the first interviewing round. Data analysis was performed using direct content analysis. The first interviewing round resulted in challenges and opportunities in 3D technology usage that were grouped into 5 themes: aims of using AR/VR/3D printing (1), data acquisition (2), data management plans (3), software packages and segmentation tools (4), and output data and reaching the end-user (5). The second interviewing round resulted in an overview of ideas and insights on centralization of knowledge, improving implementation of 3D technology in daily healthcare, reimbursement of 3D technologies, recommendations for further studies, and the requirement of using certified software. An overview of challenges and opportunities of 3D technologies in healthcare was provided. Well-designed studies on clinical effectiveness, implementation, and cost-effectiveness are warranted for further implementation into the clinical setting.


Subject(s)
Augmented Reality , Virtual Reality , Humans , Expert Testimony , Software , Printing, Three-Dimensional
6.
Am J Otolaryngol ; 44(3): 103810, 2023.
Article in English | MEDLINE | ID: mdl-36871420

ABSTRACT

PURPOSE: The Sunnybrook Facial Grading System (SFGS) is a well-established system for assessing the severity and progression of a unilateral peripheral facial palsy, owing to its clinical relevance, sensitivity, and robust measuring method. However, training is required to achieve high inter-rater reliability. This study investigated the automated grading of facial palsy patients based on the SFGS using a convolutional neural network. METHODS: A total of 116 patients with a unilateral peripheral facial palsy and 9 healthy subjects were recorded performing the Sunnybrook poses. A separate model was trained for each of the 13 elements of the SFGS and then used to calculate the Sunnybrook subscores and composite score. The performance of the automated grading system was compared to that of three clinicians experienced in the grading of facial palsy. RESULTS: The inter-rater reliability of the convolutional neural network was within the range of human observers, with an average intra-class correlation coefficient of 0.87 for the composite Sunnybrook score, 0.45 for the resting symmetry subscore, 0.89 for the symmetry of voluntary movement subscore, and 0.77 for the synkinesis subscore. CONCLUSIONS: This study showed the potential for the automated SFGS to be implemented in a clinical setting. The automated grading system adhered to the original SFGS, which makes the implementation and interpretation of the automated grading more straightforward. Since the model uses 2D images captured from video recordings, the automated system can be implemented in numerous settings, such as online consults in an e-Health environment.
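
Inter-rater agreement of this kind is often expressed as a two-way random-effects, single-rater intraclass correlation; the sketch below computes ICC(2,1) (Shrout & Fleiss) from a subjects-by-raters score matrix. The abstract does not specify the ICC form, so this is an assumption, and the score matrix is a random placeholder.

```python
# Sketch: ICC(2,1), two-way random effects, absolute agreement, single rater
import numpy as np

def icc_2_1(scores):
    """scores: (n_subjects, k_raters) matrix of, e.g., composite grading scores."""
    n, k = scores.shape
    grand = scores.mean()
    ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between subjects
    ms_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between raters
    ss_err = ((scores - grand) ** 2).sum() - (n - 1) * ms_rows - (k - 1) * ms_cols
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

ratings = np.random.randint(0, 101, size=(125, 4))   # placeholder: 125 recordings, 4 raters
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```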


Subject(s)
Bell Palsy , Deep Learning , Facial Paralysis , Humans , Facial Paralysis/diagnosis , Reproducibility of Results , Face
7.
Annu Rev Biomed Data Sci ; 5: 19-42, 2022 08 10.
Article in English | MEDLINE | ID: mdl-35440145

ABSTRACT

Deviation from a normal facial shape and symmetry can arise from numerous sources, including physical injury and congenital birth defects. Such abnormalities can have important aesthetic and functional consequences. Furthermore, in clinical genetics distinctive facial appearances are often associated with clinical or genetic diagnoses; the recognition of a characteristic facial appearance can substantially narrow the search space of potential diagnoses for the clinician. Unusual patterns of facial movement and expression can indicate disturbances to normal mechanical functioning or emotional affect. Computational analyses of static and moving 2D and 3D images can serve clinicians and researchers by detecting and describing facial structural, mechanical, and affective abnormalities objectively. In this review we survey traditional and emerging methods of facial analysis, including statistical shape modeling, syndrome classification, modeling clinical face phenotype spaces, and analysis of facial motion and affect.


Subject(s)
Imaging, Three-Dimensional , Esthetics , Facies , Humans , Imaging, Three-Dimensional/methods , Motion , Phenotype
8.
Brain Sci ; 12(3)2022 Mar 15.
Article in English | MEDLINE | ID: mdl-35326350

ABSTRACT

BACKGROUND: Patients with a subarachnoid hemorrhage (SAH) might need flow diverter (FD) placement for complex acutely ruptured intracranial aneurysms (IAs). We conducted a meta-analysis and developed a prediction model to estimate the favorable clinical outcome after FD treatment in acutely ruptured IAs. METHODS: A systematic literature search was performed from 2010 to January 2021 in the PubMed and Embase databases. Studies with more than five patients treated with FDs within fifteen days were included. In total, 1157 studies were identified. The primary outcome measure was the favorable clinical outcome (mRS 0-2). Secondary outcome measures were complete occlusion rates, aneurysm rebleeding, permanent neurologic deficit caused by procedure-related complications, and all-cause mortality. A prediction model was constructed using individual patient-level data. RESULTS: 26 retrospective studies with 357 patients and 368 aneurysms were included. The pooled rates of the favorable clinical outcome, mortality, and complete aneurysm occlusion were 73.7% (95% CI 64.7-81.0), 17.1% (95% CI 13.3-21.8), and 85.6% (95% CI 80.4-89.6), respectively. Rebleeding occurred in 3% of aneurysms (11/368). The c-statistic of the final model was 0.83 (95% CI 0.76-0.89). All the studies provided a very low quality of evidence. CONCLUSIONS: FD treatment can be considered for complex ruptured IAs. Despite high complication rates, the pooled clinical outcomes seem favorable. The prediction model needs to be validated by larger prospective studies before clinical application.
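
Pooled rates with confidence intervals of the kind reported above are commonly obtained by random-effects pooling of logit-transformed proportions; the sketch below shows a DerSimonian-Laird version. The abstract does not state the exact pooling method, so this is an assumption, and the per-study counts are illustrative, not the review's data.

```python
# Sketch: random-effects (DerSimonian-Laird) pooling of proportions on the logit scale
import numpy as np
from scipy.special import logit, expit

events = np.array([10, 7, 15, 9, 12])        # favorable outcomes per study (placeholder)
totals = np.array([14, 10, 20, 12, 16])      # patients per study (placeholder)

y = logit(events / totals)                   # logit-transformed proportions
v = 1 / events + 1 / (totals - events)       # approximate variances on the logit scale

w = 1 / v                                    # fixed-effect (inverse-variance) weights
y_fixed = (w * y).sum() / w.sum()
Q = (w * (y - y_fixed) ** 2).sum()           # heterogeneity statistic
tau2 = max(0.0, (Q - (len(y) - 1)) / (w.sum() - (w**2).sum() / w.sum()))

w_star = 1 / (v + tau2)                      # random-effects weights
pooled = (w_star * y).sum() / w_star.sum()
se = np.sqrt(1 / w_star.sum())
lo_ci, hi_ci = expit(pooled - 1.96 * se), expit(pooled + 1.96 * se)
print(f"pooled rate: {expit(pooled):.1%} (95% CI {lo_ci:.1%}-{hi_ci:.1%})")
```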

9.
Anat Sci Educ ; 15(5): 839-849, 2022 Aug.
Article in English | MEDLINE | ID: mdl-34218515

ABSTRACT

The use of augmented reality (AR) in teaching and studying neuroanatomy has been well researched. Previous research showed that AR-based learning of neuroanatomy both alleviated cognitive load and was attractive to young learners. However, how the attractiveness of AR affects student motivation has not been investigated. Therefore, the motivational effects of AR were investigated in this study using quantitative and qualitative methods. Motivation elicited by GreyMapp-AR, an AR application, was investigated in medical and biomedical sciences students (n = 222; mean age: 19.7 ± 1.4 years) using the Instructional Materials Motivation Survey (IMMS). In addition to overall motivation, the IMMS components (i.e., attention, relevance, confidence, and satisfaction) were evaluated. Additionally, 19 students underwent audio-recorded individual interviews, which were transcribed for qualitative analysis. Male students rated the relevance of AR significantly higher than female students (P < 0.024). Appreciation of the GreyMapp-AR program was significantly higher in students studying biomedical sciences compared to students studying medicine (P < 0.011). Other components and scores did not show significant differences between student groups. Students expressed that AR was beneficial in increasing their motivation to study subcortical structures, and that AR could be helpful and motivating in preparing for an anatomy examination. This study suggests that students are motivated to study neuroanatomy by the use of AR, although the components that make up their individual motivation can differ significantly between groups of students.


Subject(s)
Anatomy , Augmented Reality , Education, Medical, Undergraduate , Students, Medical , Adolescent , Adult , Anatomy/education , Education, Medical, Undergraduate/methods , Educational Measurement , Female , Humans , Male , Motivation , Neuroanatomy/education , Students/psychology , Students, Medical/psychology , Young Adult
10.
Sci Rep ; 11(1): 15292, 2021 07 27.
Article in English | MEDLINE | ID: mdl-34315955

ABSTRACT

The use of Augmented Reality (AR) in anatomical education has been promoted by numerous authors. Next to financial and ethical advantages, AR has been described to decrease cognitive load while increasing student motivation and engagement. Despite these advantages, the effect of AR on learning outcome varies between studies, and an overview with an aggregated outcome for learning anatomy is lacking. Therefore, a meta-analysis on the effect of AR vs. traditional anatomical teaching methods on learning outcome was performed. Systematic database searches were conducted by two independent investigators using predefined inclusion and exclusion criteria. This yielded five papers for meta-analysis, totaling 508 participants: 240 in the AR groups and 268 in the control groups (306 females/202 males). Meta-analysis showed no significant difference in anatomic test scores between the AR group and the control group (-0.765 percentage points (%-points); P = 0.732). Subanalysis of the use of AR vs. the use of traditional 2D teaching methods showed a significant disadvantage when using AR (-5.685 %-points; P = 0.024). Meta-regression analysis showed no significant correlation between the mean difference in test results and spatial abilities (as assessed by mental rotations test scores). Student motivation and/or engagement could not be included since studies used different assessment tools. This meta-analysis showed that insufficient evidence is present to conclude that AR significantly impacts learning outcome or that outcomes are significantly impacted by students' spatial abilities. However, only a few papers were suitable for meta-analysis, indicating that there is a need for more well-designed, randomized controlled trials on AR in anatomy education research.


Subject(s)
Anatomy/education , Augmented Reality , Female , Humans , Learning , Male
11.
Sci Rep ; 11(1): 12843, 2021 06 18.
Article in English | MEDLINE | ID: mdl-34145335

ABSTRACT

Neuroanatomy is an important subject to learn, because a good understanding of neuroanatomy supports the establishment of a correct diagnosis in neurological patients. However, rapid changes in curricula have reduced the time assigned to studying (neuro)anatomy. Therefore, it is important to find alternative teaching methods to study the complex three-dimensional structure of the brain. The aim of this study was to explore the effectiveness of Virtual Reality (VR) in comparison with Radiological Data (RaD) as learning methods to build knowledge and increase motivation for learning neuroanatomy. Forty-seven students (mean age 19.47 ± 0.54 years; 43 females; 4 males) were included; 23 students comprised the VR group. Both methods improved knowledge significantly; the improvement did not differ between groups. The RaD group had a significantly higher score on expectancy than the VR group. Task value scores regarding finding a task interesting, useful, and fun were significantly different in favor of the VR group. Consequently, significantly higher motivation scores were found in the VR group. Motivation and expectancy, however, did not moderate learning results, whereas task value impacted the results in favor of the VR group. This study concludes that VR and RaD are effective and engaging methods to learn neuroanatomy, with VR being more motivating than RaD. Future research should investigate motivation and task value when using VR over a longer period of time.

12.
J Craniomaxillofac Surg ; 49(9): 775-782, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33941437

ABSTRACT

This study aimed to develop a deep-learning (DL)-based algorithm to predict the virtual soft tissue profile after mandibular advancement surgery, and to compare its accuracy with the mass tensor model (MTM). Subjects who underwent mandibular advancement surgery were enrolled and divided into a training group and a test group. The DL model was trained using 3D photographs and CBCT data based on surgically achieved mandibular displacements (training group). Soft tissue simulations generated by DL and MTM based on the actual surgical jaw movements (test group) were compared with soft-tissue profiles on postoperative 3D photographs using distance mapping, in terms of mean absolute error in the lower face, lower lip, and chin regions. 133 subjects were included: 119 in the training group and 14 in the test group. The mean absolute error for DL-based simulations of the lower face region was 1.0 ± 0.6 mm, which was significantly lower (p = 0.02) than for MTM-based simulations (1.5 ± 0.5 mm). CONCLUSION: The DL-based algorithm can predict 3D soft tissue profiles following mandibular advancement surgery with a clinically acceptable mean absolute error. Therefore, it seems to be a relevant option for soft tissue prediction in orthognathic surgery.
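
A minimal sketch of the distance-mapping error used above: for each vertex of the simulated surface in a region of interest, take the distance to the closest point of the postoperative 3D photograph and average the absolute values. The vertex arrays below are random placeholders, not the study's meshes.

```python
# Sketch: mean absolute surface error via nearest-neighbour distance mapping
import numpy as np
from scipy.spatial import cKDTree

simulated = np.random.rand(20000, 3) * 100      # simulated lower-face vertices (mm)
postop = np.random.rand(50000, 3) * 100         # postoperative 3D photograph vertices (mm)

dists, _ = cKDTree(postop).query(simulated)     # closest-point distance per simulated vertex
print(f"mean absolute error: {dists.mean():.2f} mm")
```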


Subject(s)
Deep Learning , Mandibular Advancement , Orthognathic Surgical Procedures , Cephalometry , Chin/anatomy & histology , Chin/diagnostic imaging , Chin/surgery , Humans , Imaging, Three-Dimensional , Lip/anatomy & histology , Mandible/diagnostic imaging , Mandible/surgery
13.
Neurosurgery ; 88(5): E427-E434, 2021 04 15.
Article in English | MEDLINE | ID: mdl-33548918

ABSTRACT

BACKGROUND: Predicting outcome after aneurysmal subarachnoid hemorrhage (aSAH) is known to be challenging and complex. Machine learning approaches, of which feedforward artificial neural networks (ffANNs) are the most widely used, could contribute to patient-specific outcome prediction. OBJECTIVE: To investigate the capacity of an ffANN to predict patient-specific clinical outcome and the occurrence of delayed cerebral ischemia (DCI), and to compare those results with the predictions of 2 internationally used scoring systems. METHODS: A prospective database was used to predict (1) death during hospitalization (ie, mortality) (n = 451), (2) unfavorable modified Rankin Scale (mRS) score at 6 mo (n = 413), and (3) the occurrence of DCI (n = 362). Additionally, the predictive capacities of the ffANN were compared to those of the Subarachnoid Haemorrhage International Trialists (SAHIT) and VASOGRADE scores for predicting clinical outcome and the occurrence of DCI. RESULTS: The area under the curve (AUC) of the ffANN was 88%, 85%, and 72% for predicting mortality, an unfavorable mRS score, and the occurrence of DCI, respectively. Sensitivity/specificity rates of the ffANN for mortality, unfavorable mRS, and the occurrence of DCI were 82%/80%, 94%/80%, and 74%/68%, respectively. The ffANN and the SAHIT calculator showed similar AUCs for predicting personalized outcome. The presented ffANN and VASOGRADE performed equally with regard to personalized prediction of the occurrence of DCI. CONCLUSION: The presented ffANN showed performance equal to the VASOGRADE and SAHIT scoring systems while using fewer individual cases. The web interface launched simultaneously with the publication of this manuscript allows the ffANN-based prediction tool to be used on individual data (https://nutshell-tool.com/).
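
As an illustration of the kind of model and metrics described above, here is a hedged sketch of a small feedforward network evaluated with AUC and sensitivity/specificity; the features and labels are synthetic stand-ins, not the study's clinical variables.

```python
# Sketch: feedforward ANN (MLP) binary outcome prediction with AUC, sensitivity, specificity
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix

X, y = make_classification(n_samples=451, n_features=12, random_state=0)  # stand-in clinical features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

auc = roc_auc_score(y_te, proba)
tn, fp, fn, tp = confusion_matrix(y_te, (proba >= 0.5).astype(int)).ravel()
print(f"AUC={auc:.2f}  sensitivity={tp/(tp+fn):.2f}  specificity={tn/(tn+fp):.2f}")
```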


Subject(s)
Artificial Intelligence , Patient-Specific Modeling , Subarachnoid Hemorrhage , Area Under Curve , Brain Ischemia , Humans , Subarachnoid Hemorrhage/complications , Subarachnoid Hemorrhage/mortality , Subarachnoid Hemorrhage/therapy , Treatment Outcome
14.
Sci Rep ; 10(1): 15346, 2020 09 18.
Article in English | MEDLINE | ID: mdl-32948813

ABSTRACT

Craniosynostosis is a condition in which cranial sutures fuse prematurely, causing problems in normal brain and skull growth in infants. To limit the extent of cosmetic and functional problems, swift diagnosis is needed. The goal of this study is to investigate whether a deep learning algorithm is capable of correctly classifying the head shape of infants as either healthy controls or as one of the following three craniosynostosis subtypes: scaphocephaly, trigonocephaly, or anterior plagiocephaly. In order to acquire cranial shape data, 3D stereophotographs were made during routine pre-operative appointments of scaphocephaly (n = 76), trigonocephaly (n = 40), and anterior plagiocephaly (n = 27) patients. 3D stereophotographs of healthy infants (n = 53) were made between the ages of 3 and 6 months. The cranial shape data were sampled, and a deep learning network was used to classify the cranial shape data as either healthy control, scaphocephaly patient, trigonocephaly patient, or anterior plagiocephaly patient. For the training and testing of the deep learning network, stratified tenfold cross-validation was used. During testing, 195 out of 196 3D stereophotographs (99.5%) were correctly classified. This study shows that trained deep learning algorithms, based on 3D stereophotographs, can discriminate between craniosynostosis subtypes and healthy controls with high accuracy.
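
A minimal sketch of the stratified tenfold cross-validation scheme mentioned above for a four-class head-shape classifier; the synthetic features and the simple classifier are placeholders for the study's sampled cranial shape data and network.

```python
# Sketch: stratified tenfold cross-validation for a 4-class classification task
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=196, n_features=50, n_informative=10,
                           n_classes=4, random_state=0)  # 196 subjects, 4 classes (placeholder)

scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"mean cross-validated accuracy: {np.mean(scores):.3f}")
```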


Subject(s)
Craniosynostoses/diagnostic imaging , Deep Learning , Imaging, Three-Dimensional/methods , Case-Control Studies , Facial Bones/diagnostic imaging , Head/abnormalities , Head/anatomy & histology , Humans , Infant , Photogrammetry
15.
Anat Sci Educ ; 13(3): 353-365, 2020 May.
Article in English | MEDLINE | ID: mdl-31269322

ABSTRACT

Neuroanatomy education is a challenging field which could benefit from modern innovations, such as augmented reality (AR) applications. This study investigates the differences in test scores, cognitive load, and motivation after neuroanatomy learning using AR applications or using cross-sections of the brain. Prior to two practical assignments, a pretest (extended matching questions, double-choice questions, and a test on cross-sectional anatomy) and a mental rotation test (MRT) were completed. Sex and MRT scores were used to stratify students across the two groups. The two practical assignments were designed to study (1) general brain anatomy and (2) subcortical structures. Subsequently, participants completed a posttest similar to the pretest and a motivational questionnaire. Finally, a focus group interview was conducted to appraise participants' perceptions. Medical and biomedical students (n = 31; 19 males [61.3%] and 12 females [38.7%]; mean age 19.2 ± 1.7 years) participated in this experiment. Students who worked with cross-sections (n = 16) showed significantly more improvement in test scores than students who worked with GreyMapp-AR (n = 15) (P = 0.035). Further analysis showed that this difference was primarily caused by significant improvement on the cross-sectional questions. Students in the cross-section group, moreover, experienced a significantly higher germane (P = 0.009) and extraneous cognitive load (P = 0.016) than students in the GreyMapp-AR group. No significant differences were found in motivational scores. To conclude, this study suggests that AR applications can play a role in future anatomy education as an add-on educational tool, especially in learning three-dimensional relations of anatomical structures.


Subject(s)
Anatomy, Cross-Sectional/education , Augmented Reality , Education/methods , Neuroanatomy/education , Adolescent , Brain/anatomy & histology , Brain/blood supply , Brain/diagnostic imaging , Cognition , Curriculum , Dissection , Educational Measurement/statistics & numerical data , Female , Humans , Imaging, Three-Dimensional , Learning , Magnetic Resonance Angiography , Male , Program Evaluation , Students/psychology , Students/statistics & numerical data , Young Adult
16.
Sci Rep ; 9(1): 9007, 2019 06 21.
Article in English | MEDLINE | ID: mdl-31227772

ABSTRACT

The proximity of the inferior alveolar nerve (IAN) to the roots of lower third molars (M3) is a risk factor for the occurrence of nerve damage and subsequent sensory disturbances of the lower lip and chin following the removal of third molars. To assess this risk, the identification of the M3 and IAN on dental panoramic radiographs (OPGs) is mandatory. In this study, we developed and validated an automated, deep-learning-based approach to detect and segment the M3 and IAN on OPGs. As a reference, M3s and the IAN were segmented manually on 81 OPGs. A deep-learning approach based on U-net was applied to the reference data to train the convolutional neural network (CNN) in the detection and segmentation of the M3 and IAN. Subsequently, the trained U-net was applied to the original OPGs to detect and segment both structures. Dice coefficients were calculated to quantify the degree of similarity between the manually and automatically segmented M3s and IAN. The mean Dice coefficients for M3s and the IAN were 0.947 ± 0.033 and 0.847 ± 0.099, respectively. Deep learning is an encouraging approach for segmenting anatomical structures and, later on, for supporting clinical decision making, though further enhancement of the algorithm is advised to improve the accuracy.
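
For illustration, a compact U-Net-style encoder-decoder in PyTorch showing the general kind of architecture referred to above; the depth, channel counts, and input size are illustrative choices, not the study's configuration.

```python
# Sketch: a minimal 2-level U-Net-style segmentation network (illustrative sizes)
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1 = double_conv(1, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                      # full-resolution features
        e2 = self.enc2(self.pool(e1))          # 1/2 resolution
        b = self.bottleneck(self.pool(e2))     # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                   # per-pixel class logits

logits = MiniUNet()(torch.randn(1, 1, 256, 256))   # -> (1, 2, 256, 256)
```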


Subject(s)
Deep Learning , Mandibular Nerve/diagnostic imaging , Molar, Third/diagnostic imaging , Radiography, Panoramic/methods , Tooth Extraction , Humans , Reproducibility of Results , Risk Factors , Sensitivity and Specificity , Tooth, Impacted/diagnosis , Tooth, Impacted/diagnostic imaging , Trigeminal Nerve Injuries/diagnosis , Trigeminal Nerve Injuries/diagnostic imaging
17.
J Pain ; 20(9): 1015-1026, 2019 09.
Article in English | MEDLINE | ID: mdl-30771593

ABSTRACT

Implantable motor cortex stimulation (iMCS) has been performed for >25 years to treat various intractable pain syndromes. Its effectiveness is highly variable and, although various studies revealed predictive variables, none of these were found repeatedly. This study uses neural network analysis (NNA) to identify predictive factors of iMCS treatment for intractable pain. A systematic review provided individual-level data on patients who underwent iMCS to treat refractory pain between 1991 and 2017. Responders were defined as patients with pain relief of >40% as measured by a numerical rating scale (NRS) score. NNA was carried out to predict the outcome of iMCS and to identify predictive factors that impacted this outcome. The outcome prediction value of the NNA was expressed as the mean accuracy, sensitivity, and specificity. The NNA furthermore provided the mean weight of the predictive variables, which reflects the impact of each predictive variable on the prediction. The mean weight was converted into the mean relative influence (M), a value that varies between 0 and 100%. A total of 358 patients were included (202 males [56.4%]; mean age, 54.2 ± 13.3 years), 201 of whom were responders to iMCS. NNA had a mean accuracy of 66.3% and a sensitivity and specificity of 69.8% and 69.4%, respectively. NNA further identified 6 predictive variables that had a relatively high M: 1) the sex of the patient (M = 19.7%); 2) the origin of the lesion (M = 15.1%); 3) the preoperative numerical rating scale score (M = 9.2%); 4) preoperative use of repetitive transcranial magnetic stimulation (M = 7.3%); 5) preoperative intake of opioids (M = 7.1%); and 6) the follow-up period (M = 13.1%). The results of the present study show that these 6 predictive variables influence the outcome of iMCS and that, based on these variables, a fair prediction model can be built to predict the outcome after iMCS surgery. PERSPECTIVE: The presented NNA analyzed the functioning of computational models and modeled nonlinear statistical data. Based on this NNA, 6 predictive variables were identified that are suggested to be of importance in improving future iMCS to treat chronic pain.
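
The abstract does not state how the mean weights are derived, so the sketch below uses permutation importance normalized to percentages as one plausible way to obtain a per-variable relative influence; the data and network are synthetic stand-ins, not the review's patient-level dataset.

```python
# Sketch: per-variable relative influence via permutation importance, scaled to sum to 100%
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=358, n_features=6, random_state=0)  # 6 illustrative predictors
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)

imp = permutation_importance(net, X, y, n_repeats=20, random_state=0).importances_mean
imp = np.clip(imp, 0, None)                    # ignore negative (noise-level) importances
relative_influence = 100 * imp / imp.sum()     # "M" per predictor, in percent
print(relative_influence.round(1))
```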


Subject(s)
Chronic Pain/therapy , Motor Cortex/physiopathology , Pain Management , Pain, Intractable/therapy , Chronic Pain/physiopathology , Electric Stimulation Therapy , Humans , Pain Measurement , Pain, Intractable/physiopathology , Prognosis
19.
Eur Radiol ; 29(5): 2724-2726, 2019 May.
Article in English | MEDLINE | ID: mdl-30413952

ABSTRACT

KEY POINTS: Use of algorithms to generate synthetic cases might result in a misrepresentation of the entire population. Training an artificial neural network with a mix of real and synthetic data might lead to unrealistic prediction precision.


Subject(s)
Algorithms , Aneurysm, Ruptured/diagnosis , Intracranial Aneurysm/diagnosis , Neural Networks, Computer , Female , Humans , Male
20.
Interact J Med Res ; 7(2): e16, 2018 Oct 12.
Article in English | MEDLINE | ID: mdl-30314961

ABSTRACT

BACKGROUND: The publication rate of neurosurgical guidelines has increased tremendously over the past decade; however, only a small proportion of clinical decisions appear to be based on high-quality evidence. OBJECTIVE: The aim was to evaluate the evidence available within neurosurgery and its value within clinical practice according to neurosurgeons. METHODS: A Web-based survey was sent to 2552 neurosurgeons, who were members of the European Association of Neurosurgical Societies. RESULTS: The response rate to the survey was 6.78% (173/2552). According to 48.6% (84/173) of the respondents, neurosurgery clinical practices are based on less evidence than other medical specialties and not enough high-quality evidence is available; however, 84.4% (146/173) of the respondents believed neurosurgery is amenable to evidence. Of the respondents, 59.0% (102/173) considered the neurosurgical guidelines in their hospital to be based on high-quality evidence, most of whom considered their own treatments to be based on high-quality (level I and/or level II) data (84.3%, 86/102; significantly more than for the neurosurgeons who did not consider the hospital guidelines to be based on high-quality evidence: 55%, 12/22; P<.001). Also, more neurosurgeons with formal training believed they could understand, criticize, and interpret statistical outcomes presented in journals than those without formal training (93%, 56/60 and 68%, 57/84 respectively; P<.001). CONCLUSIONS: According to the respondents, neurosurgery is based on high-quality evidence less often than other medical specialties. The results of the survey indicate that formal training in evidence-based medicine would enable neurosurgeons to better understand, criticize, and interpret statistical outcomes presented in journals.
