Results 1 - 7 of 7
1.
Acta Orthop ; 95: 340-347, 2024 06 18.
Article in English | MEDLINE | ID: mdl-38888052

ABSTRACT

BACKGROUND AND PURPOSE: Artificial intelligence (AI) has the potential to aid in the accurate diagnosis of hip fractures and reduce the workload of clinicians. We primarily aimed to develop and validate a convolutional neural network (CNN) for the automated classification of hip fractures based on the 2018 AO-OTA classification system. The secondary aim was to incorporate the model's assessment of additional radiographic findings that often accompany such injuries. METHODS: 6,361 plain radiographs of the hip taken between 2002 and 2016 at Danderyd University Hospital were used to train the CNN. A separate set of 343 radiographs representing 324 unique patients was used to test the performance of the network. Performance was evaluated using area under the curve (AUC), sensitivity, specificity, and Youden's index. RESULTS: The CNN demonstrated high performance in identifying and classifying hip fractures, with AUCs ranging from 0.76 to 0.99 for different fracture categories. The AUC for hip fractures ranged from 0.86 to 0.99, for distal femur fractures from 0.76 to 0.99, and for pelvic fractures from 0.91 to 0.94. For 29 of 39 fracture categories, the AUC was ≥ 0.95. CONCLUSION: We found that AI has the potential for accurate and automated classification of hip fractures based on the AO-OTA classification system. Further training and modification of the CNN may enable its use in clinical settings.
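As an illustration of the evaluation metrics named in this abstract, the sketch below computes AUC, sensitivity, specificity, and Youden's index for one fracture category from per-image scores and labels. The data and the 0.5 threshold are hypothetical toy values, not the study's code or results.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical ground-truth labels (1 = fracture of this category) and CNN scores.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.8, 0.9, 0.6, 0.2, 0.7, 0.3])

auc = roc_auc_score(y_true, y_score)                 # area under the ROC curve

y_pred = (y_score >= 0.5).astype(int)                # threshold chosen only for illustration
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
youden_j = sensitivity + specificity - 1             # Youden's index J

print(f"AUC={auc:.2f} sensitivity={sensitivity:.2f} "
      f"specificity={specificity:.2f} J={youden_j:.2f}")
```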


Subject(s)
Artificial Intelligence , Hip Fractures , Neural Networks, Computer , Humans , Hip Fractures/classification , Hip Fractures/diagnostic imaging , Male , Female , Aged , Radiography , Sensitivity and Specificity , Aged, 80 and over , Middle Aged
2.
BMC Musculoskelet Disord ; 22(1): 844, 2021 Oct 02.
Article in English | MEDLINE | ID: mdl-34600505

ABSTRACT

BACKGROUND: The prevalence of knee osteoarthritis is rising both in Sweden and globally due to increasing age and obesity in the population. This has led to a growing demand for knee arthroplasties. Correct diagnosis and classification of knee osteoarthritis (OA) are therefore of great interest for follow-up and for planning either conservative or operative management. Most orthopedic surgeons rely on standard weight-bearing radiographs of the knee, so improving the reliability and reproducibility of their interpretation could be hugely beneficial. Recently, deep learning, a form of artificial intelligence (AI), has shown promising results in interpreting radiographic images. In this study, we aim to evaluate how well an AI can classify the severity of knee OA using entire image series, without excluding common visual disturbances such as implants, casts, and non-degenerative pathologies. METHODS: We selected 6,103 radiographic exams of the knee taken at Danderyd University Hospital between 2002 and 2016 and manually categorized them according to the Kellgren & Lawrence (KL) grading scale. We then trained a convolutional neural network (CNN) of ResNet architecture using PyTorch. We evaluated the results against a test set of 300 exams that had been reviewed independently by two senior orthopedic surgeons, who settled any interobserver disagreements through consensus sessions. RESULTS: The CNN yielded an AUC of more than 0.87 for every KL grade except KL grade 2, which yielded an AUC of 0.8; the mean AUC across grades was 0.92. When merging adjacent KL grades, all but one group showed near-perfect results with AUC > 0.95, indicating excellent performance. CONCLUSION: We found that a CNN can be taught to correctly diagnose and classify the severity of knee OA using the KL grading system without removing major visual disturbances such as implants and other pathologies from the input data.
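The abstract names a ResNet CNN trained with PyTorch; the sketch below shows one minimal version of such a setup, with a torchvision ResNet whose classification head is replaced to predict a KL grade (0-4). The backbone choice, hyperparameters, and training step are illustrative assumptions, not the study's actual configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_KL_GRADES = 5  # Kellgren & Lawrence grades 0-4

# ImageNet-pretrained ResNet with the final layer replaced for KL grading.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_KL_GRADES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, kl_grades: torch.Tensor) -> float:
    """One optimization step on a batch of knee radiographs (hypothetical loader)."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)               # shape: (batch, NUM_KL_GRADES)
    loss = criterion(logits, kl_grades)
    loss.backward()
    optimizer.step()
    return loss.item()
```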


Subject(s)
Deep Learning , Osteoarthritis, Knee , Adult , Artificial Intelligence , Humans , Knee Joint , Osteoarthritis, Knee/diagnostic imaging , Osteoarthritis, Knee/epidemiology , Osteoarthritis, Knee/surgery , Reproducibility of Results
3.
Acta Orthop ; 92(1): 102-108, 2021 02.
Article in English | MEDLINE | ID: mdl-33103536

ABSTRACT

Background and purpose - Classification of ankle fractures is crucial for guiding treatment, but advanced classifications such as the AO Foundation/Orthopedic Trauma Association (AO/OTA) classification are often too complex for human observers to learn and use. We therefore investigated whether an automated algorithm that uses deep learning can learn to classify radiographs according to the new AO/OTA 2018 standards. Methods - We trained a neural network based on the ResNet architecture on 4,941 radiographic ankle examinations. All images were classified according to the AO/OTA 2018 classification, and a senior orthopedic surgeon (MG) then re-evaluated all images with fractures. We evaluated the network against a test set of 400 patients reviewed independently by 2 expert observers (MG, AS). Results - In the training dataset, about half of the examinations contained fractures. The majority of the fractures were malleolar, of which type B injuries represented almost 60% of the cases. The average area under the receiver operating characteristic curve (AUC) was 0.90 (95% CI 0.82-0.94) for correctly classifying the AO/OTA class; the most common major fracture type, malleolar type B, reached an AUC of 0.93 (CI 0.90-0.95). The poorest performing type was malleolar A fractures, which included avulsions of the fibular tip. Interpretation - We found that a neural network could attain the performance required to aid with detailed ankle fracture classification, and this approach could be scaled up to other body parts. As the type of fracture is an important part of orthopedic decision-making, this is an important step toward computer-assisted decision-making.
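The abstract reports AUCs with 95% confidence intervals; one common way to obtain such intervals is a non-parametric bootstrap over the test set, sketched below with scikit-learn on hypothetical data. The abstract does not state which CI method the authors used, so this is only an assumption for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical labels and scores for a 400-exam test set.
y_true = rng.integers(0, 2, size=400)
y_score = np.clip(0.6 * y_true + rng.normal(0.3, 0.25, size=400), 0, 1)

point_auc = roc_auc_score(y_true, y_score)

boot_aucs = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), size=len(y_true))  # resample with replacement
    if len(np.unique(y_true[idx])) < 2:                   # AUC needs both classes present
        continue
    boot_aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

lo, hi = np.percentile(boot_aucs, [2.5, 97.5])
print(f"AUC {point_auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```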


Subject(s)
Ankle Fractures/classification , Ankle Fractures/diagnostic imaging , Deep Learning , Algorithms , Humans , Radiography , Sweden
4.
Acta Orthop ; 88(6): 581-586, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28681679

ABSTRACT

Background and purpose - Recent advances in artificial intelligence (deep learning) have shown remarkable performance in classifying non-medical images, and the technology is believed to be the next technological revolution. So far it has never been applied in an orthopedic setting, and in this study we sought to determine the feasibility of using deep learning for skeletal radiographs. Methods - We extracted 256,000 wrist, hand, and ankle radiographs from Danderyd's Hospital and identified 4 classes: fracture, laterality, body part, and exam view. We then selected 5 openly available deep learning networks that were adapted for these images. The most accurate network was benchmarked against a gold standard for fractures. We furthermore compared the network's performance with that of 2 senior orthopedic surgeons who reviewed the images at the same resolution as the network. Results - All networks exhibited an accuracy of at least 90% when identifying laterality, body part, and exam view. The final accuracy for fractures was estimated at 83% for the best performing network, which performed similarly to the senior orthopedic surgeons when presented with images at the same resolution as the network. Cohen's kappa between the 2 reviewers under these conditions was 0.76. Interpretation - This study supports the use of artificial intelligence for orthopedic radiographs, where it can perform at a human level. While the current implementation lacks important features that surgeons require, e.g. risk of dislocation, classifications, measurements, and combining multiple exam views, these problems have technical solutions that are waiting to be implemented for orthopedics.
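The inter-rater agreement reported above is Cohen's kappa; the sketch below shows how such a value can be computed with scikit-learn from two reviewers' ratings. The ratings are hypothetical toy data, not the study's.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical fracture/no-fracture ratings from two independent reviewers.
reviewer_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
reviewer_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa = {kappa:.2f}")  # about 0.58 for these toy ratings
```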


Subject(s)
Artificial Intelligence , Fractures, Bone/diagnosis , Radiographic Image Enhancement , Radiography/methods , Humans , Reproducibility of Results
5.
J Educ Health Promot ; 13: 165, 2024.
Article in English | MEDLINE | ID: mdl-39268417

ABSTRACT

INTRODUCTION: In Iran, the triage of patients in emergency departments is performed by nurses. Attention to nurses' triage ability is necessary in order to have a correct picture of the status of the emergency department, so the aim of this study was to investigate the quality of nurses' triage using the Emergency Severity Index (ESI) method and related factors. MATERIALS AND METHODS: This descriptive study was performed on all 900 patients referred to the emergency department over 12 months from 2019 to 2020 in the triage units of two trauma center hospitals affiliated with Isfahan University of Medical Sciences. Data collection tools included a patient demographic checklist, a nurse demographic and occupational checklist, and the ESI triage form. Data were analyzed in SPSS using descriptive and analytic statistics; P < 0.05 was considered statistically significant. RESULTS: No significant difference was observed between the quality of triage by nurses and physicians (P > 0.05). Independent t-tests showed that nurses in the over-triage group had a higher average age and more work experience. At the under-triage level, the frequency of female nurses was significantly higher than that of male nurses (P < 0.05). CONCLUSION: Accurate and fast triage of patients is the key to successful performance in the emergency department. Therefore, correct implementation of triage, identification of nurses' training needs, and identification of existing deficiencies are of utmost importance.
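The results above rest on independent-samples t-tests (run in SPSS); the sketch below shows the equivalent comparison in Python with SciPy, using hypothetical nurse ages rather than the study's data.

```python
from scipy import stats

# Hypothetical ages of nurses in the over-triage and correct-triage groups.
age_over_triage = [38, 41, 45, 39, 44, 47, 42]
age_correct_triage = [31, 34, 29, 36, 33, 30, 35]

t_stat, p_value = stats.ttest_ind(age_over_triage, age_correct_triage)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would indicate a significant difference
```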

6.
PLoS One ; 16(4): e0248809, 2021.
Article in English | MEDLINE | ID: mdl-33793601

ABSTRACT

BACKGROUND: Fractures around the knee joint are inherently complex in terms of treatment; complication rates are high, and they are difficult to diagnose on a plain radiograph. An automated way of classifying radiographic images could improve diagnostic accuracy and would enable the production of uniformly classified records of fractures for use in researching treatment strategies for different fracture types. Recently, deep learning, a form of artificial intelligence (AI), has shown promising results for interpreting radiographs. In this study, we aim to evaluate how well an AI can classify knee fractures according to the detailed 2018 AO-OTA fracture classification system. METHODS: We selected 6,003 radiograph exams taken at Danderyd University Hospital between 2002 and 2016 and manually categorized them according to the AO/OTA classification system and by custom classifiers. We then trained a ResNet-based neural network on these data and evaluated its performance against a test set of 600 exams that two senior orthopedic surgeons had reviewed independently, with disagreements settled through a consensus session. RESULTS: We captured a total of 49 nested fracture classes. Weighted mean AUC was 0.87 for proximal tibia fractures, 0.89 for patella fractures, and 0.89 for distal femur fractures. Almost three-quarters of the AUC estimates were above 0.8, and of these more than half reached an AUC of 0.9 or above, indicating excellent performance. CONCLUSION: Our study shows that neural networks can be used not only for fracture identification but also for more detailed classification of fractures around the knee joint.
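The abstract summarizes performance per body region as a weighted mean AUC over nested fracture classes; one plausible weighting, by the number of positive cases per class, is sketched below with hypothetical numbers. The abstract does not specify the exact weighting scheme, so treat this only as an illustration.

```python
import numpy as np

# Hypothetical per-class AUCs and positive-case counts for one body region.
class_aucs = np.array([0.92, 0.85, 0.88, 0.79])
class_counts = np.array([120, 45, 60, 15])

weighted_mean_auc = np.average(class_aucs, weights=class_counts)
print(f"Weighted mean AUC = {weighted_mean_auc:.2f}")
```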


Subject(s)
Artificial Intelligence , Femoral Fractures/diagnostic imaging , Image Processing, Computer-Assisted/methods , Tibial Fractures/diagnostic imaging , Humans
7.
IEEE Trans Pattern Anal Mach Intell ; 38(9): 1790-802, 2016 09.
Article in English | MEDLINE | ID: mdl-26584488

ABSTRACT

Evidence is mounting that Convolutional Networks (ConvNets) are the most effective representation learning method for visual recognition tasks. In the common scenario, a ConvNet is trained on a large labeled dataset (source), and the feed-forward unit activations of the trained network at a certain layer are used as a generic representation of an input image for a task with a relatively smaller training set (target). Recent studies have shown this form of representation transfer to be suitable for a wide range of target visual recognition tasks. This paper introduces and investigates several factors affecting the transferability of such representations. These include parameters of source ConvNet training, such as its architecture and the distribution of the training data, as well as parameters of feature extraction, such as the layer of the trained ConvNet and dimensionality reduction. By optimizing these factors, we show that significant improvements can be achieved on 17 different visual recognition tasks. We further show that these tasks can be categorically ordered by their similarity to the source task, such that a correlation is observed between the performance on a task and its similarity to the source task with respect to the proposed factors.
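A minimal sketch of the representation-transfer scenario the paper studies: activations from one layer of a ConvNet trained on a large source dataset are used as generic features for a smaller target task. The backbone, layer, and linear classifier below are illustrative assumptions, not the paper's exact models or protocol.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression

# Source-trained ConvNet (here an ImageNet-pretrained ResNet-18) with the classifier removed,
# so the penultimate-layer activations serve as a generic image representation.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.eval()

@torch.no_grad()
def extract_features(images: torch.Tensor) -> torch.Tensor:
    """Return one feature vector per image from the chosen layer."""
    return feature_extractor(images).flatten(1)  # shape: (batch, 512)

# Hypothetical small target task: 32 images with binary labels.
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 2, (32,))

features = extract_features(images).numpy()
clf = LogisticRegression(max_iter=1000).fit(features, labels.numpy())
print("Target-task training accuracy:", clf.score(features, labels.numpy()))
```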
