Results 1 - 5 of 5
1.
Sci Rep ; 14(1): 7696, 2024 04 02.
Article in English | MEDLINE | ID: mdl-38565576

ABSTRACT

The modified total Sharp score (mTSS) is often used as an evaluation index for joint destruction caused by rheumatoid arthritis. In this study, special findings (ankylosis, subluxation, and dislocation) are detected with deep neural networks (DNNs) to support estimation of the mTSS. The proposed method detects and classifies finger joint regions using an ensemble mechanism that integrates multiple DNN detection models, specifically single shot multibox detectors, each trained on different data for one special finding. For the learning phase, we prepared a total of 260 hand X-ray images in which proximal interphalangeal (PIP) and metacarpophalangeal (MP) joints were annotated with mTSS by skilled rheumatologists and radiologists. We evaluated the model using five-fold cross-validation. The proposed ensemble produced higher detection accuracy, recall, precision, specificity, F-value, and intersection over union than the individual detection models for both ankylosis and subluxation detection, with a detection rate above 99.8% for the MP and PIP joint regions. Our future research will aim to develop an automatic diagnosis system that uses the proposed mTSS model to estimate the erosion and joint space narrowing scores.
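To make the ensemble mechanism concrete, below is a minimal sketch, assuming PyTorch/torchvision single shot multibox detectors: one detector per special finding, with the per-model boxes merged by non-maximum suppression. The checkpoint paths, class count, and thresholds are illustrative assumptions, not the authors' released code, and the paper's actual merging rule may differ.

```python
# Hypothetical ensemble of per-finding SSD detectors for hand X-ray images.
import torch
import torchvision
from torchvision.ops import nms

FINDINGS = ["ankylosis", "subluxation", "dislocation"]  # one detector per finding

def load_detector(checkpoint_path: str):
    # SSD300/VGG16 fine-tuned on joint regions for a single special finding;
    # num_classes=3 (background + MP + PIP) is a placeholder.
    model = torchvision.models.detection.ssd300_vgg16(weights=None, num_classes=3)
    model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    return model.eval()

@torch.no_grad()
def ensemble_detect(image: torch.Tensor, detectors: dict, iou_thr=0.5, score_thr=0.5):
    """Run every per-finding detector on one image (C, H, W in [0, 1]) and
    merge overlapping joint-region boxes with non-maximum suppression."""
    all_boxes, all_scores, all_findings = [], [], []
    for finding, model in detectors.items():
        out = model([image])[0]                  # torchvision detection output dict
        conf = out["scores"] >= score_thr
        all_boxes.append(out["boxes"][conf])
        all_scores.append(out["scores"][conf])
        all_findings += [finding] * int(conf.sum())
    boxes = torch.cat(all_boxes) if all_boxes else torch.empty(0, 4)
    scores = torch.cat(all_scores) if all_scores else torch.empty(0)
    keep = nms(boxes, scores, iou_thr)           # suppress duplicate regions
    return boxes[keep], scores[keep], [all_findings[i] for i in keep.tolist()]

# Usage (checkpoint files are hypothetical):
# detectors = {f: load_detector(f"ssd_{f}.pt") for f in FINDINGS}
# boxes, scores, findings = ensemble_detect(image_tensor, detectors)
```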


Subject(s)
Ankylosis , Joint Dislocations , Humans , Radiography , Hand/diagnostic imaging , Finger Joint , Neural Networks, Computer , Ankylosis/diagnostic imaging , Joint Dislocations/diagnostic imaging
2.
PLoS One ; 18(2): e0281088, 2023.
Article in English | MEDLINE | ID: mdl-36780446

ABSTRACT

We propose a wrist joint subluxation/ankylosis classification model for an automatic radiographic scoring system for X-ray images. In managing rheumatoid arthritis, the evaluation of joint destruction is important. The modified total Sharp score (mTSS), which is conventionally used to evaluate joint destruction of the hands and feet, should ideally be automated because the time required depends on the skill of the evaluator and there is variability between evaluators. Since joint subluxation and ankylosis are assigned large scores in the mTSS, we aimed to estimate subluxation and ankylosis using a deep neural network as a first step toward developing an automatic radiographic scoring system for joint destruction. We randomly extracted 216 hand X-ray images from an electronic medical record system for the learning experiments. These images were acquired from patients who visited the rheumatology department of Keio University Hospital in 2015. Using our newly developed annotation tool, well-trained rheumatologists and radiologists assigned mTSS labels to the wrist, metacarpophalangeal joints, and proximal interphalangeal joints included in the images. We identified 21 X-ray images containing one or more subluxated joints and 42 X-ray images with ankylosis. To predict subluxation/ankylosis, we conducted five-fold cross-validation with deep neural network models: AlexNet, ResNet, DenseNet, and Vision Transformer. The best performance on wrist subluxation/ankylosis classification was as follows: accuracy, precision, recall, F1 score, and AUC were 0.97±0.01/0.89±0.04, 0.92±0.12/0.77±0.15, 0.77±0.16/0.71±0.13, 0.82±0.11/0.72±0.09, and 0.92±0.08/0.85±0.07, respectively (reported as subluxation/ankylosis). Although the deep-neural-network classifier was trained with a relatively small dataset, it showed good accuracy. In conclusion, we provide data collection and model training schemes for mTSS prediction, an important step toward building an automated scoring system.
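As a companion sketch for the training protocol, the snippet below sets up stratified five-fold cross-validation and the four backbones named in the abstract, assuming PyTorch, torchvision, and scikit-learn. The concrete variants (ResNet-50, DenseNet-121, ViT-B/16), the binary head, and the omitted training loop are assumptions rather than the authors' implementation.

```python
# Hypothetical five-fold cross-validation scaffold for wrist subluxation/
# ankylosis classification.
import numpy as np
import torch.nn as nn
import torchvision.models as tvm
from sklearn.model_selection import StratifiedKFold

def build_model(name: str, num_classes: int = 2) -> nn.Module:
    """Create a backbone and swap its ImageNet head for a binary head."""
    if name == "alexnet":
        m = tvm.alexnet(weights=None)
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, num_classes)
    elif name == "resnet":
        m = tvm.resnet50(weights=None)
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    elif name == "densenet":
        m = tvm.densenet121(weights=None)
        m.classifier = nn.Linear(m.classifier.in_features, num_classes)
    elif name == "vit":
        m = tvm.vit_b_16(weights=None)
        m.heads.head = nn.Linear(m.heads.head.in_features, num_classes)
    else:
        raise ValueError(f"unknown backbone: {name}")
    return m

def five_fold_splits(labels, n_splits: int = 5, seed: int = 0):
    """Yield stratified train/validation indices; stratification keeps the
    rare positive (subluxation or ankylosis) images spread across folds."""
    labels = np.asarray(labels)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    yield from skf.split(np.zeros(len(labels)), labels)

# Usage: for each backbone and fold, build a fresh model, train it with a
# standard PyTorch loop (omitted here), then accumulate accuracy, precision,
# recall, F1, and AUC per fold:
# for train_idx, val_idx in five_fold_splits(labels):
#     model = build_model("vit")
```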


Subject(s)
Ankylosis , Arthritis, Rheumatoid , Deep Learning , Hand Joints , Joint Dislocations , Humans , Arthritis, Rheumatoid/diagnostic imaging , Ankylosis/diagnostic imaging , Joint Dislocations/diagnostic imaging
3.
Neural Netw ; 155: 119-143, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36054984

ABSTRACT

The training data distribution is often biased towards objects in certain orientations and illumination conditions. While humans have a remarkable capability of recognizing objects in out-of-distribution (OoD) orientations and illuminations, Deep Neural Networks (DNNs) suffer severely in this case, even when large amounts of training examples are available. Neurons that are invariant to orientations and illuminations have been proposed as a neural mechanism that could facilitate OoD generalization, but it is unclear how to encourage the emergence of such invariant neurons. In this paper, we investigate three different approaches that lead to the emergence of invariant neurons and substantially improve DNNs in recognizing objects in OoD orientations and illuminations. Namely, these approaches are (i) training much longer after convergence of the in-distribution (InD) validation accuracy, i.e., late-stopping, (ii) tuning the momentum parameter of the batch normalization layers, and (iii) enforcing invariance of the neural activity in an intermediate layer to orientation and illumination conditions. Each of these approaches substantially improves the DNN's OoD accuracy (by more than 20% in some cases). We report results on four datasets: two are modified from the MNIST and iLab datasets, and the other two are novel (one of 3D rendered cars and another of objects imaged under various controlled orientations and illumination conditions). These datasets allow us to study the effects of different amounts of bias and are challenging because DNNs perform poorly on them in OoD conditions. Finally, we demonstrate that even though the three approaches focus on different aspects of DNNs, they all tend to lead to the same underlying neural mechanism behind the OoD accuracy gains: individual neurons in the intermediate layers become invariant to OoD orientations and illuminations. We anticipate this study to be a basis for further improvement of deep neural networks' OoD generalization performance, which is in high demand for safe and fair AI applications.
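Two of the three ingredients lend themselves to a compact sketch: tuning the batch-normalization momentum and penalizing differences between intermediate activations of the same object under two orientation/illumination conditions. The snippet below is a minimal PyTorch illustration; the ResNet-18 backbone, the chosen layer, and the loss weight are assumptions, not the paper's exact configuration.

```python
# Hypothetical set-up for encouraging orientation/illumination-invariant
# intermediate activations.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as tvm

def set_bn_momentum(model: nn.Module, momentum: float) -> None:
    # (ii) control how quickly the BatchNorm running statistics adapt
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.momentum = momentum

class InvarianceNet(nn.Module):
    """Return class logits plus an intermediate feature vector so an
    invariance penalty can be applied to the latter."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        base = tvm.resnet18(weights=None)
        self.backbone = nn.Sequential(*list(base.children())[:-2])  # up to the last conv block
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x):
        feat = self.pool(self.backbone(x)).flatten(1)
        return self.fc(feat), feat

def paired_invariance_loss(model, view_a, view_b, labels, lam=1.0):
    # (iii) cross-entropy on both views plus an L2 term pulling the
    # intermediate activations of the two views together
    logits_a, feat_a = model(view_a)
    logits_b, feat_b = model(view_b)
    ce = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
    return ce + lam * (feat_a - feat_b).pow(2).mean()
```

Late-stopping, the first ingredient, needs no code beyond continuing training after the in-distribution validation accuracy has converged.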


Subject(s)
Lighting , Pattern Recognition, Visual , Humans , Pattern Recognition, Visual/physiology , Photic Stimulation , Neurons/physiology , Neural Networks, Computer
4.
Sensors (Basel) ; 21(11), 2021 May 31.
Article in English | MEDLINE | ID: mdl-34072738

ABSTRACT

Deep neural networks (DNNs) are being actively studied in robotics because of their strong performance. However, existing robotic techniques and DNNs have not been systematically integrated, and packages for beginners have yet to be developed. In this study, we propose a basic educational kit for robotic system development with DNNs. Our goal is to educate beginners in both robotics and machine learning, especially the use of DNNs. We required the kit to (1) be easy to understand, (2) employ experience-based learning, and (3) be applicable in many areas. To clarify the learning objectives and the important parts of the kit, we analyzed the research and development (R&D) process for DNNs and divided it into three steps: data collection (DC), machine learning (ML), and task execution (TE). These steps are organized in a hierarchical system flow so that each can be executed individually during development. To evaluate the practicality of the proposed system flow, we implemented it for a physical robotic grasping system using robotics middleware. We also demonstrated that the proposed system can be effectively applied to other hardware, sensor inputs, and robot tasks.
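The three-step flow can be pictured with a short sketch in which data collection (DC), machine learning (ML), and task execution (TE) are interchangeable steps that run individually or chained. The class and function names below are illustrative placeholders, not the kit's actual API.

```python
# Hypothetical DC -> ML -> TE pipeline skeleton.
from abc import ABC, abstractmethod

class Step(ABC):
    @abstractmethod
    def run(self, context: dict) -> dict:
        """Consume the shared context and return it updated."""

class DataCollection(Step):
    def run(self, context):
        # placeholder: in the kit this would log sensor data (e.g. camera frames)
        context["dataset"] = []
        return context

class MachineLearning(Step):
    def run(self, context):
        # placeholder: in the kit this would train a DNN on the collected data
        context["model"] = {"weights": None, "trained_on": len(context.get("dataset", []))}
        return context

class TaskExecution(Step):
    def run(self, context):
        # placeholder: in the kit this would drive the robot via middleware
        print("executing task with", context.get("model"))
        return context

def run_pipeline(steps, context=None):
    context = context or {}
    for step in steps:
        context = step.run(context)
    return context

# Steps can be run individually during development...
# run_pipeline([DataCollection()])
# ...or chained end to end:
# run_pipeline([DataCollection(), MachineLearning(), TaskExecution()])
```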

5.
Int J Med Inform ; 141: 104231, 2020 09.
Article in English | MEDLINE | ID: mdl-32682317

ABSTRACT

BACKGROUND: Automated classification of glomerular pathological findings is potentially beneficial for establishing an efficient and objective diagnosis in renal pathology. Previous studies have verified artificial intelligence (AI) models for classifying global sclerosis and glomerular cell proliferation, but several other glomerular findings are required for diagnosis, and comprehensive models covering these major findings have not yet been reported. Whether cooperation between such AI models and clinicians improves diagnostic performance also remains unknown. Here, we developed AI models that classify glomerular images for the major findings required for pathological diagnosis and investigated whether those models could improve the diagnostic performance of nephrologists.
METHODS: We used a dataset of 283 kidney biopsy cases comprising 15,888 glomerular images annotated by a total of 25 nephrologists. AI models for seven pathological findings (global sclerosis, segmental sclerosis, endocapillary proliferation, mesangial matrix accumulation, mesangial cell proliferation, crescent, and basement membrane structural changes) were constructed by fine-tuning the InceptionV3 convolutional neural network. We then compared agreement with the truth labels between the majority decision among nephrologists with and without the AI model included as a voter.
RESULTS: Our model for global sclerosis showed high performance (area under the curve: periodic acid-Schiff, 0.986; periodic acid methenamine silver, 0.983); the models for the other findings also performed close to the level of the nephrologists. Adding the AI model's output to the majority decision among nephrologists improved sensitivity for 10 of the 14 constructed models (four statistically significant) and specificity for eight models (five significant).
CONCLUSION: Our study provides a proof of concept for classifying multiple glomerular findings with a comprehensive deep learning approach and suggests its potential effectiveness in improving the diagnostic accuracy of clinicians.
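Two parts of the method can be sketched briefly: replacing the InceptionV3 head for one binary finding, and counting the model as one more voter in the majority decision. The snippet below assumes PyTorch/torchvision; the head sizes, binary voting, and tie-breaking rule are assumptions, since the abstract does not specify the training framework or the exact voting procedure.

```python
# Hypothetical per-finding classifier and AI-in-the-loop majority vote.
import torch.nn as nn
import torchvision.models as tvm

def build_finding_classifier(num_classes: int = 2) -> nn.Module:
    # Fine-tuning target: InceptionV3 with its main and auxiliary heads
    # replaced by binary heads for one glomerular finding.
    model = tvm.inception_v3(weights=None, aux_logits=True)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes)
    return model

def majority_decision(human_votes, ai_vote=None):
    """Binary majority vote over the nephrologists' labels, optionally adding
    the AI model's prediction as one more voter. A tie counts as positive
    here; the abstract does not state the paper's tie-breaking rule."""
    votes = list(human_votes) + ([ai_vote] if ai_vote is not None else [])
    return int(sum(votes) * 2 >= len(votes))

# Example: five nephrologists vote 1,0,1,0,0 -> 0 without the AI voter,
# but adding an AI vote of 1 makes it a 3-3 tie, read here as 1.
# majority_decision([1, 0, 1, 0, 0])            -> 0
# majority_decision([1, 0, 1, 0, 0], ai_vote=1) -> 1
```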


Subject(s)
Artificial Intelligence , Deep Learning , Humans , Intelligence , Nephrologists , Neural Networks, Computer