Results 1 - 15 of 15
1.
Food Chem ; 448: 139062, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38531297

ABSTRACT

Avenanthramide-C (AVN-C) is a biomarker for oat with a variety of physiological functions, but its application is constrained by low stability and bioavailability. This study evaluated the potential of yeast cell (YC) and yeast cell wall (YCW) capsules as delivery systems for stabilizing AVN-C. These yeast capsules possessed an ellipsoidal morphology and an intact structure without visible pores. The YCW capsules exhibited higher encapsulation and loading capacity owing to their large internal space. The interaction of the yeast capsules with AVN-C involved hydrophobic interactions and hydrogen bonding. Moreover, the loading of AVN-C induced high hydrophobicity inside the yeast capsules, which helped to protect AVN-C against degradation and to release it in a slow, sustained manner in the simulated gastrointestinal tract. The YCW capsules therefore have potential as a controlled delivery system for AVN-C, which could be used as a nutraceutical and added to functional foods.
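The encapsulation and loading comparison above rests on two standard definitions; the minimal sketch below shows how encapsulation efficiency and loading capacity are typically computed from measured masses. The function names and example values are illustrative, not data from the study.

```python
def encapsulation_efficiency(encapsulated_mg, total_added_mg):
    """Fraction of AVN-C that ended up inside the capsules, as a percentage."""
    return 100.0 * encapsulated_mg / total_added_mg

def loading_capacity(encapsulated_mg, capsule_mass_mg):
    """Mass of AVN-C carried per unit mass of capsule material, as a percentage."""
    return 100.0 * encapsulated_mg / capsule_mass_mg

# Illustrative numbers only, not measured values from the study
print(encapsulation_efficiency(4.2, 10.0))  # 42.0 %
print(loading_capacity(4.2, 50.0))          # 8.4 %
```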


Subject(s)
Avena , Capsules , Cell Wall , Saccharomyces cerevisiae , ortho-Aminobenzoates , Avena/chemistry , ortho-Aminobenzoates/chemistry , Capsules/chemistry , Cell Wall/chemistry , Saccharomyces cerevisiae/chemistry , Saccharomyces cerevisiae/metabolism , Biomarkers , Hydrophobic and Hydrophilic Interactions
2.
J Cancer Res Clin Oncol ; 150(2): 87, 2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38336926

ABSTRACT

PURPOSE: To assess the performance of radiomics-based analysis of contrast-enhanced computed tomography (CE-CT) images for distinguishing gastric schwannoma (GS) from gastric gastrointestinal stromal tumor (GIST). METHODS: Forty-nine patients with GS and 253 with GIST were enrolled in this retrospective study. CT features were evaluated by two associate chief radiologists. Radiomics features were extracted from portal venous phase images using Pyradiomics software. A non-radiomics dataset (a combination of clinical characteristics and radiologist-determined CT features) and a radiomics dataset were used to build stepwise logistic regression and least absolute shrinkage and selection operator (LASSO) logistic regression models, respectively. Model performance was evaluated in terms of sensitivity, specificity, accuracy, and the receiver operating characteristic (ROC) curve, and Delong's test was applied to compare the area under the curve (AUC) between models. RESULTS: A total of 1223 radiomics features were extracted from portal venous phase images. After dimension reduction based on Pearson correlation coefficients (PCCs), 20 radiomics features and 20 clinical characteristics + CT features were used to build the respective models. The AUC values for the models using radiomics features and those using clinical features exceeded 0.900 in both the training and validation groups. There were no significant differences in predictive performance between the radiomics and clinical data models according to Delong's test. CONCLUSION: A radiomics-based model applied to CE-CT images showed predictive performance comparable to that of senior physicians in differentiating GS from GIST.
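As a rough illustration of the pipeline described above, the sketch below extracts radiomics features from one portal venous phase image/mask pair with PyRadiomics and fits an L1-penalized (LASSO-style) logistic regression. File paths, the placeholder feature matrix, and the labels are assumptions, not the study's actual configuration.

```python
import numpy as np
from radiomics import featureextractor          # PyRadiomics
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Extract features for one case (paths are placeholders)
extractor = featureextractor.RadiomicsFeatureExtractor()
features = extractor.execute("case001_portal_venous.nii.gz", "case001_mask.nii.gz")
numeric = {k: v for k, v in features.items() if not k.startswith("diagnostics")}

# Suppose X holds the feature matrix for all cases and y the GS (1) / GIST (0) labels
X = np.random.rand(302, len(numeric))   # placeholder matrix for 49 GS + 253 GIST cases
y = np.random.randint(0, 2, 302)        # placeholder labels

# An L1-penalized logistic regression stands in for the LASSO selection step
model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
model.fit(X, y)
```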


Subject(s)
Gastrointestinal Stromal Tumors , Neurilemmoma , Stomach Neoplasms , Humans , Gastrointestinal Stromal Tumors/diagnostic imaging , Radiomics , Retrospective Studies , Stomach Neoplasms/diagnostic imaging , Tomography, X-Ray Computed
3.
Eur Radiol ; 2023 Nov 08.
Article in English | MEDLINE | ID: mdl-37938384

ABSTRACT

OBJECTIVES: To develop a nomogram model based on deep learning features and radiomics features for the prediction of early hematoma expansion. METHODS: A total of 561 cases of spontaneous intracerebral hemorrhage (sICH) with baseline noncontrast computed tomography (NCCT) were included. Hematoma detection was evaluated by Intersection over Union (IoU), Dice coefficient (Dice), and accuracy (ACC). The semantic features of sICH were classified with an EfficientNet-B0 model. Radiomics analysis was performed on the region of interest that was automatically segmented by deep learning. A combined model for predicting early hematoma expansion was constructed using multivariate binary logistic regression, and a nomogram and calibration curve were drawn; predictive efficacy was verified by ROC analysis. RESULTS: The accuracy of hematoma detection by the segmentation model was 98.2% for IoU greater than 0.6 and 76.5% for IoU greater than 0.8 in the training cohort; in the validation cohort, the accuracy was 86.6% for IoU greater than 0.6 and 70.0% for IoU greater than 0.8. The AUCs of the deep learning model for semantic feature classification ranged from 0.95 to 0.99 in the training cohort and from 0.71 to 0.83 in the validation cohort. The deep learning radiomics model showed better performance, with higher AUCs in the training cohort (0.87), internal validation cohort (0.83), and external validation cohort (0.82), than either the semantic features or the Radscore alone. CONCLUSION: The combined model based on deep learning features and radiomics features is useful for judging the risk grade of hematoma. CLINICAL RELEVANCE STATEMENT: Our study revealed that the deep learning model can significantly improve the efficiency of segmentation and semantic feature classification of spontaneous intracerebral hemorrhage. The combined model has good predictive efficiency for early hematoma expansion. KEY POINTS: • We employ a deep learning algorithm to perform segmentation and semantic feature classification of spontaneous intracerebral hemorrhage and construct a prediction model for early hematoma expansion. • The deep learning radiomics model shows favorable performance for the prediction of early hematoma expansion. • The combined model holds potential as a tool for judging the risk grade of hematoma.
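Hematoma detection above is scored with IoU and Dice; a minimal sketch of how these overlap metrics are computed for binary masks is shown below. This is generic metric code, independent of the study's segmentation pipeline.

```python
import numpy as np

def iou_and_dice(pred_mask: np.ndarray, true_mask: np.ndarray):
    """Intersection over Union and Dice coefficient for binary masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    total = pred.sum() + true.sum()
    iou = intersection / union if union else 1.0
    dice = 2 * intersection / total if total else 1.0
    return iou, dice

# Illustrative call on random placeholder masks
iou, dice = iou_and_dice(np.random.rand(64, 64, 32) > 0.5,
                         np.random.rand(64, 64, 32) > 0.5)
```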

4.
Eur Spine J ; 2023 Oct 03.
Article in English | MEDLINE | ID: mdl-37787781

ABSTRACT

PURPOSE: To develop a deep learning-based cascaded HRNet model to automatically measure X-ray imaging parameters of lumbar sagittal curvature and to evaluate its prediction performance. METHODS: A total of 3730 lumbar lateral digital radiography (DR) images were collected from a picture archiving and communication system (PACS). Of these, 3150 images were randomly selected as the training and validation datasets, and 580 images as the test dataset. The landmarks of the lumbar curve index (LCI), lumbar lordosis angle (LLA), sacral slope (SS), lumbar lordosis index (LLI), and the posterior edge tangent angle of the vertebral body (PTA) were identified and marked. The landmark-based measurements on the test dataset were compared with the mean values of manual measurement as the reference standard. The percentage of correct key-points (PCK), intra-class correlation coefficient (ICC), Pearson correlation coefficient (r), mean absolute error (MAE), mean square error (MSE), root-mean-square error (RMSE), and Bland-Altman plots were used to evaluate the performance of the cascaded HRNet model. RESULTS: The PCK of the cascaded HRNet model was 97.9-100% at the 3 mm distance threshold. The mean differences between the reference standard and the predicted values for LCI, LLA, SS, LLI, and PTA were 0.43 mm, 0.99°, 1.11°, 0.01 mm, and 0.23°, respectively. There were strong correlation and consistency for the five parameters between the cascaded HRNet model and manual measurements (ICC = 0.989-0.999, r = 0.991-0.999, MAE = 0.63-1.65, MSE = 0.61-4.06, RMSE = 0.78-2.01). CONCLUSION: The cascaded HRNet model based on a deep learning algorithm can accurately identify the sagittal curvature-related landmarks on lateral lumbar DR images and automatically measure the relevant parameters, which is of great value for clinical application.
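A minimal sketch of the PCK metric used above: the fraction of predicted landmarks that fall within a fixed physical distance (here 3 mm) of the reference landmarks. The pixel spacing and coordinates are placeholders.

```python
import numpy as np

def percentage_correct_keypoints(pred_xy, ref_xy, pixel_spacing_mm, threshold_mm=3.0):
    """pred_xy, ref_xy: arrays of shape (n_landmarks, 2) in pixel coordinates."""
    distances_mm = np.linalg.norm(pred_xy - ref_xy, axis=1) * pixel_spacing_mm
    return 100.0 * np.mean(distances_mm <= threshold_mm)

# Illustrative call with placeholder coordinates and spacing
pck = percentage_correct_keypoints(np.array([[100.0, 200.0], [150.0, 260.0]]),
                                   np.array([[101.0, 201.0], [152.0, 258.0]]),
                                   pixel_spacing_mm=0.14)
```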

5.
BMC Musculoskelet Disord ; 23(1): 967, 2022 Nov 08.
Article in English | MEDLINE | ID: mdl-36348426

ABSTRACT

BACKGROUND: The analysis of sagittal intervertebral rotational motion (SIRM) can provide important information for the evaluation of cervical diseases. Deep learning has been widely used in spinal parameter measurements; however, there are few investigations of spinal motion analysis. The purpose of this study was to develop a deep learning-based model for fully automated measurement of SIRM based on flexion-neutral-extension cervical lateral radiographs and to evaluate its applicability to flexion-extension (F/E), flexion-neutral (F/N), and neutral-extension (N/E) motion analysis. METHODS: A total of 2796 flexion, neutral, and extension cervical lateral radiographs from 932 patients were analyzed. Radiographs from 100 patients were randomly selected as the test set, and those from the remaining 832 patients were used for training and validation. Landmarks were annotated for measuring SIRM at five segments from C2/3 to C6/7 for F/E, F/N, and N/E motion. High-Resolution Net (HRNet) was used as the main structure to train the landmark detection network. Landmark performance was assessed according to the percentage of correct key points (PCK) and the mean percentage of correct key points (MPCK). Measurement performance was evaluated by intra-class correlation coefficient (ICC), Pearson correlation coefficient, mean absolute error (MAE), root mean square error (RMSE), and Bland-Altman plots. RESULTS: At a 2-mm distance threshold, the PCK for the model ranged from 94 to 100%. Compared with the reference standards, the model showed high accuracy for SIRM measurements for all segments in F/E and F/N motion. In N/E motion, the model provided reliable measurements from C3/4 to C6/7, but not at C2/3. The model also showed performance comparable to that of the radiologists. CONCLUSIONS: The developed model can automatically measure SIRM on flexion-neutral-extension cervical lateral radiographs with performance comparable to radiologists. It may provide rapid, accurate, and comprehensive information for cervical motion analysis.
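SIRM at a segment is essentially the change, between two positions, of the angle formed between adjacent vertebral reference lines. The sketch below shows that geometric step from annotated landmarks; the landmark pairing and sign convention are assumptions for illustration, not the study's exact definition.

```python
import numpy as np

def line_angle_deg(p1, p2):
    """Orientation (degrees) of the line through two (x, y) landmarks."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return np.degrees(np.arctan2(dy, dx))

def segmental_rotation(upper_flexion, lower_flexion, upper_extension, lower_extension):
    """Rotation of one segment between flexion and extension.

    Each argument is a pair of (x, y) landmarks defining a vertebral reference
    line on the corresponding radiograph."""
    flexion_angle = line_angle_deg(*upper_flexion) - line_angle_deg(*lower_flexion)
    extension_angle = line_angle_deg(*upper_extension) - line_angle_deg(*lower_extension)
    return flexion_angle - extension_angle
```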


Subject(s)
Cervical Vertebrae , Deep Learning , Humans , Cervical Vertebrae/diagnostic imaging , Radiography , Range of Motion, Articular , Neck
6.
Eur Radiol ; 32(11): 7680-7690, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35420306

ABSTRACT

OBJECTIVES: To develop and evaluate the performance of a deep learning and linear regression cascade algorithm for automated assessment of the image layout and position of chest radiographs. METHODS: This retrospective study used 10 quantitative indices to capture radiologists' subjective perceptions of the image layout and position of chest radiographs, including the chest edges, field of view (FOV), clavicles, rotation, scapulae, and symmetry. An automated assessment system was developed using a training dataset of 1025 adult posterior-anterior chest radiographs. The evaluation steps were: (i) use of a CNN framework based on ResNet-34 to obtain measurement parameters for the quantitative indices and (ii) analysis of the quantitative indices with a multiple linear regression model to obtain predicted scores for the layout and position of each chest radiograph. In the testing dataset (n = 100), the performance of the automated system was evaluated using the intraclass correlation coefficient (ICC), Pearson correlation coefficient (r), mean absolute difference (MAD), and mean absolute percentage error (MAPE). RESULTS: Stepwise regression showed a statistically significant relationship between the 10 quantitative indices and the subjective scores (p < 0.05). The deep learning model showed high accuracy in predicting the quantitative indices (ICC = 0.82 to 0.99, r = 0.69 to 0.99, MAD = 0.01 to 2.75). The automatic system provided assessments similar to the mean opinion scores of radiologists regarding image layout (MAPE = 3.05%) and position (MAPE = 5.72%). CONCLUSIONS: Ten quantitative indices correlated well with the radiologists' subjective perceptions of the image layout and position of chest radiographs. The automated system showed high performance in measuring the quantitative indices and assessing image quality. KEY POINTS: • Objective and reliable assessment of the image quality of chest radiographs is important for improving image quality and diagnostic accuracy. • Deep learning can be used for automated measurements of quantitative indices from chest radiographs. • Linear regression can be used for interpretation-based quality assessment of chest radiographs.
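A compact sketch of the second stage described above: mapping the ten CNN-derived quantitative indices to a subjective quality score with multiple linear regression and scoring agreement with MAPE. The arrays are random placeholders and the CNN stage is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error

# Placeholder data: 10 quantitative indices per radiograph and radiologists' mean opinion scores
rng = np.random.default_rng(0)
indices_train, scores_train = rng.random((1025, 10)), rng.uniform(60, 100, 1025)
indices_test, scores_test = rng.random((100, 10)), rng.uniform(60, 100, 100)

reg = LinearRegression().fit(indices_train, scores_train)
predicted = reg.predict(indices_test)
mape_percent = 100 * mean_absolute_percentage_error(scores_test, predicted)
```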


Subject(s)
Deep Learning , Adult , Humans , Radiography, Thoracic/methods , Linear Models , Retrospective Studies , Algorithms
7.
Acad Radiol ; 29(10): 1541-1551, 2022 10.
Article in English | MEDLINE | ID: mdl-35131147

ABSTRACT

RATIONALE AND OBJECTIVES: To develop a deep learning-based system for automatically setting the scan range of low-dose computed tomography (CT) lung cancer screening and to compare its efficiency with radiographers' performance. MATERIALS AND METHODS: This retrospective study was performed using 1984 lung cancer screening low-dose CT scans obtained between November 2019 and May 2020. Of the 1984 CT scans, 600 were used in an observational study to explore the relationship between the scout landmarks and the actual lung boundaries. A further 1144 CT scans were used for the development of a deep learning-based algorithm; this data set was split at an 8:2 ratio into a training set (80%, n = 915) and a validation set (20%, n = 229). The performance of the deep learning algorithm was evaluated in the test set (n = 240) using the actual lung boundaries and the radiographers' scan ranges. RESULTS: The mean differences between the boundaries from the deep learning-based algorithm and the actual lung boundaries were 4.72 ± 3.15 mm for the upper boundary and 16.50 ± 14.06 mm for the lower boundary. The accuracy and over-scanning rates of the scan ranges generated by the system were 97.08% (233/240) and 0% (0/240) for the upper boundary, and 96.25% (231/240) and 29.58% (71/240) for the lower boundary. CONCLUSION: The developed deep learning-based system can predict the low-dose CT scan range for lung cancer screening with high accuracy using only the frontal scout.
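The accuracy and over-scanning figures above compare predicted scan boundaries against the actual lung boundaries; below is a minimal sketch of one plausible way to score a predicted boundary. The tolerance value and the exact criteria are assumptions for illustration, not the study's definitions.

```python
def score_lower_boundary(predicted_mm, actual_lung_mm, tolerance_mm=5.0):
    """Return (covers_lung, over_scanned) for a predicted lower scan boundary.

    predicted_mm / actual_lung_mm: longitudinal positions of the predicted
    boundary and of the true lung base. The range is 'accurate' if it covers
    the lung without excessive margin, and 'over-scanned' otherwise."""
    margin = predicted_mm - actual_lung_mm      # extra coverage beyond the lung base
    covers_lung = margin >= 0                   # no lung tissue cut off
    over_scanned = margin > tolerance_mm        # excessive extra exposure
    return covers_lung, over_scanned
```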


Subject(s)
Deep Learning , Lung Neoplasms , Early Detection of Cancer , Humans , Lung Neoplasms/diagnostic imaging , Retrospective Studies , Tomography, X-Ray Computed/methods
8.
Eur J Med Res ; 27(1): 13, 2022 Jan 25.
Article in English | MEDLINE | ID: mdl-35078525

ABSTRACT

BACKGROUND: Coronavirus disease 2019 (COVID-19) has become a pandemic, and its severity determines management, treatment, and even prognosis. We aimed to develop and validate a radiomics nomogram for identifying patients with severe COVID-19. METHODS: A total of 156 and 104 patients with COVID-19 were enrolled in the primary and validation cohorts, respectively. Radiomics features were extracted from chest CT images. The least absolute shrinkage and selection operator (LASSO) method was used for feature selection and radiomics signature building. Multivariable logistic regression analysis was used to develop a predictive model, and the radiomics signature, abnormal WBC counts, and comorbidity were incorporated and presented as a radiomics nomogram. The performance of the nomogram was assessed through its calibration, discrimination, and clinical usefulness. RESULTS: The radiomics signature, consisting of four selected features, was significantly associated with the clinical condition of patients with COVID-19 in the primary and validation cohorts (P < 0.001). The radiomics nomogram including the radiomics signature, comorbidity, and abnormal WBC counts showed good discrimination of severe COVID-19, with an AUC of 0.972, and good calibration in the primary cohort. Application of the nomogram in the validation cohort still gave good discrimination, with an AUC of 0.978, and good calibration. Decision curve analysis demonstrated that the radiomics nomogram was clinically useful for identifying severe COVID-19. CONCLUSION: We present an easy-to-use radiomics nomogram for identifying patients with severe COVID-19 to guide prompt management and treatment.
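A hedged sketch of the model underlying such a nomogram: a multivariable logistic regression combining a radiomics signature (Rad-score) with abnormal WBC counts and comorbidity, with discrimination summarized by AUC. Variable names and data are placeholders, not the study's cohort.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Placeholder cohort: Rad-score plus two binary clinical variables
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "rad_score": rng.normal(size=156),
    "abnormal_wbc": rng.integers(0, 2, 156),
    "comorbidity": rng.integers(0, 2, 156),
    "severe": rng.integers(0, 2, 156),
})

predictors = ["rad_score", "abnormal_wbc", "comorbidity"]
model = LogisticRegression().fit(df[predictors], df["severe"])
auc = roc_auc_score(df["severe"], model.predict_proba(df[predictors])[:, 1])
```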


Subject(s)
COVID-19/diagnosis , COVID-19/pathology , Nomograms , SARS-CoV-2/pathogenicity , Adult , Cohort Studies , Female , Humans , Male , Middle Aged , Prognosis , Retrospective Studies , Tomography, X-Ray Computed/methods
9.
Eur J Radiol ; 146: 110071, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34864427

ABSTRACT

PURPOSE: To develop a deep learning-based model for automatic measurement of lumbosacral anatomical parameters from lateral lumbar radiographs and to compare its performance with that of attending-level radiologists. METHODS: A total of 1791 lateral lumbar radiographs were collected through the PACS system and used to develop the deep learning-based model. Landmarks for the four parameters, namely the lumbosacral lordosis angle (LSLA), lumbosacral angle (LSA), sacral horizontal angle (SHA), and sacral inclination angle (SIA), were identified and automatically labeled by the model. Measurements derived from the landmarks on the test set were compared with manual measurements as the reference standard. Statistical analyses of the percentage of correct key points (PCK), intra-class correlation coefficient (ICC), Pearson correlation coefficient, mean absolute error (MAE), root mean square error (RMSE), and Bland-Altman plots were performed to evaluate the performance of the model. RESULTS: The mean differences between the reference standard and the model for LSLA, LSA, SHA, and SIA were 0.39°, 0.09°, 0.13°, and 0.12°, respectively. Strong correlation and consistency between the model and the reference standard were found for the four parameters (ICC = 0.92-0.98, r = 0.92-0.97, MAE = 1.35-1.84, RMSE = 1.82-2.51), although a statistically significant difference was found for LSLA (p = 0.02). CONCLUSIONS: The presented model provided clinically equivalent measurements in terms of accuracy, with advantages in cost-effectiveness, reliability, and reproducibility. The model may help clinicians improve their understanding and quantitative evaluation of lumbar diseases and low back pain in practical work. (ChiCTR2100048250)
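A minimal sketch of the Bland-Altman analysis used above: mean difference (bias) and 95% limits of agreement between model and reference measurements, with placeholder arrays.

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(model_vals, reference_vals):
    """Plot model-vs-reference agreement and return bias and 95% limits of agreement."""
    model_vals, reference_vals = np.asarray(model_vals), np.asarray(reference_vals)
    mean = (model_vals + reference_vals) / 2
    diff = model_vals - reference_vals
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    plt.scatter(mean, diff, s=8)
    for y in (bias, bias - loa, bias + loa):
        plt.axhline(y, linestyle="--")
    plt.xlabel("Mean of model and reference (deg)")
    plt.ylabel("Model - reference (deg)")
    return bias, bias - loa, bias + loa
```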


Subject(s)
Artificial Intelligence , Technology , Humans , Lumbar Vertebrae/diagnostic imaging , Radiography , Reproducibility of Results , X-Rays
10.
Front Oncol ; 11: 737302, 2021.
Article in English | MEDLINE | ID: mdl-34950578

ABSTRACT

We aimed to build radiomics models based on triple-phase CT images combined with clinical features to predict the risk rating of gastrointestinal stromal tumors (GISTs). A total of 231 patients with pathologically diagnosed GISTs from July 2012 to July 2020 were divided at a ratio of 7:3 into a training data set (82 patients with high risk, 80 with low risk) and a validation data set (35 patients with high risk, 34 with low risk). Four diagnostic models were constructed by assessing 20 clinical characteristics and 18 radiomic features extracted from a lesion mask based on triple-phase CT images. Receiver operating characteristic (ROC) curves were used to evaluate the diagnostic performance of these models, and the ROC curves were compared using the Delong test in the different data sets. The areas under the ROC curves (AUCs) of model 4 [Clinic + CT value of unenhanced phase (CTU) + CT value of arterial phase (CTA) + CT value of venous phase (CTV)], model 1 (Clinic + CTU), model 2 (Clinic + CTA), and model 3 (Clinic + CTV) were 0.925, 0.894, 0.909, and 0.914 in the training set and 0.897, 0.866, 0.892, and 0.892 in the validation set, respectively. Model 4, model 1, model 2, and model 3 yielded an accuracy of 88.3%, 85.8%, 86.4%, and 84.6%, a sensitivity of 85.4%, 84.2%, 76.8%, and 78.0%, and a specificity of 91.2%, 87.5%, 96.2%, and 91.2% in the training set, and an accuracy of 88.4%, 84.1%, 82.6%, and 82.6%, a sensitivity of 88.6%, 77.1%, 74.3%, and 85.7%, and a specificity of 88.2%, 91.2%, 91.2%, and 79.4% in the validation set, respectively. There was a significant difference between model 4 and model 1 in discriminating the risk rating of gastrointestinal stromal tumors in the training data set (Delong test, p < 0.05). The radiomics models based on clinical features and triple-phase CT images showed excellent accuracy for discriminating the risk rating of GISTs.
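The accuracy, sensitivity, and specificity values above follow directly from each model's confusion matrix; a small sketch of how those summaries and the AUC are typically computed is shown below, with placeholder labels and predicted probabilities.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def summarize(y_true, y_prob, threshold=0.5):
    """Accuracy, sensitivity, specificity, and AUC for a binary risk model."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity, roc_auc_score(y_true, y_prob)

# Placeholder high-risk (1) vs low-risk (0) labels and predicted probabilities
acc, sen, spe, auc = summarize([1, 0, 1, 1, 0, 0], [0.9, 0.2, 0.7, 0.4, 0.1, 0.6])
```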

11.
Front Oncol ; 11: 654413, 2021.
Article in English | MEDLINE | ID: mdl-34249691

ABSTRACT

OBJECTIVE: Tumor spread through air spaces (STAS) is associated with poor differentiation, KRAS mutation, and poor recurrence-free survival. The aim of this study was to evaluate the ability of intra- and perinodular radiomic features to distinguish STAS on non-contrast CT. PATIENTS AND METHODS: This retrospective study included 216 patients with pathologically confirmed lung adenocarcinoma (STAS+, n = 56; STAS-, n = 160). Texture-based features were extracted from the intranodular region and from perinodular regions at 2, 4, 6, 8, 10, and 20 mm from the tumor edge using an erosion and expansion algorithm. Traditional radiologic features were also analyzed, including size, consolidation tumor ratio (CTR), density, shape, vascular change, cystic airspaces, tumor-lung interface, lobulation, spiculation, and satellite sign. Nine radiomic models were established: eight based on the individual VOIs and one combining all eight VOIs (eight-VOI model). The performance of the nine radiomic models in predicting STAS in lung adenocarcinomas was then compared. RESULTS: Among the traditional radiologic features, CTR, an unclear tumor-lung interface, and the satellite sign were significantly associated with STAS, with AUCs of 0.796, 0.677, and 0.606, respectively. The radiomic model combining the tumor body and all perinodular distances (eight-VOI model) had the best performance for predicting STAS+ lung adenocarcinoma, with AUCs of 0.907 (95% CI, 0.862-0.947) in the training set, 0.897 (95% CI, 0.784-0.985) in the testing set, and 0.909 (95% CI, 0.863-0.949) in the external validation set; the diagnostic accuracy in the external validation set was 0.849. CONCLUSION: Radiomic features from the intra- and perinodular regions of nodules can best distinguish STAS in lung adenocarcinoma.
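A hedged sketch of the expansion step described above: growing the tumor mask outward by a fixed physical distance and subtracting the original mask to obtain a perinodular shell. The spacing handling and structuring element are simplified assumptions, not necessarily what the study used.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def perinodular_shell(tumor_mask, spacing_mm, distance_mm):
    """Binary shell extending `distance_mm` outward from the tumor boundary.

    tumor_mask: 3D boolean array; spacing_mm: (z, y, x) voxel size in mm.
    Dilation is applied one voxel per iteration, which is only approximate
    for anisotropic spacing."""
    iterations = max(1, int(round(distance_mm / min(spacing_mm))))
    expanded = binary_dilation(tumor_mask, iterations=iterations)
    return np.logical_and(expanded, np.logical_not(tumor_mask))

# e.g. shells at 2, 4, 6, 8, 10, and 20 mm around a placeholder mask
mask = np.zeros((64, 64, 64), dtype=bool)
mask[28:36, 28:36, 28:36] = True
shells = {d: perinodular_shell(mask, (1.0, 0.7, 0.7), d) for d in (2, 4, 6, 8, 10, 20)}
```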

12.
Abdom Radiol (NY) ; 46(5): 1773-1782, 2021 05.
Article in English | MEDLINE | ID: mdl-33083871

ABSTRACT

OBJECTIVE: To differentiate schwannomas from gastrointestinal stromal tumors (GISTs) using CT features with logistic regression (LR), decision tree (DT), random forest (RF), and gradient boosting decision tree (GBDT) models. METHODS: This study enrolled 49 patients with schwannomas and 139 with GISTs proven by pathology. CT features with P < 0.1 in univariate analysis were input into the four models. Five machine learning (ML) model versions, multivariate analysis, and radiologists' subjective diagnostic performance were compared to evaluate the diagnostic performance of all the traditional and advanced methods. RESULTS: The CT features with P < 0.1 were as follows: (1) CT attenuation value of the unenhanced phase (CTU), (2) portal venous enhancement (CTV), (3) degree of enhancement in the portal venous phase (DEPP), (4) CT attenuation value of the portal venous phase minus that of the arterial phase (CTV-CTA), (5) enhanced potentiality (EP), (6) location, (7) contour, (8) growth pattern, (9) necrosis, (10) surface ulceration, and (11) enlarged lymph nodes (LN). The LR (M1), RF, DT, and GBDT models contained all 11 variables, while LR (M2) was developed using the six most predictive variables derived from M1. The LR (M2) model, with an AUC of 0.967 in the test dataset, was considered the optimal model for differentiating the two tumors. Location in the gastric body, exophytic and mixed growth patterns, lack of necrosis and surface ulceration, enlarged lymph nodes, and larger EP were the CT features most suggestive of schwannoma. CONCLUSION: LR (M2) provided the best diagnostic performance among the ML model versions, multivariate analysis, and radiologists' assessments in differentiating schwannomas from GISTs.
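A minimal sketch of how the four classifier families named above can be compared on the same CT-feature table using cross-validated AUC. The feature matrix and labels are placeholders, and hyperparameters are left at defaults rather than the study's settings.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Placeholder: 11 univariately selected CT features for 188 patients (49 schwannoma, 139 GIST)
rng = np.random.default_rng(2)
X = rng.random((188, 11))
y = np.r_[np.ones(49, dtype=int), np.zeros(139, dtype=int)]

models = {
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
    "GBDT": GradientBoostingClassifier(),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(name, round(auc, 3))
```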


Subject(s)
Gastrointestinal Stromal Tumors , Neurilemmoma , Stomach Neoplasms , Gastrointestinal Stromal Tumors/diagnostic imaging , Humans , Machine Learning , Neurilemmoma/diagnostic imaging , Retrospective Studies , Stomach Neoplasms/diagnostic imaging , Tomography, X-Ray Computed
13.
Med Phys ; 48(1): 169-177, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32974920

ABSTRACT

PURPOSE: Brain tumor segmentation has recently made important progress. However, segmentation performance depends heavily on the quality of the manual labels, which in practice can vary greatly and can substantially mislead the learning process and decrease accuracy. A mechanism combining label correction and sample reweighting is therefore needed to improve the effectiveness of brain tumor segmentation. METHODS: We propose a novel sample reweighting and label refinement method, and a novel three-dimensional (3D) generative adversarial network (GAN) is introduced to combine these two models into a unified framework. RESULTS: Extensive experiments on the BraTS19 dataset demonstrate that our approach obtains competitive results compared with other state-of-the-art approaches when handling false labels in brain tumor segmentation. CONCLUSIONS: The 3D GAN-based approach effectively handles false label masks by simultaneously applying label correction and sample reweighting. Our method is robust to variations in tumor shape and background clutter.
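The full 3D GAN framework is beyond an abstract-level sketch, but the sample-reweighting idea can be illustrated with a per-sample weighted soft-Dice loss in which suspected noisy-label cases receive lower weights. This is an assumption-laden simplification for illustration, not the authors' implementation.

```python
import torch

def weighted_soft_dice_loss(pred, target, sample_weights, eps=1e-6):
    """pred, target: (batch, ...) tensors with values in [0, 1];
    sample_weights: (batch,) tensor that downweights suspected noisy labels."""
    dims = tuple(range(1, pred.dim()))
    intersection = (pred * target).sum(dim=dims)
    denom = pred.sum(dim=dims) + target.sum(dim=dims)
    dice = (2 * intersection + eps) / (denom + eps)            # per-sample soft Dice
    return ((1 - dice) * sample_weights).sum() / sample_weights.sum()

# Illustrative call on random volumes; the second sample is downweighted
loss = weighted_soft_dice_loss(torch.rand(2, 1, 16, 16, 16),
                               (torch.rand(2, 1, 16, 16, 16) > 0.5).float(),
                               torch.tensor([1.0, 0.3]))
```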


Subject(s)
Brain Neoplasms , Image Processing, Computer-Assisted , Brain Neoplasms/diagnostic imaging , Humans
14.
Eur J Radiol ; 132: 109303, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33017773

ABSTRACT

PURPOSE: To develop and evaluate an automatic measurement model for hip joints based on anteroposterior (AP) pelvic radiography and a deep learning algorithm. METHODS: A total of 1260 AP pelvic radiographs were included; 1060 radiographs were randomly sampled for training and validation, and 200 radiographs were used as the test set. Landmarks for four commonly used parameters, namely the center-edge (CE) angle of Wiberg, Tönnis angle, sharp angle, and femoral head extrusion index (FHEI), were identified and labeled. An encoder-decoder convolutional neural network was developed to output a multi-channel heat map. Measurements were obtained from the landmarks on the test set, and the right and left hips were analyzed separately. The mean of each parameter obtained by three radiologists was used as the reference standard. The percentage of correct key points (PCK), intraclass correlation coefficient (ICC), Pearson correlation coefficient (r), root mean square error (RMSE), mean absolute error (MAE), and Bland-Altman plots were used to determine the performance of the deep learning algorithm. RESULTS: The PCK of the model at the 3 mm distance threshold ranged from 87% to 100%. The CE angle, Tönnis angle, sharp angle, and FHEI of the left hip generated by the model were 29.8°±6.1°, 5.6°±4.2°, 39.0°±3.5°, and 19%±5%, respectively; the corresponding parameters of the right hip were 30.4°±6.1°, 7.1°±4.4°, 38.9°±3.7°, and 18%±5%. There was good correlation and consistency of the four parameters between the model and the reference standard (ICC 0.83-0.93, r 0.83-0.93, RMSE 0.02-3.27, MAE 0.02-1.79). CONCLUSIONS: The newly developed model based on a deep learning algorithm can accurately identify landmarks on AP pelvic radiographs and automatically generate hip joint parameters, which can facilitate measurement in clinical practice.
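A small sketch of the decoding step implied above: converting each channel of a multi-channel heat map into a landmark coordinate by taking the location of its maximum response. The heat map here is a random placeholder, and argmax decoding is only one common choice.

```python
import numpy as np

def heatmaps_to_landmarks(heatmaps):
    """heatmaps: (n_landmarks, H, W) array; returns (n_landmarks, 2) (x, y) coordinates."""
    coords = []
    for channel in heatmaps:
        y, x = np.unravel_index(np.argmax(channel), channel.shape)
        coords.append((x, y))
    return np.array(coords, dtype=float)

landmarks = heatmaps_to_landmarks(np.random.rand(12, 256, 256))  # placeholder heat map
```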


Subject(s)
Deep Learning , Algorithms , Feasibility Studies , Hip Joint/diagnostic imaging , Humans , Radiography
15.
Eur Radiol ; 30(9): 4974-4984, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32328760

ABSTRACT

OBJECTIVES: To develop and evaluate the performance of a deep learning-based system for automatic patellar height measurements using knee radiographs. METHODS: The deep learning-based algorithm was developed with a data set consisting of 1018 left knee radiographs for the prediction of patellar height parameters, specifically the Insall-Salvati index (ISI), Caton-Deschamps index (CDI), modified Caton-Deschamps index (MCDI), and Keerati index (KI). The performance and generalizability of the algorithm were tested with 200 left knee and 200 right knee radiographs, respectively. The intra-class correlation coefficient (ICC), Pearson correlation coefficient, mean absolute difference (MAD), root mean square (RMS), and Bland-Altman plots for predictions by the system were evaluated in comparison with manual measurements as the reference standard. RESULTS: Compared with the reference standard, the deep learning-based algorithm showed high accuracy in predicting the ISI, CDI, and KI (left knee ICC = 0.91-0.95, r = 0.84-0.91, MAD = 0.02-0.05, RMS = 0.02-0.07; right knee ICC = 0.87-0.96, r = 0.78-0.92, MAD = 0.02-0.06, RMS = 0.02-0.10), but not the MCDI (left knee ICC = 0.65, r = 0.50, MAD = 0.14, RMS = 0.18; right knee ICC = 0.62, r = 0.47, MAD = 0.15, RMS = 0.20). The performance of the algorithm met or exceeded that of manual determination of ISI, CDI, and KI by radiologists. CONCLUSIONS: In its current state, the developed system can predict the ISI, CDI, and KI for both left and right knee radiographs as accurately as radiologists. Training the system further with more data would increase its utility in helping radiologists measure patellar height in clinical practice. KEY POINTS: • Objective and reliable measurement of patellar height parameters is important for clinical diagnosis and the development of a treatment strategy. • Deep learning can be used to create an automatic patellar height measurement system based on knee radiographs. • The deep learning-based patellar height measurement system achieves comparable performance to radiologists in measuring ISI, CDI, and KI.
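As an illustration of the landmark-based parameters above, the Insall-Salvati index is conventionally defined as patellar tendon length divided by patellar length. The sketch below computes it from three (x, y) landmarks; the landmark names are the usual anatomical points, not necessarily those annotated in the study.

```python
import numpy as np

def insall_salvati_index(patella_superior, patella_inferior, tibial_tubercle):
    """ISI = patellar tendon length / patellar length, from (x, y) landmarks in pixels."""
    tendon_length = np.linalg.norm(np.subtract(tibial_tubercle, patella_inferior))
    patellar_length = np.linalg.norm(np.subtract(patella_inferior, patella_superior))
    return tendon_length / patellar_length

# Placeholder coordinates; values near 1.0 are conventionally considered normal
isi = insall_salvati_index((120, 80), (125, 150), (140, 220))
```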


Subject(s)
Algorithms , Deep Learning , Knee Joint/diagnostic imaging , Osteoarthritis, Knee/diagnosis , Patella/diagnostic imaging , Radiography/methods , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Young Adult