1 - 20 of 47
1.
Arch Iran Med ; 27(4): 183-190, 2024 Apr 01.
Article En | MEDLINE | ID: mdl-38685844

BACKGROUND: Data on the epidemiology of inflammatory bowel disease (IBD) in the Middle East are scarce. We aimed to describe the clinical phenotype, disease course, and medication usage of IBD cases from Iran in the Middle East. METHODS: We conducted a cross-sectional study of registered IBD patients in the Iranian Registry of Crohn's and Colitis (IRCC) from 2017 until 2022. We collected information on demographic characteristics, past medical history, family history, disease extent and location, extra-intestinal manifestations, IBD medications, disease activity (using the IBD-Control-8 questionnaire and the Manitoba IBD index), admissions history, history of colon cancer, and IBD-related surgeries. RESULTS: In total, 9746 patients with ulcerative colitis (UC) (n=7793) or Crohn's disease (CD) (n=1953) were reported. The UC to CD ratio was 3.99. The median age at diagnosis was 29.2 (IQR: 22.6-37.6) and 27.6 (IQR: 20.6-37.6) years for patients with UC and CD, respectively. The male-to-female ratio was 1.28 in CD patients. A positive family history was observed in 17.9% of UC patients. Pancolitis was the most common disease extent among UC patients (47%). Ileocolonic involvement was the most common type of involvement in CD patients (43.7%), and the prevalence of stricturing behavior was 4.6%. A prevalence of 0.3% was observed for colorectal cancer among patients with UC. Moreover, 15.2% of UC patients and 38.4% of CD patients had been treated with anti-tumor necrosis factor (anti-TNF) agents. CONCLUSION: In this national registry-based study, significant differences across geographical locations were observed in some clinical phenotypes, such as the prevalence of extra-intestinal manifestations, and in treatment strategies, such as the use of biologics.


Colitis, Ulcerative , Crohn Disease , Phenotype , Registries , Humans , Iran/epidemiology , Male , Female , Cross-Sectional Studies , Adult , Crohn Disease/epidemiology , Colitis, Ulcerative/epidemiology , Young Adult , Middle Aged , Adolescent
2.
J Imaging Inform Med ; 2024 Apr 01.
Article En | MEDLINE | ID: mdl-38558368

In recent years, the role of Artificial Intelligence (AI) in medical imaging has become increasingly prominent, with the majority of AI applications approved by the FDA in 2023 being in imaging and radiology. The surge in AI model development to tackle clinical challenges underscores the necessity of preparing high-quality medical imaging data. Proper data preparation is crucial as it fosters the creation of standardized and reproducible AI models while minimizing biases. Data curation transforms raw data into a valuable, organized, and dependable resource and is a process fundamental to the success of machine learning and analytical projects. Considering the plethora of available tools for data curation at different stages, it is important to stay informed about the most relevant tools within specific research areas. In the current work, we propose a descriptive outline of the different steps of data curation and furnish, for each stage, compilations of tools collected from a survey administered to members of the Society for Imaging Informatics in Medicine (SIIM). This collection has the potential to enhance the decision-making process for researchers as they select the most appropriate tool for their specific tasks.

3.
J Imaging Inform Med ; 2024 Mar 14.
Article En | MEDLINE | ID: mdl-38483694

The application of deep learning (DL) in medicine introduces transformative tools with the potential to enhance prognosis, diagnosis, and treatment planning. However, ensuring transparent documentation is essential for researchers to enhance reproducibility and refine techniques. Our study addresses the unique challenges presented by DL in medical imaging by developing a comprehensive checklist using the Delphi method to enhance reproducibility and reliability in this dynamic field. We compiled a preliminary checklist based on a comprehensive review of existing checklists and relevant literature. A panel of 11 experts in medical imaging and DL assessed these items using Likert scales, with two survey rounds to refine responses and gauge consensus. We also employed the content validity ratio with a cutoff of 0.59 to determine item face and content validity. Round 1 included a 27-item questionnaire, with 12 items demonstrating high consensus for face and content validity that were then left out of round 2. Round 2 involved refining the checklist, resulting in an additional 17 items. In the last round, 3 items were deemed non-essential or infeasible, while 2 newly suggested items received unanimous agreement for inclusion, resulting in a final 26-item DL model reporting checklist derived from the Delphi process. The 26-item checklist facilitates the reproducible reporting of DL tools and enables scientists to replicate the study's results.
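The 0.59 cutoff for 11 panelists suggests the standard Lawshe content validity ratio, CVR = (n_e - N/2) / (N/2), where n_e is the number of experts rating an item "essential" and N is the panel size. A minimal sketch, with hypothetical panel votes:

```python
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """Lawshe's CVR: (n_e - N/2) / (N/2)."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical checklist item: 10 of 11 experts rate it "essential"
cvr = content_validity_ratio(10, 11)
print(f"CVR = {cvr:.2f}")                       # 0.82
print("retain item" if cvr >= 0.59 else "drop item")
```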

4.
Med Phys ; 2024 Feb 09.
Article En | MEDLINE | ID: mdl-38335175

BACKGROUND: Notwithstanding the encouraging results of previous studies reporting on the efficiency of deep learning (DL) in COVID-19 prognostication, clinical adoption of the developed methodology still needs to be improved. To overcome this limitation, we set out to predict the prognosis of a large multi-institutional cohort of patients with COVID-19 using a DL-based model. PURPOSE: This study aimed to evaluate the performance of deep privacy-preserving federated learning (DPFL) in predicting COVID-19 outcomes using chest CT images. METHODS: After applying inclusion and exclusion criteria, 3055 patients from 19 centers, including 1599 alive and 1456 deceased, were enrolled in this study. Data from all centers were split (randomly with stratification respective to each center and class) into a training/validation set (70%/10%) and a hold-out test set (20%). For the DL model, feature extraction was performed on 2D slices, and averaging was performed at the final layer to construct a 3D model for each scan. The DenseNet model was used for feature extraction. The model was developed using centralized and FL approaches. For FL, we employed DPFL approaches. Membership inference attacks were also evaluated in the FL strategy. For model evaluation, different metrics were reported in the hold-out test sets. In addition, models trained in two scenarios, centralized and FL, were compared using the DeLong test for statistical differences. RESULTS: The centralized model achieved an accuracy of 0.76, while the DPFL model had an accuracy of 0.75. Both the centralized and DPFL models achieved a specificity of 0.77. The centralized model achieved a sensitivity of 0.74, while the DPFL model had a sensitivity of 0.73. Mean AUCs of 0.82 (95% CI: 0.79-0.85) and 0.81 (95% CI: 0.77-0.84) were achieved by the centralized and DPFL models, respectively. The DeLong test did not show statistically significant differences between the two models (p-value = 0.98). The AUC values for the inference attacks fluctuated between 0.49 and 0.51, with an average of 0.50 ± 0.003 and a 95% CI for the mean AUC of 0.500 to 0.501. CONCLUSION: The performance of the proposed model was comparable to centralized models while operating on large and heterogeneous multi-institutional datasets. In addition, the model was resistant to inference attacks, ensuring the privacy of shared data during the training process.
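A minimal sketch of the slice-wise feature extraction with final-layer averaging described in the methods, assuming torchvision's DenseNet-121 backbone; the slice count, input size, and pooling are illustrative rather than the study's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

class SliceAveragedClassifier(nn.Module):
    """Extract 2D features per CT slice, average across slices, then classify the scan."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        backbone = densenet121(weights=None)
        self.features = backbone.features              # 2D feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(1024, n_classes)   # DenseNet-121 feature dimension

    def forward(self, x):                              # x: (batch, slices, 3, H, W)
        b, s = x.shape[:2]
        x = x.flatten(0, 1)                            # treat slices as one big batch
        f = self.pool(self.features(x)).flatten(1)     # (batch*slices, 1024)
        f = f.reshape(b, s, -1).mean(dim=1)            # average slice features per scan
        return self.classifier(f)

logits = SliceAveragedClassifier()(torch.randn(2, 16, 3, 224, 224))
print(logits.shape)                                    # torch.Size([2, 2])
```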

5.
Radiology ; 310(1): e230242, 2024 Jan.
Article En | MEDLINE | ID: mdl-38165243

A Food and Drug Administration (FDA)-cleared artificial intelligence (AI) algorithm misdiagnosed a finding as an intracranial hemorrhage in a patient, who was finally diagnosed with an ischemic stroke. This scenario highlights a notable failure mode of AI tools, emphasizing the importance of human-machine interaction. In this report, the authors summarize the review processes by the FDA for software as a medical device and the unique regulatory designs for radiologic AI/machine learning algorithms to ensure their safety in clinical practice. Then the challenges in maximizing the efficacy of these tools posed by their clinical implementation are discussed.


Algorithms , Artificial Intelligence , United States , Humans , United States Food and Drug Administration , Software , Machine Learning
6.
J Arthroplasty ; 39(3): 727-733.e4, 2024 Mar.
Article En | MEDLINE | ID: mdl-37619804

BACKGROUND: This study introduces THA-Net, a deep learning inpainting algorithm for simulating postoperative total hip arthroplasty (THA) radiographs from a single preoperative pelvis radiograph input, while being able to generate predictions either unconditionally (algorithm chooses implants) or conditionally (surgeon chooses implants). METHODS: The THA-Net is a deep learning algorithm which receives an input preoperative radiograph and subsequently replaces the target hip joint with THA implants to generate a synthetic yet realistic postoperative radiograph. We trained THA-Net on 356,305 pairs of radiographs from 14,357 patients from a single institution's total joint registry and evaluated the validity (quality of surgical execution) and realism (ability to differentiate real and synthetic radiographs) of its outputs against both human-based and software-based criteria. RESULTS: The surgical validity of synthetic postoperative radiographs was significantly higher than their real counterparts (mean difference: 0.8 to 1.1 points on 10-point Likert scale, P < .001), but they were not able to be differentiated in terms of realism in blinded expert review. Synthetic images showed excellent validity and realism when analyzed with already validated deep learning models. CONCLUSION: We developed a THA next-generation templating tool that can generate synthetic radiographs graded higher on ultimate surgical execution than real radiographs from training data. Further refinement of this tool may potentiate patient-specific surgical planning and enable technologies such as robotics, navigation, and augmented reality (an online demo of THA-Net is available at: https://demo.osail.ai/tha_net).
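A conceptual sketch of the conditional versus unconditional templating interface: build a mask over the target hip and hand it, with an optional surgeon-chosen implant, to an inpainting model. The stub model, bounding box, and implant label below are hypothetical placeholders standing in for THA-Net, not its actual API.

```python
import numpy as np

def make_hip_mask(image: np.ndarray, bbox) -> np.ndarray:
    """Binary mask over the target hip joint region to be inpainted."""
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    x1, y1, x2, y2 = bbox
    mask[y1:y2, x1:x2] = 1
    return mask

def template_tha(image, mask, model, implant=None):
    """Conditional (surgeon picks `implant`) or unconditional (model picks) inpainting."""
    condition = {"implant": implant} if implant is not None else {}
    return model(image=image, mask=mask, **condition)   # placeholder inpainting call

# Hypothetical usage with a stub that simply blanks the masked region
stub_model = lambda image, mask, implant=None: np.where(mask[..., None] == 1, 0.0, image)
radiograph = np.random.rand(512, 512, 1).astype(np.float32)
mask = make_hip_mask(radiograph, bbox=(120, 200, 320, 420))
postop = template_tha(radiograph, mask, stub_model, implant="collared_stem_size_5")
```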


Arthroplasty, Replacement, Hip , Deep Learning , Hip Prosthesis , Humans , Arthroplasty, Replacement, Hip/methods , Hip Joint/diagnostic imaging , Hip Joint/surgery , Radiography , Retrospective Studies
7.
J Arthroplasty ; 39(4): 966-973.e17, 2024 Apr.
Article En | MEDLINE | ID: mdl-37770007

BACKGROUND: Revision total hip arthroplasty (THA) requires preoperatively identifying in situ implants, a time-consuming and sometimes unachievable task. Although deep learning (DL) tools have been developed to automate this process, existing approaches classify only a few femoral and no acetabular components, work only on anterior-posterior (AP) radiographs, and do not report prediction uncertainty or flag outlier data. METHODS: This study introduces the Total Hip Arthroplasty Automated Implant Detector (THA-AID), a DL tool trained on 241,419 radiographs that identifies common designs of 20 femoral and 8 acetabular components from AP, lateral, or oblique views and reports prediction uncertainty using conformal prediction and outlier detection using a custom framework. We evaluated THA-AID using internal, external, and out-of-domain test sets and compared its performance with human experts. RESULTS: THA-AID achieved internal test set accuracies of 98.9% for both femoral and acetabular components with no significant differences based on radiographic view. The femoral classifier also achieved 97.0% accuracy on the external test set. Adding conformal prediction increased true label prediction by 0.1% for acetabular and 0.7 to 0.9% for femoral components. More than 99% of out-of-domain and >89% of in-domain outlier data were correctly identified by THA-AID. CONCLUSIONS: THA-AID is an automated tool for implant identification from radiographs with exceptional performance on internal and external test sets and no decrement in performance based on radiographic view. Importantly, this is, to our knowledge, the first study in orthopedics to include uncertainty quantification and outlier detection for a DL model.
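The study's custom framework is not reproduced here; as a generic illustration of the technique named in the abstract, a split conformal prediction sketch with the standard 1 minus softmax-probability-of-the-true-class nonconformity score on hypothetical calibration data:

```python
import numpy as np

def conformal_threshold(cal_probs: np.ndarray, cal_labels: np.ndarray, alpha: float = 0.1) -> float:
    """Split conformal: nonconformity = 1 - p(true class); return the calibrated cutoff."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return float(np.quantile(scores, min(q, 1.0), method="higher"))

def prediction_sets(test_probs: np.ndarray, qhat: float) -> list:
    """All classes whose nonconformity stays below the cutoff (may contain >1 label)."""
    return [np.where(1.0 - p <= qhat)[0].tolist() for p in test_probs]

rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=200)      # hypothetical softmax outputs
cal_labels = rng.integers(0, 5, size=200)
qhat = conformal_threshold(cal_probs, cal_labels)
print(prediction_sets(rng.dirichlet(np.ones(5), size=3), qhat))
```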


Arthroplasty, Replacement, Hip , Deep Learning , Hip Prosthesis , Humans , Uncertainty , Acetabulum/surgery , Retrospective Studies
8.
Radiol Artif Intell ; 5(6): e230085, 2023 Nov.
Article En | MEDLINE | ID: mdl-38074777

Radiographic markers contain protected health information that must be removed before public release. This work presents a deep learning algorithm that localizes radiographic markers and selectively removes them to enable de-identified data sharing. The authors annotated 2000 hip and pelvic radiographs to train an object detection computer vision model. Data were split into training, validation, and test sets at the patient level. Extracted markers were then characterized using an image processing algorithm, and potentially useful markers (eg, "L" and "R") without identifying information were retained. The model achieved an area under the precision-recall curve of 0.96 on the internal test set. The de-identification accuracy was 100% (400 of 400), with a de-identification false-positive rate of 1% (eight of 632) and a retention accuracy of 93% (359 of 386) for laterality markers. The algorithm was further validated on an external dataset of chest radiographs, achieving a de-identification accuracy of 96% (221 of 231). After fine-tuning the model on 20 images from the external dataset to investigate the potential for improvement, a 99.6% (230 of 231, P = .04) de-identification accuracy and decreased false-positive rate of 5% (26 of 512) were achieved. These results demonstrate the effectiveness of a two-pass approach in image de-identification. Keywords: Conventional Radiography, Skeletal-Axial, Thorax, Experimental Investigations, Supervised Learning, Transfer Learning, Convolutional Neural Network (CNN) Supplemental material is available for this article. © RSNA, 2023 See also the commentary by Chang and Li in this issue.
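A sketch of the selective-removal step in spirit: given bounding boxes from an upstream detector and the text each marker was characterized as, laterality markers are kept and everything else is blacked out. The box format, helper name, and detector itself are assumptions, not the published pipeline.

```python
import numpy as np

KEEP = {"L", "R"}  # laterality markers retained for clinical utility

def redact_markers(image: np.ndarray, detections: list) -> np.ndarray:
    """detections: list of (x1, y1, x2, y2, text) from a detector + characterizer."""
    out = image.copy()
    for x1, y1, x2, y2, text in detections:
        if text.upper() not in KEEP:             # only PHI-bearing markers are removed
            out[y1:y2, x1:x2] = 0                # black out the marker region
    return out

radiograph = np.full((512, 512), 128, dtype=np.uint8)           # hypothetical image
boxes = [(10, 10, 60, 40, "L"), (400, 20, 500, 60, "JD 1987")]  # hypothetical detections
clean = redact_markers(radiograph, boxes)
```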

9.
Orthop J Sports Med ; 11(12): 23259671231215820, 2023 Dec.
Article En | MEDLINE | ID: mdl-38107846

Background: An increased posterior tibial slope (PTS) corresponds with an increased risk of graft failure after anterior cruciate ligament (ACL) reconstruction (ACLR). Validated methods of manual PTS measurements are subject to potential interobserver variability and can be inefficient on large datasets. Purpose/Hypothesis: To develop a deep learning artificial intelligence technique for automated PTS measurement from standard lateral knee radiographs. It was hypothesized that this deep learning tool would be able to measure the PTS on a high volume of radiographs expeditiously and that these measurements would be similar to previously validated manual measurements. Study Design: Cohort study (diagnosis); Level of evidence, 2. Methods: A deep learning U-Net model was developed on a cohort of 300 postoperative short-leg lateral radiographs from patients who underwent ACLR to segment the tibial shaft, tibial joint surface, and tibial tuberosity. The model was trained using a random 80/20 train-validation split. Masks for training images were manually segmented, and the model was trained for 400 epochs. An image processing pipeline was then deployed to annotate and measure the PTS using the predicted segmentation masks. Finally, the performance of this combined pipeline was compared with human measurements performed by 2 study personnel using a previously validated manual technique for measuring the PTS on short-leg lateral radiographs on an independent test set consisting of both pre- and postoperative images. Results: The U-Net semantic segmentation model achieved a mean Dice similarity coefficient of 0.885 on the validation cohort. The mean difference between the human-made and computer-vision measurements was 1.92° (σ = 2.81° [P = .24]). Extreme disagreements between the human and machine measurements, as defined by ≥5° differences, occurred <5% of the time. The model was incorporated into a web-based digital application front-end for demonstration purposes, which can measure a single uploaded image in Portable Network Graphics format in a mean time of 5 seconds. Conclusion: We developed an efficient and reliable deep learning computer vision algorithm to automate the PTS measurement on short-leg lateral knee radiographs. This tool, which demonstrated good agreement with human annotations, represents an effective clinical adjunct for measuring the PTS as part of the preoperative assessment of patients with ACL injuries.
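A geometric sketch of one way to turn predicted masks into a PTS angle: fit a principal axis to the tibial shaft mask, fit a direction to the joint-surface mask, and report the angle relative to the shaft perpendicular. The toy masks and exact angle convention are assumptions, not the study's image-processing pipeline.

```python
import numpy as np

def principal_axis(mask: np.ndarray) -> np.ndarray:
    """Unit vector of the dominant axis of a binary mask (PCA on pixel coordinates)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    return vt[0] / np.linalg.norm(vt[0])

def posterior_tibial_slope(shaft_mask: np.ndarray, joint_mask: np.ndarray) -> float:
    shaft = principal_axis(shaft_mask)               # long axis of the tibial shaft
    joint = principal_axis(joint_mask)               # direction of the joint surface
    cos = abs(float(np.dot(shaft, joint)))
    angle_to_shaft = np.degrees(np.arccos(np.clip(cos, 0.0, 1.0)))
    return 90.0 - angle_to_shaft                     # slope vs. the shaft perpendicular

# Hypothetical masks: a vertical shaft and a joint surface tilted by ~8.5 degrees
shaft = np.zeros((200, 200), dtype=bool); shaft[60:200, 95:105] = True
joint = np.zeros((200, 200), dtype=bool)
for x in range(60, 140):
    joint[50 + round(0.15 * (x - 60)), x] = True
print(round(posterior_tibial_slope(shaft, joint), 1))   # ~8.5
```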

10.
Comput Methods Programs Biomed ; 242: 107832, 2023 Dec.
Article En | MEDLINE | ID: mdl-37778140

BACKGROUND: Medical image analysis pipelines often involve segmentation, which requires a large amount of annotated training data that is time-consuming and costly to produce. To address this issue, we proposed leveraging generative models to achieve few-shot image segmentation. METHODS: We trained a denoising diffusion probabilistic model (DDPM) on 480,407 pelvis radiographs to generate 256 × 256 px synthetic images. The DDPM was conditioned on demographic and radiologic characteristics and was rigorously validated by domain experts and objective image quality metrics (Fréchet inception distance [FID] and inception score [IS]). For the next step, three landmarks (greater trochanter [GT], lesser trochanter [LT], and obturator foramen [OF]) were annotated on 45 real-patient radiographs; 25 for training and 20 for testing. To extract features, each image was passed through the pre-trained DDPM at three timesteps, and for each pass, features from specific blocks were extracted. The features were concatenated with the real image to form an image with 4225 channels. The feature-set was broken into random patches, which were fed to a U-Net. Dice Similarity Coefficient (DSC) was used to compare the performance with a vanilla U-Net trained on radiographs. RESULTS: Expert accuracy was 57.5% in determining real versus generated images, while the model reached an FID = 7.2 and IS = 210. The segmentation UNet trained on the 20 feature-sets achieved a DSC of 0.90, 0.84, and 0.61 for OF, GT, and LT segmentation, respectively, which was at least 0.30 points higher than the naively trained model. CONCLUSION: We demonstrated the applicability of DDPMs as feature extractors, facilitating medical image segmentation with few annotated samples.
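The feature-extraction idea in miniature: noise the image to several timesteps, run a denoiser, capture intermediate decoder activations with forward hooks, upsample them, and concatenate them with the image as per-pixel features. The TinyDenoiser, noise schedule, timesteps, and block choice below are stand-ins for the pretrained DDPM, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDenoiser(nn.Module):
    """Stand-in for a pretrained DDPM U-Net; only the hook pattern matters here."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(1, 16, 3, stride=2, padding=1)
        self.dec1 = nn.Conv2d(16, 16, 3, padding=1)
        self.dec2 = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, x, t):
        h = F.relu(self.enc(x))
        h = F.relu(self.dec1(h))
        return self.dec2(F.interpolate(h, scale_factor=2))

def ddpm_features(model, image, timesteps=(50, 250, 500)):
    """Noise the image, run the denoiser, and collect decoder activations per timestep."""
    feats, captured = [], []
    handle = model.dec1.register_forward_hook(lambda m, i, o: captured.append(o))
    for t in timesteps:
        alpha = 1.0 - t / 1000.0                      # crude noise-schedule stand-in
        noisy = alpha ** 0.5 * image + (1 - alpha) ** 0.5 * torch.randn_like(image)
        captured.clear()
        with torch.no_grad():
            model(noisy, t)
        feats.append(F.interpolate(captured[-1], size=image.shape[-2:], mode="bilinear"))
    handle.remove()
    return torch.cat([image] + feats, dim=1)          # per-pixel feature stack

img = torch.randn(1, 1, 64, 64)
features = ddpm_features(TinyDenoiser(), img)
print(features.shape)                                 # torch.Size([1, 49, 64, 64])
```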


Benchmarking , Bisacodyl , Humans , Diffusion , Femur , Image Processing, Computer-Assisted
11.
J Arthroplasty ; 38(10): 1943-1947, 2023 10.
Article En | MEDLINE | ID: mdl-37598784

Electronic health records have facilitated the extraction and analysis of a vast amount of data with many variables for clinical care and research. Conventional regression-based statistical methods may not capture all the complexities in high-dimensional data analysis. Therefore, researchers are increasingly using machine learning (ML)-based methods to better handle these more challenging datasets for the discovery of hidden patterns in patients' data and for classification and predictive purposes. This article describes commonly used ML methods in structured data analysis with examples in orthopedic surgery. We present practical considerations in starting an ML project and appraising published studies in this field.
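A minimal example of the kind of tabular workflow such a primer surveys, using scikit-learn on invented data; the features and outcome are synthetic and carry no clinical meaning.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.normal(65, 10, n),                 # hypothetical "age" feature
    rng.normal(29, 5, n),                  # hypothetical "BMI" feature
    rng.integers(0, 2, n),                 # hypothetical comorbidity flag
])
y = (0.03 * X[:, 0] + 0.05 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 1, n) > 4.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```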


Electronic Health Records , Machine Learning , Humans
12.
J Arthroplasty ; 38(10): 1938-1942, 2023 10.
Article En | MEDLINE | ID: mdl-37598786

The growth of artificial intelligence combined with the collection and storage of large amounts of data in the electronic medical record collection has created an opportunity for orthopedic research and translation into the clinical environment. Machine learning (ML) is a type of artificial intelligence tool well suited for processing the large amount of available data. Specific areas of ML frequently used by orthopedic surgeons performing total joint arthroplasty include tabular data analysis (spreadsheets), medical imaging processing, and natural language processing (extracting concepts from text). Previous studies have discussed models able to identify fractures in radiographs, identify implant type in radiographs, and determine the stage of osteoarthritis based on walking analysis. Despite the growing popularity of ML, there are limitations including its reliance on "good" data, potential for overfitting, long life cycle for creation, and ability to only perform one narrow task. This educational article will further discuss a general overview of ML, discussing these challenges and including examples of successfully published models.


Orthopedic Procedures , Orthopedics , Humans , Artificial Intelligence , Machine Learning , Natural Language Processing
13.
Radiology ; 308(2): e222217, 2023 08.
Article En | MEDLINE | ID: mdl-37526541

In recent years, deep learning (DL) has shown impressive performance in radiologic image analysis. However, for a DL model to be useful in a real-world setting, its confidence in a prediction must also be known. Each DL model's output has an estimated probability, and these estimated probabilities are not always reliable. Uncertainty represents the trustworthiness (validity) of estimated probabilities. The higher the uncertainty, the lower the validity. Uncertainty quantification (UQ) methods determine the uncertainty level of each prediction. Predictions made without UQ methods are generally not trustworthy. By implementing UQ in medical DL models, users can be alerted when a model does not have enough information to make a confident decision. Consequently, a medical expert could reevaluate the uncertain cases, which would eventually lead to gaining more trust when using a model. This review focuses on recent trends using UQ methods in DL radiologic image analysis within a conceptual framework. Also discussed in this review are potential applications, challenges, and future directions of UQ in DL radiologic image analysis.
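One widely used UQ recipe in this space is Monte Carlo dropout: keep dropout active at inference, average the softmax outputs over repeated passes, and report predictive entropy as the uncertainty score. The sketch below is a generic illustration on a toy network, not a method drawn from the review itself.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Dropout(0.3), nn.Linear(32, 2))

def mc_dropout_predict(model, x, n_samples: int = 30):
    """Keep dropout stochastic, average softmax over samples, report predictive entropy."""
    model.train()                                   # train mode keeps dropout active
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy                      # high entropy -> flag for expert review

x = torch.randn(4, 64)                              # hypothetical feature vectors
probs, uncertainty = mc_dropout_predict(model, x)
print(uncertainty)
```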


Deep Learning , Radiology , Humans , Uncertainty , Image Processing, Computer-Assisted
14.
J Arthroplasty ; 38(10): 1954-1958, 2023 10.
Article En | MEDLINE | ID: mdl-37633507

Image data has grown exponentially as systems have increased their ability to collect and store it. Unfortunately, there are limits to human resources both in time and knowledge to fully interpret and manage that data. Computer Vision (CV) has grown in popularity as a discipline for better understanding visual data. Computer Vision has become a powerful tool for imaging analytics in orthopedic surgery, allowing computers to evaluate large volumes of image data with greater nuance than previously possible. Nevertheless, even with the growing number of uses in medicine, literature on the fundamentals of CV and its implementation is mainly oriented toward computer scientists rather than clinicians, rendering CV unapproachable for most orthopedic surgeons as a tool for clinical practice and research. The purpose of this article is to summarize and review the fundamental concepts of CV application for the orthopedic surgeon and musculoskeletal researcher.


Orthopedic Procedures , Orthopedics , Humans , Arthroplasty , Computers
15.
Article En | MEDLINE | ID: mdl-37488326

Few studies have engaged in data-driven investigations of the presence, or frequency, of what could be considered retaliatory assessor behaviour in Multi-source Feedback (MSF) systems. In this study, the authors explored how assessors scored others if, before assessing others, they received their own assessment score. The authors examined assessments from an established MSF system in which all clinical team members - medical students, interns, residents, fellows, and supervisors - anonymously assessed each other. The authors identified assessments in which an assessor (i.e., any team member providing a score to another) gave an aberrant score to another individual. An aberrant score was defined as one that was more than two standard deviations from the assessment receiver's average score. Assessors who gave aberrant scores were categorized according to (1) whether they had received a score from another individual in the MSF system before assessing and (2) whether the score they received was aberrant. The authors used a multivariable logistic regression model to investigate the association between the type of score received and the type of score given by that same individual. In total, 367 unique assessors provided 6091 scores on the performance of 484 unique individuals. Aberrant scores were identified in 250 forms (4.1%). The chances of giving an aberrant score were 2.3 times higher for those who had received a score, compared to those who had not (odds ratio 2.30, 95% CI: 1.54-3.44, P < 0.001). Individuals who had received an aberrant score were also 2.17 times more likely to give an aberrant score to others compared to those who had received a non-aberrant score (odds ratio 2.17, 95% CI: 1.39-3.39, P < 0.005) after adjusting for all other variables. This study documents an association between receiving scores within an anonymous multi-source feedback (MSF) system and providing aberrant scores to team members. These findings suggest that care must be taken in designing MSF systems to protect against potential downstream consequences of providing and receiving anonymous feedback.
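A sketch of the two analytic steps described: flag a score as aberrant when it lies more than two standard deviations from the receiver's mean, then relate giving an aberrant score to having received one. The toy records, single predictor, and use of scikit-learn's logistic regression (rather than the authors' multivariable model) are simplifications for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def flag_aberrant(df: pd.DataFrame) -> pd.DataFrame:
    """Mark scores more than 2 SD away from the receiver's mean score."""
    stats = df.groupby("receiver")["score"].agg(["mean", "std"])
    df = df.join(stats, on="receiver")
    df["aberrant_given"] = (df["score"] - df["mean"]).abs() > 2 * df["std"]
    return df

# Hypothetical MSF records: scores given to receiver "A", plus a flag for whether
# the assessor had already received their own score before assessing.
df = pd.DataFrame({
    "receiver": ["A"] * 12,
    "score": [8.0, 8.1, 7.9, 8.2, 8.0, 7.8, 8.1, 8.0, 7.9, 8.2, 8.1, 3.0],
    "assessor_received_score": [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
})
df = flag_aberrant(df)

# Toy association between receiving a score and giving an aberrant one
model = LogisticRegression().fit(df[["assessor_received_score"]], df["aberrant_given"])
print(int(df["aberrant_given"].sum()), "aberrant score(s); odds-ratio estimate:",
      round(float(np.exp(model.coef_[0][0])), 2))
```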

16.
J Arthroplasty ; 38(10): 2024-2031.e1, 2023 10.
Article En | MEDLINE | ID: mdl-37236288

BACKGROUND: Automatic methods for labeling and segmenting pelvis structures can improve the efficiency of clinical and research workflows and reduce the variability introduced with manual labeling. The purpose of this study was to develop a single deep learning model to annotate certain anatomical structures and landmarks on antero-posterior (AP) pelvis radiographs. METHODS: A total of 1,100 AP pelvis radiographs were manually annotated by 3 reviewers. These images included a mix of preoperative and postoperative images as well as a mix of AP pelvis and hip images. A convolutional neural network was trained to segment 22 different structures (7 points, 6 lines, and 9 shapes). Dice score, which measures overlap between model output and ground truth, was calculated for the shapes and lines structures. Euclidean distance error was calculated for point structures. RESULTS: Dice score averaged across all images in the test set was 0.88 and 0.80 for the shape and line structures, respectively. For the 7-point structures, average distance between real and automated annotations ranged from 1.9 mm to 5.6 mm, with all averages falling below 3.1 mm except for the structure labeling the center of the sacrococcygeal junction, where performance was low for both human and machine-produced labels. Blinded qualitative evaluation of human and machine produced segmentations did not reveal any drastic decrease in performance of the automatic method. CONCLUSION: We present a deep learning model for automated annotation of pelvis radiographs that flexibly handles a variety of views, contrasts, and operative statuses for 22 structures and landmarks.
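The two reported metrics in minimal form: Dice overlap for shape and line structures, and Euclidean landmark error converted to millimetres with the pixel spacing. The masks, points, and spacing below are hypothetical.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """2*|A∩B| / (|A|+|B|) for binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def landmark_error_mm(pred_xy, truth_xy, pixel_spacing_mm: float) -> float:
    """Euclidean distance between predicted and ground-truth points, in millimetres."""
    return float(np.linalg.norm(np.subtract(pred_xy, truth_xy)) * pixel_spacing_mm)

pred = np.zeros((100, 100), bool); pred[20:60, 20:60] = True     # hypothetical model output
truth = np.zeros((100, 100), bool); truth[25:65, 25:65] = True   # hypothetical annotation
print(round(dice_score(pred, truth), 3))                         # ~0.766
print(landmark_error_mm((50, 52), (48, 49), pixel_spacing_mm=0.6))
```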


Deep Learning , Humans , Radiography , Neural Networks, Computer , Pelvis/diagnostic imaging , Postoperative Period
18.
J Arthroplasty ; 38(7S): S2-S10, 2023 07.
Article En | MEDLINE | ID: mdl-36933678

BACKGROUND: Many risk factors have been described for periprosthetic femur fracture (PPFFx) following total hip arthroplasty (THA), yet a patient-specific risk assessment tool remains elusive. The purpose of this study was to develop a high-dimensional, patient-specific risk-stratification nomogram that allows dynamic risk modification based on operative decisions. METHODS: We evaluated 16,696 primary nononcologic THAs performed between 1998 and 2018. During a mean 6-year follow-up, 558 patients (3.3%) sustained a PPFFx. Patients were characterized by individual natural language processing-assisted chart review on nonmodifiable factors (demographics, THA indication, and comorbidities), and modifiable operative decisions (femoral fixation [cemented/uncemented], surgical approach [direct anterior, lateral, and posterior], and implant type [collared/collarless]). Multivariable Cox regression models and nomograms were developed with PPFFx as a binary outcome at 90 days, 1 year, and 5 years, postoperatively. RESULTS: Patient-specific PPFFx risk based on comorbid profile was wide-ranging from 0.4-18% at 90 days, 0.4%-20% at 1 year, and 0.5%-25% at 5 years. Among 18 evaluated patient factors, 7 were retained in multivariable analyses. The 4 significant nonmodifiable factors included the following: women (hazard ratio (HR) = 1.6), older age (HR = 1.2 per 10 years), diagnosis of osteoporosis or use of osteoporosis medications (HR = 1.7), and indication for surgery other than osteoarthritis (HR = 2.2 for fracture, HR = 1.8 for inflammatory arthritis, HR = 1.7 for osteonecrosis). The 3 modifiable surgical factors were included as follows: uncemented femoral fixation (HR = 2.5), collarless femoral implants (HR = 1.3), and surgical approach other than direct anterior (lateral HR = 2.9, posterior HR = 1.9). CONCLUSION: This patient-specific PPFFx risk calculator demonstrated a wide-ranging risk based on comorbid profile and enables surgeons to quantify risk mitigation based on operative decisions. LEVEL OF EVIDENCE: Level III, Prognostic.
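A hedged sketch of the modeling step using the lifelines library: fit a multivariable Cox model and read a patient-specific cumulative risk off the survival curve at 90 days, 1 year, and 5 years. The covariates and data are invented and the event indicator is random, so the fitted numbers only illustrate the workflow, not the published nomogram.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "age_per_10yr": rng.normal(6.5, 1.0, n),          # age divided by 10
    "osteoporosis": rng.integers(0, 2, n),
    "uncemented": rng.integers(0, 2, n),
    "followup_days": rng.exponential(1500, n).clip(1, 3650),
    "ppffx": rng.integers(0, 2, n),                   # toy fracture event indicator
})

cph = CoxPHFitter().fit(df, duration_col="followup_days", event_col="ppffx")
print(cph.hazard_ratios_)                             # exp(coef) per factor

# Patient-specific risk at 90 days, 1 year, and 5 years for one hypothetical patient
patient = pd.DataFrame([{"female": 1, "age_per_10yr": 7.5,
                         "osteoporosis": 1, "uncemented": 1}])
surv = cph.predict_survival_function(patient, times=[90, 365, 1825])
print(1.0 - surv)                                     # cumulative fracture risk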


Arthroplasty, Replacement, Hip , Awards and Prizes , Femoral Fractures , Hip Prosthesis , Periprosthetic Fractures , Humans , Female , Arthroplasty, Replacement, Hip/adverse effects , Arthroplasty, Replacement, Hip/methods , Periprosthetic Fractures/epidemiology , Periprosthetic Fractures/etiology , Periprosthetic Fractures/surgery , Hip Prosthesis/adverse effects , Reoperation , Femoral Fractures/epidemiology , Femoral Fractures/etiology , Femoral Fractures/surgery , Risk Factors , Retrospective Studies
19.
J Digit Imaging ; 36(3): 837-846, 2023 06.
Article En | MEDLINE | ID: mdl-36604366

Glioblastoma (GBM) is the most common primary malignant brain tumor in adults. The standard treatment for GBM consists of surgical resection followed by concurrent chemoradiotherapy and adjuvant temozolomide. O-6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status is an important prognostic biomarker that predicts the response to temozolomide and guides treatment decisions. At present, the only reliable way to determine MGMT promoter methylation status is through the analysis of tumor tissues. Considering the complications of the tissue-based methods, an imaging-based approach is preferred. This study aimed to compare three different deep learning-based approaches for predicting MGMT promoter methylation status. We obtained 576 T2-weighted images (T2WI) with their corresponding tumor masks and MGMT promoter methylation status from the Brain Tumor Segmentation (BraTS) 2021 dataset. We developed three different models: voxel-wise, slice-wise, and whole-brain. For voxel-wise classification, methylated and unmethylated tumor voxels were labeled 1 and 2, respectively, with background set to 0. We converted each T2WI into 32 × 32 × 32 patches. We trained a 3D-Vnet model for tumor segmentation. After inference, we constructed the whole brain volume based on the patch coordinates. The final prediction of MGMT methylation status was made by majority voting between the predicted voxel values of the biggest connected component. For slice-wise classification, we trained an object detection model for tumor detection and MGMT methylation status prediction, then for final prediction, we used majority voting. For the whole-brain approach, we trained a 3D DenseNet121 for prediction. Whole-brain, slice-wise, and voxel-wise accuracies were 65.42% (SD 3.97%), 61.37% (SD 1.48%), and 56.84% (SD 4.38%), respectively.
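The voxel-wise decision rule in isolation: keep the largest connected component of the predicted tumor and majority-vote its voxel labels (1 = methylated, 2 = unmethylated). The prediction volume below is synthetic, and scipy's connected-component labeling stands in for the authors' patch-reconstruction step.

```python
import numpy as np
from scipy import ndimage

def mgmt_vote(pred: np.ndarray) -> str:
    """pred: voxel labels, 0 = background, 1 = methylated, 2 = unmethylated."""
    tumor = pred > 0
    labeled, n = ndimage.label(tumor)                    # connected components
    if n == 0:
        return "no tumor detected"
    sizes = ndimage.sum(tumor, labeled, index=range(1, n + 1))
    biggest = labeled == (int(np.argmax(sizes)) + 1)     # largest component only
    votes = pred[biggest]
    return "methylated" if (votes == 1).sum() >= (votes == 2).sum() else "unmethylated"

vol = np.zeros((64, 64, 64), dtype=np.uint8)             # synthetic prediction volume
vol[20:30, 20:30, 20:30] = 1                             # large component, mostly label 1
vol[22:24, 22:24, 22:24] = 2
vol[50:52, 50:52, 50:52] = 2                             # small spurious component
print(mgmt_vote(vol))                                    # methylated
```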


Brain Neoplasms , Deep Learning , Glioblastoma , Adult , Humans , Glioblastoma/diagnostic imaging , Glioblastoma/genetics , Glioblastoma/pathology , Temozolomide/therapeutic use , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/genetics , Brain Neoplasms/pathology , DNA Methylation , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , O(6)-Methylguanine-DNA Methyltransferase/genetics , DNA Modification Methylases/genetics , Tumor Suppressor Proteins/genetics , DNA Repair Enzymes/genetics
20.
J Arthroplasty ; 38(10): 2037-2043.e1, 2023 10.
Article En | MEDLINE | ID: mdl-36535448

BACKGROUND: In this work, we applied and validated an artificial intelligence technique known as generative adversarial networks (GANs) to create large volumes of high-fidelity synthetic anteroposterior (AP) pelvis radiographs that can enable deep learning (DL)-based image analyses, while ensuring patient privacy. METHODS: AP pelvis radiographs with native hips were gathered from an institutional registry between 1998 and 2018. The data was used to train a model to create 512 × 512 pixel synthetic AP pelvis images. The network was trained on 25 million images produced through augmentation. A set of 100 random images (50/50 real/synthetic) was evaluated by 3 orthopaedic surgeons and 2 radiologists to discern real versus synthetic images. Two models (joint localization and segmentation) were trained using synthetic images and tested on real images. RESULTS: The final model was trained on 37,640 real radiographs (16,782 patients). In a computer assessment of image fidelity, the final model achieved an "excellent" rating. In a blinded review of paired images (1 real, 1 synthetic), orthopaedic surgeon reviewers were unable to correctly identify which image was synthetic (accuracy = 55%, Kappa = 0.11), highlighting synthetic image fidelity. The synthetic and real images showed equivalent performance when they were assessed by established DL models. CONCLUSION: This work shows the ability to use a DL technique to generate a large volume of high-fidelity synthetic pelvis images not discernible from real imaging by computers or experts. These images can be used for cross-institutional sharing and model pretraining, further advancing the performance of DL models without risk to patient data safety. LEVEL OF EVIDENCE: Level III.
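A compact, generic DCGAN-style sketch of the adversarial training step behind such an image synthesizer; the architecture, 64 × 64 image size, and hyperparameters are placeholders in PyTorch, not the authors' model or training setup.

```python
import torch
import torch.nn as nn

G = nn.Sequential(                                   # latent vector -> 64x64 image
    nn.ConvTranspose2d(100, 128, 4, 1, 0), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(32, 1, 4, 4, 0), nn.Tanh(),
)
D = nn.Sequential(                                   # image -> real/fake logit
    nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 16), nn.Flatten(),
)
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

real = torch.rand(8, 1, 64, 64) * 2 - 1              # stand-in for real radiographs
fake = G(torch.randn(8, 100, 1, 1))

# Discriminator update: push real toward 1 and generated images toward 0
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator update: try to make the discriminator call the fakes real
g_loss = bce(D(fake), torch.ones(8, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```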


Deep Learning , Humans , Artificial Intelligence , Privacy , Image Processing, Computer-Assisted/methods , Pelvis/diagnostic imaging
...