Results 1 - 4 of 4
1.
Vet Radiol Ultrasound ; 65(4): 417-428, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38668682

ABSTRACT

Thoracic radiographs are an essential diagnostic tool in companion animal medicine and are frequently used as part of routine workups in patients presenting for coughing, respiratory distress, and cardiovascular disease, and for staging of neoplasia. Quality control is a critical aspect of radiology practice in preventing misdiagnosis and ensuring consistent, accurate, and reliable diagnostic imaging. Implementing an effective quality control procedure in radiology can improve patient outcomes, facilitate clinical decision-making, and decrease healthcare costs. In this study, a machine learning-based quality classification model is proposed for canine and feline thoracic radiographs captured in both ventrodorsal and dorsoventral positions. The quality classification problem was divided into collimation, positioning, and exposure, and an automatic classification method based on deep learning and machine learning was proposed for each. We utilized a dataset of 899 radiographs of dogs and cats. Evaluation using fivefold cross-validation yielded an F1 score of 91.33 (95% CI: 88.37-94.29) and an AUC of 91.10 (95% CI: 88.16-94.03). Results indicate that the proposed automatic quality classification has the potential to be implemented in radiology clinics to improve radiograph quality and reduce nondiagnostic images.
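The evaluation protocol the abstract describes, fivefold cross-validation with a mean F1 and AUC reported alongside a 95% confidence interval, can be sketched roughly as follows. The metric implementations and the per-fold scores are illustrative assumptions, not taken from the study.

```python
import math

def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def auc_score(y_true, y_score):
    """AUC as the probability that a random positive outranks a random
    negative (ties count half)."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def mean_ci95(fold_scores):
    """Mean of per-fold scores with a normal-approximation 95% CI."""
    n = len(fold_scores)
    mean = sum(fold_scores) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in fold_scores) / (n - 1))
    half = 1.96 * sd / math.sqrt(n)
    return mean, mean - half, mean + half

# Hypothetical per-fold F1 scores from a five-fold run:
fold_f1 = [0.89, 0.92, 0.93, 0.90, 0.926]
mean, lo, hi = mean_ci95(fold_f1)
```

Reporting the interval across folds, rather than a single pooled score, is what lets the abstract quote F1 and AUC with their 95% CIs.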


Subject(s)
Cat Diseases , Machine Learning , Radiography, Thoracic , Animals , Cats , Dogs , Radiography, Thoracic/veterinary , Radiography, Thoracic/standards , Cat Diseases/diagnostic imaging , Quality Control , Dog Diseases/diagnostic imaging
2.
J Magn Reson Imaging ; 53(6): 1632-1645, 2021 Jun.
Article in English | MEDLINE | ID: mdl-32410356

ABSTRACT

Prostate MRI is reported in clinical practice using the Prostate Imaging Reporting and Data System (PI-RADS). PI-RADS aims to standardize, as much as possible, the acquisition, interpretation, reporting, and ultimately the performance of prostate MRI. PI-RADS relies mainly on subjective analysis of MR imaging findings, with very few incorporated quantitative features. The main shortcomings of PI-RADS are low-to-moderate interobserver agreement and modest accuracy for detection of clinically significant tumors in the transition zone. A more quantitative analysis of prostate MR imaging findings is therefore of interest. Quantitative MR imaging features, including tumor size and volume, tumor length of capsular contact, tumor apparent diffusion coefficient (ADC) metrics, tumor T1 and T2 relaxation times, tumor shape, and texture analyses, have all shown value for improving characterization of observations detected on prostate MRI and for differentiating between tumors by their pathological grade and stage. Quantitative analysis may therefore improve diagnostic accuracy for cancer detection and could be a noninvasive means to predict patient prognosis and guide management. Since quantitative analysis of prostate MRI is less dependent on an individual user's assessment, it could also improve interobserver agreement. Semi- and fully automated analysis of quantitative (radiomic) MRI features using artificial neural networks represents the next step in quantitative prostate MRI and is now being actively studied. Validation of quantitative prostate MRI through high-quality multicenter studies assessing diagnostic accuracy for detection of clinically significant prostate cancer is needed.
This article reviews advances in quantitative prostate MRI, highlighting the strengths and limitations of existing and emerging techniques and discussing the opportunities and challenges of quantitative assessment of prostate MRI in clinical practice. LEVEL OF EVIDENCE: 5. TECHNICAL EFFICACY: Stage 2.
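As a minimal illustration of one quantitative feature this review discusses, the apparent diffusion coefficient (ADC) can be estimated from a two-point monoexponential fit of the diffusion-weighted signal, ADC = ln(S0/Sb) / (b - b0). The b-values and signal intensities below are assumed for illustration only.

```python
import math

def adc_two_point(s0, sb, b0=0.0, b=800.0):
    """Two-point ADC estimate (mm^2/s) from DWI signal s0 at b-value b0
    and signal sb at b-value b (b-values in s/mm^2)."""
    return math.log(s0 / sb) / (b - b0)

# A voxel whose signal decays by a factor of exp(-0.8) between b = 0 and
# b = 800 s/mm^2 has ADC = 0.8 / 800 = 1.0e-3 mm^2/s.
adc = adc_two_point(1000.0, 1000.0 * math.exp(-0.8))
```

In practice ADC maps are fit by the scanner or analysis software across multiple b-values; this two-point form only shows where the number comes from.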


Subject(s)
Magnetic Resonance Imaging , Prostatic Neoplasms , Diffusion Magnetic Resonance Imaging , Humans , Male , Prostatic Neoplasms/diagnostic imaging , Retrospective Studies
3.
Article in English | MEDLINE | ID: mdl-39353461

ABSTRACT

BACKGROUND: The risk of biochemical recurrence (BCR) after radiotherapy for localized prostate cancer (PCa) varies widely within standard risk groups. There is a need for low-cost tools to predict recurrence more robustly and personalize therapy. Radiomic features from pretreatment MRI show potential as noninvasive biomarkers for BCR prediction. However, previous research has not fully combined radiomics with clinical and pathological data to predict BCR in PCa patients following radiotherapy. PURPOSE: This study aims to predict 5-year BCR using radiomics from pretreatment T2W MRI and clinical-pathological data in PCa patients treated with radiation therapy, and to develop a unified model compatible with both 1.5T and 3T MRI scanners. METHODS: A total of 150 T2W scans and the corresponding clinical parameters were preprocessed. Of these, 120 cases were used for training and validation, and 30 for testing. Four distinct machine learning models were developed: Model 1 used radiomics, Model 2 used clinical and pathological data, and Model 3 combined these using late fusion. Model 4 integrated radiomic and clinical-pathological data using early fusion. RESULTS: Model 1 achieved an AUC of 0.73, while Model 2 had an AUC of 0.64 for predicting outcomes in the 30 held-out test cases. Model 3, using late fusion, had an AUC of 0.69. Early fusion showed strong potential, with Model 4 reaching an AUC of 0.84. CONCLUSIONS: This study is the first to use a fusion technique for predicting BCR in PCa patients following radiotherapy, utilizing pretreatment T2W MR images and clinical-pathological data. The methodology improves predictive accuracy by fusing radiomics with clinical-pathological information, even with a relatively small dataset, and introduces the first unified model for both 1.5T and 3T MR images.
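The difference between the two fusion strategies the abstract compares can be sketched as follows. The feature vectors, probabilities, and equal weighting are illustrative assumptions, not details from the study.

```python
def early_fusion(radiomic_feats, clinical_feats):
    """Early fusion: concatenate per-modality feature vectors so that a
    single downstream model (Model 4 in the abstract) is trained on both
    modalities jointly."""
    return list(radiomic_feats) + list(clinical_feats)

def late_fusion(p_radiomic, p_clinical, w=0.5):
    """Late fusion: one model per modality (Models 1 and 2), whose
    predicted BCR probabilities are combined afterwards (Model 3).
    The equal weight w = 0.5 is an assumption for illustration."""
    return w * p_radiomic + (1.0 - w) * p_clinical

# Hypothetical case: two radiomic features plus age and Gleason score.
combined = early_fusion([0.12, 0.87], [65, 7])
p_bcr = late_fusion(0.8, 0.4)
```

Early fusion lets the model learn interactions between radiomic and clinical features, which may explain why combining features before training can outperform combining predictions afterwards.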

4.
J Med Imaging (Bellingham) ; 10(4): 044004, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37497375

ABSTRACT

Purpose: Thoracic radiographs are commonly used to evaluate patients with confirmed or suspected thoracic pathology. Proper patient positioning is more challenging in canine and feline radiography than in human radiography because of limited patient cooperation and greater variation in body shape. Improper patient positioning during radiograph acquisition can lead to misdiagnosis. Asymmetrical hemithoraces are one indication of obliquity, for which we propose an automatic classification method. Approach: We propose a hemithorax segmentation method based on convolutional neural networks and active contours. We used a U-Net model to segment the ribs and spine and then applied active contours to find the left and right hemithoraces. We then extracted features from the left and right hemithoraces to train an ensemble classifier comprising a support vector machine, gradient boosting, and a multi-layer perceptron. Five-fold cross-validation was used; thorax segmentation was evaluated by intersection over union (IoU), and symmetry classification was evaluated using precision, recall, area under the curve, and F1 score. Results: Symmetry classification on 900 radiographs yielded an F1 score of 82.8%. To test the robustness of the proposed thorax segmentation method to underexposure and overexposure, we synthetically corrupted properly exposed radiographs and evaluated the results using IoU. The model's IoU dropped by 2.1% for underexposure and 1.2% for overexposure. Conclusions: Our results indicate that the proposed thorax segmentation method is robust to poorly exposed radiographs. The method can be applied to human radiography with minimal changes.
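The IoU metric used above to score segmentation overlap can be sketched as follows. Representing a binary mask as a set of pixel coordinates is a simplification for illustration; the study's masks and any thresholds are not shown here.

```python
def iou(mask_a, mask_b):
    """Intersection over union of two binary masks, each given as a set
    of (row, col) pixel coordinates. Ranges from 0 (no overlap) to 1
    (identical masks)."""
    union = mask_a | mask_b
    if not union:
        return 1.0  # two empty masks agree perfectly, by convention
    return len(mask_a & mask_b) / len(union)

# Two masks sharing one of three total pixels give IoU = 1/3.
score = iou({(0, 0), (0, 1)}, {(0, 1), (0, 2)})
```

Reporting the drop in IoU on synthetically under- and overexposed copies of the same radiographs, as the abstract does, isolates the effect of exposure from other sources of variation.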
