Results 1 - 9 of 9
1.
Eur Radiol ; 33(7): 4589-4596, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36856841

ABSTRACT

OBJECTIVES: High breast density is a well-known risk factor for breast cancer. This study aimed to develop and adapt two (MLO, CC) deep convolutional neural networks (DCNN) for automatic breast density classification on synthetic 2D tomosynthesis reconstructions. METHODS: In total, 4605 synthetic 2D images (1665 patients, age: 57 ± 37 years) were labeled according to the ACR (American College of Radiology) density categories (A-D). Two DCNNs, each with 11 convolutional layers and 3 fully connected layers, were trained with 70% of the data, while 20% was used for validation. The remaining 10% served as a separate test dataset with 460 images (380 patients). All mammograms in the test dataset were read blinded by two radiologists (reader 1 with two and reader 2 with 11 years of dedicated experience in breast imaging), and their consensus served as the reference standard. Inter- and intra-reader reliabilities were assessed by calculating Cohen's kappa coefficients, and diagnostic accuracy measures of the automated classification were evaluated. RESULTS: The two models for the MLO and CC projections had a mean sensitivity of 80.4% (95% CI 72.2-86.9), a specificity of 89.3% (95% CI 85.4-92.3), and an accuracy of 89.6% (95% CI 88.1-90.9) in differentiating between ACR A/B and ACR C/D. DCNN-versus-human and inter-reader agreement were both "substantial" (Cohen's kappa: 0.61 versus 0.63). CONCLUSION: The DCNN allows accurate, standardized, and observer-independent classification of breast density based on the ACR BI-RADS system. KEY POINTS: • A DCNN performs on par with human experts in breast density assessment for synthetic 2D tomosynthesis reconstructions. • The proposed technique may be useful for accurate, standardized, and observer-independent breast density evaluation of tomosynthesis.
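The abstract does not include code; the following is a minimal, hypothetical PyTorch sketch of a classifier with 11 convolutional and 3 fully connected layers, as described above. Channel widths, pooling schedule, input size, and training details are assumptions, not the authors' architecture.

```python
# Hypothetical sketch of a DCNN with 11 convolutional and 3 fully connected
# layers for four-class ACR density classification (A-D). Layer widths,
# pooling schedule, and input size are assumptions, not taken from the paper.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, pool=False):
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
              nn.BatchNorm2d(out_ch),
              nn.ReLU(inplace=True)]
    if pool:
        layers.append(nn.MaxPool2d(2))
    return layers

class DensityDCNN(nn.Module):
    """One projection-specific model (trained separately for MLO and CC)."""
    def __init__(self, n_classes=4):
        super().__init__()
        chs = [1, 32, 32, 64, 64, 128, 128, 128, 256, 256, 256, 256]
        blocks = []
        for i in range(11):                  # 11 convolutional layers
            pool = (i % 2 == 1)              # downsample after every second conv (assumption)
            blocks += conv_block(chs[i], chs[i + 1], pool=pool)
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Sequential(     # 3 fully connected layers
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(256, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 64), nn.ReLU(inplace=True),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model_mlo = DensityDCNN()
logits = model_mlo(torch.randn(2, 1, 512, 512))  # batch of synthetic 2D views
print(logits.shape)                              # torch.Size([2, 4])
```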


Subjects
Breast Density; Breast Neoplasms; Humans; Young Adult; Adult; Middle Aged; Aged; Aged, 80 and over; Female; Observer Variation; Breast Neoplasms/diagnostic imaging; Mammography/methods; Neural Networks, Computer
2.
J Imaging ; 10(6)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38921624

ABSTRACT

BACKGROUND: After breast-conserving surgery (BCS), surgical clips indicate the tumor bed and, thereby, the most probable area for tumor relapse. The aim of this study was to investigate whether a U-Net-based deep convolutional neural network (dCNN) can be used to detect surgical clips in follow-up mammograms after BCS. METHODS: 884 mammograms and 517 tomosynthetic images depicting surgical clips and calcifications were manually segmented and classified. A U-Net-based segmentation network was trained with 922 images and validated with 394 images. An external test dataset consisting of 39 images was annotated by two radiologists with up to 7 years of experience in breast imaging. The network's performance was compared to that of the human readers using accuracy and inter-rater agreement (Cohen's kappa). RESULTS: The overall classification accuracy on the validation set after 45 epochs ranged between 88.2% and 92.6%, indicating that the model's performance is comparable to the decisions of a human reader. In 17.4% of cases, calcifications were misclassified as post-operative clips. The inter-rater reliability of the model compared to the radiologists showed substantial agreement (κ reader 1 = 0.72, κ reader 2 = 0.78), whereas the two readers compared to each other reached a Cohen's kappa of 0.84, showing almost perfect agreement. CONCLUSIONS: This study shows that surgical clips can be adequately identified by an AI technique. A potential application of the proposed technique is patient triage as well as the automatic exclusion of post-operative cases from PGMI (Perfect, Good, Moderate, Inadequate) evaluation, thus improving the quality-management workflow.
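For illustration, a minimal sketch of the agreement analysis described above, using scikit-learn's Cohen's kappa. The label arrays are made-up placeholders; in the study they would be per-image classifications (clip vs. calcification) from the model and the two radiologists.

```python
# Sketch of the accuracy and inter-rater agreement evaluation (Cohen's kappa).
# Labels below are invented placeholders for demonstration only.
from sklearn.metrics import accuracy_score, cohen_kappa_score

model_labels   = ["clip", "clip", "calcification", "clip", "calcification", "clip"]
reader1_labels = ["clip", "clip", "calcification", "clip", "clip",          "clip"]
reader2_labels = ["clip", "clip", "calcification", "clip", "calcification", "clip"]

print("accuracy vs. reader 1:", accuracy_score(reader1_labels, model_labels))
print("kappa model/reader 1:", cohen_kappa_score(model_labels, reader1_labels))
print("kappa model/reader 2:", cohen_kappa_score(model_labels, reader2_labels))
print("kappa reader 1/reader 2:", cohen_kappa_score(reader1_labels, reader2_labels))
```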

3.
Bioengineering (Basel) ; 11(6)2024 May 31.
Article in English | MEDLINE | ID: mdl-38927793

ABSTRACT

In DCE-MRI, the degree of contrast uptake in normal fibroglandular tissue, i.e., background parenchymal enhancement (BPE), is a crucial biomarker linked to breast cancer risk and treatment outcome. In accordance with the Breast Imaging Reporting and Data System (BI-RADS), it should be visually classified into four classes. The susceptibility of such an assessment to inter-reader variability highlights the urgent need for a standardized classification algorithm. In this retrospective study, the first post-contrast subtraction images of 27 healthy female subjects were included. BPE was classified slice-wise by two expert radiologists. Radiomic features were extracted from the segmented BPE, followed by dataset splitting and dimensionality reduction. The resulting latent representations were then used as inputs to a deep neural network classifying BPE into BI-RADS classes, and the network's predictions were explained at the radiomic-feature level with Shapley values. The deep neural network achieved a BPE classification accuracy of 84 ± 2% (p-value < 0.00001). Most misclassifications involved adjacent classes. Different radiomic features were decisive for the prediction of each BPE class, underscoring the complexity of the decision boundaries. A highly precise and explainable pipeline for BPE classification was thus achieved without user- or algorithm-dependent radiomic feature selection.
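A schematic sketch of the pipeline steps described above (radiomic feature matrix, dimensionality reduction, neural-network classifier). All data below is randomly generated, and the Shapley-value attribution used in the paper is replaced here by scikit-learn's permutation importance purely for brevity.

```python
# Schematic BPE classification pipeline: features -> PCA -> MLP classifier.
# Synthetic data only; permutation importance stands in for the paper's
# Shapley-value explanation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 100))      # 400 slices x 100 radiomic features (synthetic)
y = rng.integers(0, 4, size=400)     # BI-RADS-like BPE classes (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

pipeline = make_pipeline(
    StandardScaler(),
    PCA(n_components=16),            # latent representation of the features
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
pipeline.fit(X_train, y_train)
print("accuracy:", pipeline.score(X_test, y_test))

# Feature-level attribution (stand-in for the Shapley analysis in the paper)
importance = permutation_importance(pipeline, X_test, y_test, n_repeats=5, random_state=0)
print("most influential feature index:", importance.importances_mean.argmax())
```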

4.
Insights Imaging ; 14(1): 90, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37199794

ABSTRACT

OBJECTIVES: The aim of this study was to develop and validate a commercially available AI platform for the automatic determination of image quality in mammography and tomosynthesis based on a standardized set of features. MATERIALS AND METHODS: In this retrospective study, 11,733 mammograms and synthetic 2D reconstructions from tomosynthesis of 4200 patients from two institutions were analyzed by assessing the presence of seven features that affect image quality with regard to breast positioning. Deep learning was applied to train five dCNN models on features detecting the presence of anatomical landmarks and three dCNN models on localization features. The validity of the models was assessed by calculating the mean squared error on a test dataset and by comparison to readings by experienced radiologists. RESULTS: The accuracies of the dCNN models ranged between 93.0% for nipple visualization and 98.5% for the depiction of the pectoralis muscle in the CC view. Calculations based on the regression models allow precise measurements of the distances and angles of breast positioning on mammograms and synthetic 2D reconstructions from tomosynthesis. All models showed almost perfect agreement with human reading, with Cohen's kappa scores above 0.9. CONCLUSIONS: An AI-based quality-assessment system using a dCNN allows precise, consistent, and observer-independent rating of digital mammography and synthetic 2D reconstructions from tomosynthesis. Automation and standardization of quality assessment enable real-time feedback to technicians and radiologists, which should reduce the number of inadequate examinations according to PGMI (Perfect, Good, Moderate, Inadequate) criteria, reduce the number of recalls, and provide a dependable training platform for inexperienced technicians.
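Purely for illustration, a sketch of the kind of geometric positioning measure the localization models could feed into, and of the mean-squared-error check against reference annotations. The measure name, coordinates, and values are assumptions, not taken from the paper.

```python
# Illustrative positioning geometry: angle of the pectoralis edge from two
# predicted landmark points, plus the MSE of predicted vs. reference
# landmark coordinates. All numbers are invented placeholders.
import numpy as np

def pectoralis_angle(p_top, p_bottom):
    """Angle (degrees) of the pectoralis edge relative to the vertical image axis."""
    dx, dy = p_bottom[0] - p_top[0], p_bottom[1] - p_top[1]
    return np.degrees(np.arctan2(abs(dx), abs(dy)))

# Predicted vs. reference landmark coordinates (pixels) for a few images
predicted = np.array([[12.0, 480.0], [15.5, 470.0], [10.0, 495.0]])
reference = np.array([[11.0, 478.0], [16.0, 473.0], [12.0, 490.0]])

mse = np.mean((predicted - reference) ** 2)
print("landmark MSE [px^2]:", mse)
print("example pectoralis angle [deg]:", pectoralis_angle((30.0, 40.0), (10.0, 400.0)))
```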

5.
Clin Imaging ; 95: 28-36, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36603416

ABSTRACT

OBJECTIVE: In this study, we investigate the feasibility of a deep convolutional neural network (dCNN), trained with mammographic images, to detect and classify microcalcifications (MC) in breast-CT (BCT) images. METHODS: This retrospective single-center study was approved by the local ethics committee. 3518 icons generated from 319 mammograms were classified into three classes: "no MC" (1121), "probably benign MC" (1332), and "suspicious MC" (1065). A dCNN was trained (70% of the data), validated (20%), and tested on a "real-world" dataset (10%). The diagnostic performance of the dCNN was tested on a subset of 60 icons, generated from 30 mammograms and 30 breast-CT images, and compared to human reading. ROC analysis was used to calculate diagnostic performance. Moreover, colored probability maps for representative BCT images were calculated using a sliding-window approach. RESULTS: The dCNN reached an accuracy of 98.8% on the "real-world" dataset. On the subset of 60 icons, the accuracy was 100% for the mammographic images and, for the breast-CT images, 60% for "no MC", 80% for "probably benign MC", and 100% for "suspicious MC". The intra-class correlation between the dCNN and the readers was almost perfect (0.85). Kappa values between the two readers (0.93) and between each reader and the dCNN were almost perfect (reader 1: 0.85 and reader 2: 0.82). The sliding-window approach successfully detected suspicious MC with high image quality. The diagnostic performance of the dCNN in classifying benign and suspicious MC was excellent, with an AUC of 93.8% (95% CI 87.4%-100%). CONCLUSION: Deep convolutional networks can be used to detect and classify benign and suspicious MC in breast-CT images.
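A minimal sketch of the sliding-window idea used above to turn a patch ("icon") classifier into a probability map over a full image. The classifier here is a stand-in function; in the study it would be the trained dCNN returning probabilities for "no MC", "probably benign MC", and "suspicious MC".

```python
# Sliding-window probability map: classify overlapping patches and average
# the class probability over the covered pixels. classify_patch is a dummy
# stand-in for the trained dCNN.
import numpy as np

def classify_patch(patch):
    """Stand-in for the dCNN: returns pseudo-probabilities for 3 classes."""
    score = patch.mean()                       # brighter patches -> more "suspicious"
    probs = np.array([1.0 - score, 0.5 * score, 0.5 * score])
    return probs / probs.sum()

def probability_map(image, patch_size=64, stride=32, target_class=2):
    h, w = image.shape
    heatmap = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patch = image[y:y + patch_size, x:x + patch_size]
            p = classify_patch(patch)[target_class]
            heatmap[y:y + patch_size, x:x + patch_size] += p
            counts[y:y + patch_size, x:x + patch_size] += 1
    return heatmap / np.maximum(counts, 1)     # average overlapping windows

image = np.random.rand(256, 256)               # placeholder breast-CT slice
suspicious_map = probability_map(image)
print(suspicious_map.shape, suspicious_map.max())
```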


Subjects
Breast Diseases; Neural Networks, Computer; Humans; Retrospective Studies; Mammography/methods; Tomography, X-Ray Computed; ROC Curve
6.
Clin Imaging ; 93: 93-102, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36423483

ABSTRACT

OBJECTIVES: In this retrospective, single-center study we investigate the changes of radiomics features during dynamic breast MRI for healthy tissue compared to benign and malignant lesions. METHODS: 60 patients underwent breast MRI using a dynamic 3D gradient-echo sequence. Changes of 34 texture features (TF) in 30 benign and 30 malignant lesions were calculated for 5 dynamic datasets and the corresponding 4 subtraction datasets. Statistical analysis was performed with ANOVA, and systematic changes in the features were described by linear and polynomial regression models. RESULTS: ANOVA revealed significant differences (p < 0.05) between normal tissue and lesions in 13 TF, compared to 9 TF between benign and malignant lesions. Most TF showed significant differences in the early dynamic and subtraction datasets. TF associated with homogeneity were suitable for discriminating between healthy parenchyma and lesions, whereas run-length features were more suitable for discriminating between benign and malignant lesions. Run-length nonuniformity (RLN) was the only feature able to distinguish between all three classes, with an AUC of 88.3%. Characteristic changes were observed, with a systematic increase or decrease for most TF and mostly polynomial behavior. Slopes peaked earlier in malignant lesions than in benign lesions. Mean values of the coefficient of determination were higher for the subtraction sequences than for the dynamic sequences (benign: 0.98 vs 0.72; malignant: 0.94 vs 0.74). CONCLUSIONS: TF of breast lesions follow characteristic patterns during dynamic breast MRI, distinguishing benign from malignant lesions. Early dynamic and subtraction datasets are particularly suitable for texture analysis in breast MRI. Features associated with tissue homogeneity appear indicative of benign lesions.
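A minimal sketch of modelling a texture feature's change across the dynamic time points with linear and second-order polynomial fits and comparing the coefficient of determination (R²). The feature values are invented; in the study they would be, for example, run-length nonuniformity measured on the 5 dynamic or 4 subtraction datasets of one lesion.

```python
# Fit linear and quadratic models to a texture-feature time curve and
# report R^2. Values are invented placeholders.
import numpy as np

time_points = np.array([0, 1, 2, 3, 4], dtype=float)     # 5 dynamic datasets
feature = np.array([0.20, 0.65, 0.80, 0.78, 0.74])        # invented TF values

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

for degree in (1, 2):
    coeffs = np.polyfit(time_points, feature, deg=degree)
    fitted = np.polyval(coeffs, time_points)
    print(f"degree {degree}: R^2 = {r_squared(feature, fitted):.3f}")
```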


Subjects
Magnetic Resonance Imaging; Humans; Retrospective Studies; Radiography; Biomarkers
7.
Insights Imaging ; 14(1): 185, 2023 Nov 06.
Article in English | MEDLINE | ID: mdl-37932462

ABSTRACT

OBJECTIVES: Development of automated segmentation models enabling standardized volumetric quantification of fibroglandular tissue (FGT) from native volumes and background parenchymal enhancement (BPE) from subtraction volumes of dynamic contrast-enhanced breast MRI. Subsequent assessment of the developed models in the context of FGT and BPE Breast Imaging Reporting and Data System (BI-RADS)-compliant classification. METHODS: For the training and validation of attention U-Net models, data coming from a single 3.0-T scanner was used. For testing, additional data from 1.5-T scanner and data acquired in a different institution with a 3.0-T scanner was utilized. The developed models were used to quantify the amount of FGT and BPE in 80 DCE-MRI examinations, and a correlation between these volumetric measures and the classes assigned by radiologists was performed. RESULTS: To assess the model performance using application-relevant metrics, the correlation between the volumes of breast, FGT, and BPE calculated from ground truth masks and predicted masks was checked. Pearson correlation coefficients ranging from 0.963 ± 0.004 to 0.999 ± 0.001 were achieved. The Spearman correlation coefficient for the quantitative and qualitative assessment, i.e., classification by radiologist, of FGT amounted to 0.70 (p < 0.0001), whereas BPE amounted to 0.37 (p = 0.0006). CONCLUSIONS: Generalizable algorithms for FGT and BPE segmentation were developed and tested. Our results suggest that when assessing FGT, it is sufficient to use volumetric measures alone. However, for the evaluation of BPE, additional models considering voxels' intensity distribution and morphology are required. CRITICAL RELEVANCE STATEMENT: A standardized assessment of FGT density can rely on volumetric measures, whereas in the case of BPE, the volumetric measures constitute, along with voxels' intensity distribution and morphology, an important factor. KEY POINTS: • Our work contributes to the standardization of FGT and BPE assessment. • Attention U-Net can reliably segment intricately shaped FGT and BPE structures. • The developed models were robust to domain shift.
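A minimal sketch of the volume-based evaluation described above: compute FGT volumes from predicted and ground-truth masks and correlate them (Pearson), then correlate the predicted volumes with radiologist classes (Spearman). Masks, voxel size, and classes below are synthetic placeholders.

```python
# Volume extraction from binary masks plus Pearson/Spearman correlation.
# All inputs are randomly generated placeholders.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(1)
voxel_volume_ml = 0.001                                       # assumed voxel size

# 10 synthetic examinations: binary FGT masks of shape (32, 64, 64)
gt_masks = rng.random((10, 32, 64, 64)) > 0.7
pred_masks = gt_masks ^ (rng.random(gt_masks.shape) > 0.98)   # slightly perturbed copies

gt_volumes = gt_masks.sum(axis=(1, 2, 3)) * voxel_volume_ml
pred_volumes = pred_masks.sum(axis=(1, 2, 3)) * voxel_volume_ml
radiologist_classes = rng.integers(1, 5, size=10)             # BI-RADS-like classes a-d

r, _ = pearsonr(gt_volumes, pred_volumes)
rho, _ = spearmanr(pred_volumes, radiologist_classes)
print("Pearson r (ground-truth vs. predicted volumes):", r)
print("Spearman rho (predicted volume vs. class):", rho)
```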

8.
Diagnostics (Basel) ; 12(6)2022 May 29.
Article in English | MEDLINE | ID: mdl-35741157

ABSTRACT

The purpose of this study was to determine the feasibility of a deep convolutional neural network (dCNN) to accurately detect abnormal axillary lymph nodes on mammograms. In this retrospective study, 107 mammographic images in the mediolateral oblique projection from 74 patients were labeled into three classes: (1) "breast tissue", (2) "benign lymph nodes", and (3) "suspicious lymph nodes". Following data preprocessing, a dCNN model was trained and validated with 5385 images. Subsequently, the trained dCNN was tested on a "real-world" dataset and its performance was compared to that of human readers. For visualization, colored probability maps of the classification were calculated using a sliding-window approach. The accuracy was 98% for the training set and 99% for the validation set. Confusion matrices of the "real-world" dataset for the three classes, with the radiological reports as ground truth, yielded an accuracy of 98.51% for breast tissue, 98.63% for benign lymph nodes, and 95.96% for suspicious lymph nodes. The intraclass correlation between the dCNN and the readers was excellent (0.98), and kappa values were almost perfect (0.93-0.97). The colormaps successfully detected abnormal lymph nodes with excellent image quality. In this proof-of-principle study in a small patient cohort from a single institution, we found that deep convolutional networks can be trained with high accuracy and reliability to detect abnormal axillary lymph nodes on mammograms.
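A minimal sketch of the per-class evaluation against the radiological report as ground truth: a confusion matrix over the three classes and the resulting per-class recall. Labels below are invented placeholders.

```python
# Confusion matrix and per-class recall for the three-class problem.
# Labels are invented placeholders for demonstration only.
from sklearn.metrics import confusion_matrix

classes = ["breast tissue", "benign lymph node", "suspicious lymph node"]
y_true = ["breast tissue", "benign lymph node", "suspicious lymph node",
          "benign lymph node", "breast tissue", "suspicious lymph node"]
y_pred = ["breast tissue", "benign lymph node", "suspicious lymph node",
          "benign lymph node", "breast tissue", "benign lymph node"]

cm = confusion_matrix(y_true, y_pred, labels=classes)
print(cm)
per_class_recall = cm.diagonal() / cm.sum(axis=1)
for name, recall in zip(classes, per_class_recall):
    print(f"{name}: {recall:.2%}")
```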

9.
Eur Radiol Exp ; 6(1): 30, 2022 07 20.
Article in English | MEDLINE | ID: mdl-35854186

ABSTRACT

BACKGROUND: We investigated whether features derived from texture analysis (TA) can distinguish breast density (BD) in spiral photon-counting breast computed tomography (PC-BCT). METHODS: In this retrospective single-centre study, we analysed 10,000 images from 400 PC-BCT examinations of 200 patients. Images were categorised into a four-level density scale (a-d) using Breast Imaging Reporting and Data System (BI-RADS)-like criteria. After manual definition of representative regions of interest, 19 texture features (TFs) were calculated to analyse the voxel grey-level distribution in the included image area. ANOVA, cluster analysis, and multinomial logistic regression were used for statistical analysis. A human readout was then performed on a subset of 60 images to evaluate the reliability of the proposed feature set. RESULTS: Of the 19 TFs, 4 first-order features and 7 second-order features showed significant correlation with BD and were selected for further analysis. Multinomial logistic regression revealed an overall accuracy of 80% for BD assessment. The majority of TFs systematically increased or decreased with BD. Skewness (rho -0.81), as a first-order feature, and grey-level nonuniformity (GLN, -0.59), as a second-order feature, showed the strongest correlation with BD, independently of other TFs. Mean skewness and GLN decreased linearly from density a to d. Run-length nonuniformity (RLN), as a second-order feature, showed moderate correlation with BD but was redundant, being correlated with GLN. All other TFs showed only weak correlation with BD (range -0.49 to 0.49, p < 0.001) and were disregarded. CONCLUSION: TA of PC-BCT images might be a useful approach to assess BD and may serve as an observer-independent tool.
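A minimal sketch of the statistical approach outlined above: compute a first-order feature (skewness) from an ROI's grey-level distribution and fit a multinomial logistic regression mapping texture features to the four density classes a-d. All data below is randomly generated for illustration.

```python
# Skewness of ROI grey levels plus a multinomial logistic regression over
# a texture-feature matrix. Synthetic data only.
import numpy as np
from scipy.stats import skew
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

roi = rng.normal(loc=100, scale=15, size=(64, 64))        # placeholder ROI grey levels
print("skewness of ROI grey levels:", skew(roi.ravel()))

X = rng.normal(size=(200, 11))                             # 11 selected TFs per image (synthetic)
y = rng.integers(0, 4, size=200)                           # density classes a-d (synthetic)

clf = LogisticRegression(max_iter=1000)                    # multinomial for multiclass targets
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```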


Subjects
Algorithms; Breast Density; Humans; Reproducibility of Results; Retrospective Studies; Tomography, X-Ray Computed/methods