Results 1 - 20 of 8,703
1.
PLoS One ; 19(5): e0302641, 2024.
Article in English | MEDLINE | ID: mdl-38753596

ABSTRACT

The development of automated tools using advanced technologies such as deep learning holds great promise for improving the accuracy of lung nodule classification in computed tomography (CT) imaging, ultimately reducing lung cancer mortality rates. However, lung nodules can be difficult to detect and classify from CT images, since different imaging modalities may provide varying levels of detail and clarity. In addition, existing convolutional neural networks may struggle to detect nodules that are small or located in difficult-to-detect regions of the lung. Therefore, the attention pyramid pooling network (APPN) is proposed to identify and classify lung nodules. First, a strong feature extractor, VGG16, is used to obtain features from CT images. Then, an attention primary pyramid module is proposed by combining the attention mechanism and the pyramid pooling module, which allows features at different scales to be fused and focuses on the features most important for nodule classification. Finally, a gated spatial memory technique is used to decode the general features, extracting more accurate features for classifying lung nodules. Experimental results on the LIDC-IDRI dataset show that the APPN achieves highly accurate and effective lung nodule classification, with a sensitivity of 87.59%, specificity of 90.46%, accuracy of 88.47%, positive predictive value of 95.41%, negative predictive value of 76.29%, and area under the receiver operating characteristic curve of 0.914.
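The performance figures above all follow from a nodule-level confusion matrix. As a quick reference, a minimal sketch of the standard definitions (the counts below are hypothetical illustrations, not the paper's data):

```python
def nodule_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics commonly reported for
    nodule classification (all values returned as fractions)."""
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    ppv = tp / (tp + fp)                         # positive predictive value
    npv = tn / (tn + fn)                         # negative predictive value
    return sensitivity, specificity, accuracy, ppv, npv

# Hypothetical counts for illustration only:
sens, spec, acc, ppv, npv = nodule_metrics(tp=88, fp=10, tn=90, fn=12)
```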


Subjects
Lung Neoplasms , Neural Networks, Computer , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/diagnosis , Tomography, X-Ray Computed/methods , Deep Learning , Solitary Pulmonary Nodule/diagnostic imaging , Solitary Pulmonary Nodule/diagnosis , Multiple Pulmonary Nodules/diagnostic imaging , Multiple Pulmonary Nodules/diagnosis , Algorithms , Lung/diagnostic imaging , Lung/pathology , Radiographic Image Interpretation, Computer-Assisted/methods
2.
Radiology ; 311(2): e232286, 2024 May.
Article in English | MEDLINE | ID: mdl-38771177

ABSTRACT

Background Artificial intelligence (AI) is increasingly used to manage radiologists' workloads. The impact of patient characteristics on AI performance has not been well studied. Purpose To understand the impact of patient characteristics (race and ethnicity, age, and breast density) on the performance of an AI algorithm interpreting negative screening digital breast tomosynthesis (DBT) examinations. Materials and Methods This retrospective cohort study identified negative screening DBT examinations from an academic institution from January 1, 2016, to December 31, 2019. All examinations had 2 years of follow-up without a diagnosis of atypia or breast malignancy and were therefore considered true negatives. A subset of unique patients was randomly selected to provide a broad distribution of race and ethnicity. DBT studies in this final cohort were interpreted by a U.S. Food and Drug Administration-approved AI algorithm, which generated case scores (malignancy certainty) and risk scores (1-year subsequent malignancy risk) for each mammogram. Positive examinations were classified based on vendor-provided thresholds for both scores. Multivariable logistic regression was used to understand relationships between the scores and patient characteristics. Results A total of 4855 patients (median age, 54 years [IQR, 46-63 years]) were included: 27% (1316 of 4855) White, 26% (1261 of 4855) Black, 28% (1351 of 4855) Asian, and 19% (927 of 4855) Hispanic patients. False-positive case scores were significantly more likely in Black patients (odds ratio [OR] = 1.5 [95% CI: 1.2, 1.8]) and less likely in Asian patients (OR = 0.7 [95% CI: 0.5, 0.9]) compared with White patients, and more likely in older patients (71-80 years; OR = 1.9 [95% CI: 1.5, 2.5]) and less likely in younger patients (41-50 years; OR = 0.6 [95% CI: 0.5, 0.7]) compared with patients aged 51-60 years. 
False-positive risk scores were more likely in Black patients (OR = 1.5 [95% CI: 1.0, 2.0]), patients aged 61-70 years (OR = 3.5 [95% CI: 2.4, 5.1]), and patients with extremely dense breasts (OR = 2.8 [95% CI: 1.3, 5.8]) compared with White patients, patients aged 51-60 years, and patients with fatty density breasts, respectively. Conclusion Patient characteristics influenced the case and risk scores of a Food and Drug Administration-approved AI algorithm analyzing negative screening DBT examinations. © RSNA, 2024.
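For context, the unadjusted odds ratio behind statements like "OR = 1.5 [95% CI: 1.2, 1.8]" can be computed from a 2×2 table with a Wald interval on the log-odds scale. The study itself used multivariable logistic regression, so this sketch (with made-up counts) illustrates only the basic quantity:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table:
        exposed group:   a events, b non-events
        reference group: c events, d non-events
    with a Wald 95% CI computed on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: 20/100 false positives vs 10/100 in the reference group
or_, lo, hi = odds_ratio_ci(20, 80, 10, 90)
```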


Subjects
Algorithms , Artificial Intelligence , Breast Neoplasms , Mammography , Humans , Female , Middle Aged , Retrospective Studies , Mammography/methods , Breast Neoplasms/diagnostic imaging , Breast/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Aged , Adult , Breast Density
3.
Cancer Imaging ; 24(1): 60, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38720391

ABSTRACT

BACKGROUND: This study systematically compares the impact of innovative deep learning image reconstruction (DLIR, TrueFidelity) with that of conventionally used iterative reconstruction (IR) on nodule volumetry and subjective image quality (IQ) at highly reduced radiation doses. This is essential in the context of low-dose CT lung cancer screening, where accurate volumetry and characterization of pulmonary nodules in repeated CT scanning are indispensable. MATERIALS AND METHODS: A standardized CT dataset was established using an anthropomorphic chest phantom (Lungman, Kyoto Kagaku Inc., Kyoto, Japan) containing a set of 3D-printed lung nodules including six diameters (4 to 9 mm) and three morphology classes (lobular, spiculated, smooth), with an established ground truth. Images were acquired at varying radiation doses (6.04, 3.03, 1.54, 0.77, 0.41 and 0.20 mGy) and reconstructed with combinations of reconstruction kernels (soft and hard kernel) and reconstruction algorithms (ASIR-V and DLIR at low, medium and high strength). Semi-automatic volumetry measurements and subjective image quality scores recorded by five radiologists were analyzed with multiple linear regression and mixed-effect ordinal logistic regression models. RESULTS: Volumetric errors of nodules imaged with DLIR are up to 50% lower compared to ASIR-V, especially at radiation doses below 1 mGy and when reconstructed with a hard kernel. Also, across all nodule diameters and morphologies, volumetric errors are commonly lower with DLIR. Furthermore, DLIR renders higher subjective IQ, especially at the sub-mGy doses. Radiologists were up to nine times more likely to assign the highest IQ score to these images compared to those reconstructed with ASIR-V. Lung nodules with irregular margins and small diameters also had an increased likelihood (up to five times more likely) to be ascribed the best IQ scores when reconstructed with DLIR.
CONCLUSION: We observed that DLIR performs as well as, or even outperforms, conventionally used reconstruction algorithms in terms of volumetric accuracy and subjective IQ of nodules in an anthropomorphic chest phantom. As such, DLIR potentially allows the radiation dose delivered to lung cancer screening participants to be lowered without compromising accurate measurement and characterization of lung nodules.
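Volumetric error against a known phantom ground truth, as reported above, reduces to a simple percentage difference. A sketch, assuming a spherical 6 mm nodule and a hypothetical measured volume:

```python
import math

def volumetric_error_pct(measured_mm3, truth_mm3):
    """Absolute volume error as a percentage of the ground-truth volume."""
    return abs(measured_mm3 - truth_mm3) / truth_mm3 * 100.0

# e.g. a 6 mm spherical nodule; the measured volume of 100 mm³ is made up:
truth = 4.0 / 3.0 * math.pi * 3.0 ** 3        # ground-truth volume ≈ 113.1 mm³
error = volumetric_error_pct(100.0, truth)    # ≈ 11.6 %
```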


Subjects
Deep Learning , Lung Neoplasms , Multiple Pulmonary Nodules , Phantoms, Imaging , Radiation Dosage , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Multiple Pulmonary Nodules/diagnostic imaging , Multiple Pulmonary Nodules/pathology , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Solitary Pulmonary Nodule/diagnostic imaging , Solitary Pulmonary Nodule/pathology , Radiographic Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
4.
Biomed Phys Eng Express ; 10(4)2024 May 15.
Article in English | MEDLINE | ID: mdl-38701765

ABSTRACT

Purpose. To improve breast cancer risk prediction for young women, we have developed deep learning methods to estimate mammographic density from low dose mammograms taken at approximately 1/10th of the usual dose. We investigate the quality and reliability of the density scores produced on low dose mammograms, focussing on how image resolution and levels of training affect the low dose predictions. Methods. Deep learning models are developed and tested, with two feature extraction methods and an end-to-end trained method, on five different resolutions of 15,290 standard dose and simulated low dose mammograms with known labels. The models are further tested on a dataset with 296 matching standard and real low dose images, allowing performance on the low dose images to be ascertained. Results. Prediction quality on standard and simulated low dose images compared to labels is similar for all equivalent model training and image resolution versions. Increasing resolution results in improved performance of both feature extraction methods for standard and simulated low dose images, while the trained models show high performance across the resolutions. For the trained models, the Spearman rank correlation coefficient between predictions of standard and low dose images is 0.951 (0.937 to 0.960) at low resolution and 0.956 (0.942 to 0.965) at the highest resolution. If pairs of model predictions are averaged, similarity increases. Conclusions. Deep learning mammographic density predictions on low dose mammograms are highly correlated with standard dose equivalents for feature extraction and end-to-end approaches across multiple image resolutions. Deep learning models can reliably make high quality mammographic density predictions on low dose mammograms.
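The Spearman rank correlation reported between standard and low dose predictions is simply the Pearson correlation of the ranks. A minimal implementation (ignoring ties, which are rare for continuous density scores):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors.
    Simple version that assumes no ties in either input."""
    rx = np.argsort(np.argsort(x)).astype(float)   # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)   # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```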


Subjects
Breast Density , Breast Neoplasms , Deep Learning , Mammography , Radiation Dosage , Humans , Mammography/methods , Female , Breast Neoplasms/diagnostic imaging , Breast/diagnostic imaging , Image Processing, Computer-Assisted/methods , Reproducibility of Results , Algorithms , Radiographic Image Interpretation, Computer-Assisted/methods
6.
Biomed Phys Eng Express ; 10(4)2024 May 22.
Article in English | MEDLINE | ID: mdl-38744255

ABSTRACT

Purpose. To develop a method to extract statistical low-contrast detectability (LCD) and contrast-detail (C-D) curves from clinical patient images. Method. We used the region of air surrounding the patient as an alternative for a homogeneous region within a patient. A simple graphical user interface (GUI) was created to set the initial configuration for region of interest (ROI), ROI size, and minimum detectable contrast (MDC). The process was started by segmenting the air surrounding the patient with a threshold between -980 HU (Hounsfield units) and -1024 HU to get an air mask. The mask was trimmed using the patient center coordinates to avoid distortion from the patient table. It was used to automatically place square ROIs of a predetermined size. The mean pixel values in HU within each ROI were calculated, and the standard deviation (SD) from all the means was obtained. The MDC for a particular target size was generated by multiplying the SD by 3.29. A C-D curve was obtained by iterating this process for the other ROI sizes. This method was applied to the homogeneous area from the uniformity module of an ACR CT phantom to find the correlation between the parameters inside and outside the phantom, for 30 thoracic, 26 abdominal, and 23 head images. Results. The phantom images showed a significant linear correlation between the LCDs obtained from outside and inside the phantom, with R² values of 0.67 and 0.99 for variations in tube currents and tube voltages. This indicated that the air region outside the phantom can act as a surrogate for the homogeneous region inside the phantom to obtain the LCD and C-D curves. Conclusion. The C-D curves obtained from outside the ACR CT phantom show a strong linear correlation with those from inside the phantom. The proposed method can also be used to extract the LCD from patient images by using the region of air outside as a surrogate for a region inside the patient.
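The core of the described method, after air segmentation, is computing the minimum detectable contrast as 3.29 times the SD of the ROI means. A sketch of that step alone (ROI placement here is random for simplicity, whereas the paper places ROIs automatically from the trimmed air mask):

```python
import numpy as np

def minimum_detectable_contrast(hu_image, roi_size, n_rois=50, seed=0):
    """Statistical LCD for one target size: place square ROIs in an
    (assumed homogeneous) HU image region, take the mean HU of each ROI,
    and return 3.29 * SD of those means."""
    rng = np.random.default_rng(seed)
    h, w = hu_image.shape
    means = []
    for _ in range(n_rois):
        r = rng.integers(0, h - roi_size)
        c = rng.integers(0, w - roi_size)
        means.append(hu_image[r:r + roi_size, c:c + roi_size].mean())
    return 3.29 * float(np.std(means))
```

Iterating over several `roi_size` values yields the C-D curve described above.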


Subjects
Algorithms , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Phantoms, Imaging , Image Processing, Computer-Assisted/methods , User-Computer Interface , Radiographic Image Interpretation, Computer-Assisted/methods
7.
Eur Radiol Exp ; 8(1): 63, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38764066

ABSTRACT

BACKGROUND: Emphysema influences the appearance of lung tissue in computed tomography (CT). We evaluated whether this affects lung nodule detection by artificial intelligence (AI) and human readers (HR). METHODS: Individuals were selected from the "Lifelines" cohort who had undergone low-dose chest CT. Nodules in individuals without emphysema were matched to similar-sized nodules in individuals with at least moderate emphysema. AI results for nodular findings of 30-100 mm3 and 101-300 mm3 were compared to those of HR; two expert radiologists blindly reviewed discrepancies. Sensitivity and false positives (FPs)/scan were compared for emphysema and non-emphysema groups. RESULTS: Thirty-nine participants with and 82 without emphysema were included (n = 121, aged 61 ± 8 years (mean ± standard deviation), 58/121 males (47.9%)). AI and HR detected 196 and 206 nodular findings, respectively, yielding 109 concordant nodules and 184 discrepancies, including 118 true nodules. For AI, sensitivity was 0.68 (95% confidence interval 0.57-0.77) in emphysema versus 0.71 (0.62-0.78) in non-emphysema, with FPs/scan 0.51 and 0.22, respectively (p = 0.028). For HR, sensitivity was 0.76 (0.65-0.84) and 0.80 (0.72-0.86), with FPs/scan of 0.15 and 0.27 (p = 0.230). Overall sensitivity was slightly higher for HR than for AI, but this difference disappeared after the exclusion of benign lymph nodes. FPs/scan were higher for AI in emphysema than in non-emphysema (p = 0.028), while FPs/scan for HR were higher than AI for 30-100 mm3 nodules in non-emphysema (p = 0.009). CONCLUSIONS: AI resulted in more FPs/scan in emphysema compared to non-emphysema, a difference not observed for HR. RELEVANCE STATEMENT: In the creation of a benchmark dataset to validate AI software for lung nodule detection, the inclusion of emphysema cases is important due to the additional number of FPs. KEY POINTS: • The sensitivity of nodule detection by AI was similar in emphysema and non-emphysema. 
• AI had more FPs/scan in emphysema compared to non-emphysema. • Sensitivity and FPs/scan by the human reader were comparable for emphysema and non-emphysema. • Emphysema and non-emphysema representation in benchmark dataset is important for validating AI.
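The two per-group figures of merit used above, sensitivity over true nodules and false positives per scan, can be computed as follows (the counts are illustrative, not the study's):

```python
def detection_summary(true_pos, false_neg, false_pos, n_scans):
    """Per-group nodule detection summary: sensitivity over true nodules
    and the false-positive rate per scan."""
    sensitivity = true_pos / (true_pos + false_neg)
    fps_per_scan = false_pos / n_scans
    return sensitivity, fps_per_scan

# Hypothetical example: 70 of 100 nodules found, 22 FPs across 100 scans
sens, fps = detection_summary(true_pos=70, false_neg=30, false_pos=22, n_scans=100)
```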


Subjects
Artificial Intelligence , Pulmonary Emphysema , Tomography, X-Ray Computed , Humans , Male , Middle Aged , Female , Tomography, X-Ray Computed/methods , Pulmonary Emphysema/diagnostic imaging , Software , Sensitivity and Specificity , Lung Neoplasms/diagnostic imaging , Aged , Radiation Dosage , Solitary Pulmonary Nodule/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods
8.
BMC Med Inform Decis Mak ; 24(1): 126, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38755563

ABSTRACT

BACKGROUND: Chest X-ray imaging based abnormality localization, essential in diagnosing various diseases, faces significant clinical challenges due to complex interpretations and the growing workload of radiologists. While recent advances in deep learning offer promising solutions, there is still a critical issue of domain inconsistency in cross-domain transfer learning, which hampers the efficiency and accuracy of diagnostic processes. This study aims to address the domain inconsistency problem and improve automatic abnormality localization performance in heterogeneous chest X-ray image analysis, particularly in detecting abnormalities, by developing a self-supervised learning strategy called "BarlowTwins-CXR". METHODS: We utilized two publicly available datasets: the NIH Chest X-ray Dataset and the VinDr-CXR. The BarlowTwins-CXR approach was conducted in a two-stage training process. Initially, self-supervised pre-training was performed using an adjusted Barlow Twins algorithm on the NIH dataset with a ResNet50 backbone pre-trained on ImageNet. This was followed by supervised fine-tuning on the VinDr-CXR dataset using Faster R-CNN with Feature Pyramid Network (FPN). The study employed mean Average Precision (mAP) at an Intersection over Union (IoU) of 50% and Area Under the Curve (AUC) for performance evaluation. RESULTS: Our experiments showed a significant improvement in model performance with BarlowTwins-CXR. The approach achieved a 3% increase in mAP50 accuracy compared to traditional ImageNet pre-trained models. In addition, the Ablation CAM method revealed enhanced precision in localizing chest abnormalities. The study involved 112,120 images from the NIH dataset and 18,000 images from the VinDr-CXR dataset, indicating robust training and testing samples.
CONCLUSION: BarlowTwins-CXR significantly enhances the efficiency and accuracy of chest X-ray image-based abnormality localization, outperforming traditional transfer learning methods and effectively overcoming domain inconsistency in cross-domain scenarios. Our experiment results demonstrate the potential of using self-supervised learning to improve the generalizability of models in medical settings with limited amounts of heterogeneous data. This approach can be instrumental in aiding radiologists, particularly in high-workload environments, offering a promising direction for future AI-driven healthcare solutions.
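The Barlow Twins objective used for pre-training drives the cross-correlation matrix of two augmented views' embeddings toward the identity: diagonal terms toward 1 (invariance) and off-diagonal terms toward 0 (redundancy reduction). A NumPy sketch of the standard loss (the paper's adjusted variant and its hyperparameters may differ):

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins loss on two batches of embeddings (N x D):
    normalize each dimension, form the cross-correlation matrix C,
    then penalize (C_ii - 1)^2 plus lam * sum of off-diagonal C_ij^2."""
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-9)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-9)
    n = z_a.shape[0]
    c = (z_a.T @ z_b) / n                       # D x D cross-correlation
    diag = np.diagonal(c)
    on_diag = np.sum((diag - 1.0) ** 2)         # invariance term
    off_diag = np.sum(c ** 2) - np.sum(diag ** 2)  # redundancy-reduction term
    return float(on_diag + lam * off_diag)
```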


Subjects
Radiography, Thoracic , Supervised Machine Learning , Humans , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Datasets as Topic
10.
Radiology ; 311(2): e233270, 2024 May.
Article in English | MEDLINE | ID: mdl-38713028

ABSTRACT

Background Generating radiologic findings from chest radiographs is pivotal in medical image analysis. The emergence of OpenAI's generative pretrained transformer, GPT-4 with vision (GPT-4V), has opened new perspectives on the potential for automated image-text pair generation. However, the application of GPT-4V to real-world chest radiography is yet to be thoroughly examined. Purpose To investigate the capability of GPT-4V to generate radiologic findings from real-world chest radiographs. Materials and Methods In this retrospective study, 100 chest radiographs with free-text radiology reports were annotated by a cohort of radiologists, two attending physicians and three residents, to establish a reference standard. Of 100 chest radiographs, 50 were randomly selected from the National Institutes of Health (NIH) chest radiographic data set, and 50 were randomly selected from the Medical Imaging and Data Resource Center (MIDRC). The performance of GPT-4V at detecting imaging findings from each chest radiograph was assessed in the zero-shot setting (where it operates without prior examples) and few-shot setting (where it operates with two examples). Its outcomes were compared with the reference standard with regard to clinical conditions and their corresponding codes in the International Statistical Classification of Diseases, Tenth Revision (ICD-10), including the anatomic location (hereafter, laterality). Results In the zero-shot setting, in the task of detecting ICD-10 codes alone, GPT-4V attained an average positive predictive value (PPV) of 12.3%, average true-positive rate (TPR) of 5.8%, and average F1 score of 7.3% on the NIH data set, and an average PPV of 25.0%, average TPR of 16.8%, and average F1 score of 18.2% on the MIDRC data set.
When both the ICD-10 codes and their corresponding laterality were considered, GPT-4V produced an average PPV of 7.8%, average TPR of 3.5%, and average F1 score of 4.5% on the NIH data set, and an average PPV of 10.9%, average TPR of 4.9%, and average F1 score of 6.4% on the MIDRC data set. With few-shot learning, GPT-4V showed improved performance on both data sets. When contrasting zero-shot and few-shot learning, there were improved average TPRs and F1 scores in the few-shot setting, but there was not a substantial increase in the average PPV. Conclusion Although GPT-4V has shown promise in understanding natural images, it had limited effectiveness in interpreting real-world chest radiographs. © RSNA, 2024 Supplemental material is available for this article.
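When both the model output and the reference standard are sets of ICD-10 codes per radiograph, PPV, TPR, and F1 reduce to set overlap. A minimal sketch (the example codes below are arbitrary illustrations, not taken from the study):

```python
def code_detection_scores(predicted, reference):
    """PPV, TPR, and F1 when model output and reference standard are
    each a set of ICD-10 codes for one radiograph."""
    pred, ref = set(predicted), set(reference)
    tp = len(pred & ref)
    ppv = tp / len(pred) if pred else 0.0
    tpr = tp / len(ref) if ref else 0.0
    f1 = 2 * ppv * tpr / (ppv + tpr) if (ppv + tpr) else 0.0
    return ppv, tpr, f1
```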


Subjects
Radiography, Thoracic , Humans , Radiography, Thoracic/methods , Retrospective Studies , Female , Male , Middle Aged , Radiographic Image Interpretation, Computer-Assisted/methods , Aged , Adult
11.
Radiology ; 311(2): e232178, 2024 May.
Article in English | MEDLINE | ID: mdl-38742970

ABSTRACT

Background Accurate characterization of suspicious small renal masses is crucial for optimized management. Deep learning (DL) algorithms may assist with this effort. Purpose To develop and validate a DL algorithm for identifying benign small renal masses at contrast-enhanced multiphase CT. Materials and Methods Surgically resected renal masses measuring 3 cm or less in diameter at contrast-enhanced CT were included. The DL algorithm was developed by using retrospective data from one hospital between 2009 and 2021, with patients randomly allocated in a training and internal test set ratio of 8:2. Between 2013 and 2021, external testing was performed on data from five independent hospitals. A prospective test set was obtained between 2021 and 2022 from one hospital. Algorithm performance was evaluated by using the area under the receiver operating characteristic curve (AUC) and compared with the results of seven clinicians using the DeLong test. Results A total of 1703 patients (mean age, 56 years ± 12 [SD]; 619 female) with a single renal mass per patient were evaluated. The retrospective data set included 1063 lesions (874 in training set, 189 internal test set); the multicenter external test set included 537 lesions (12.3%, 66 benign) with 89 subcentimeter (≤1 cm) lesions (16.6%); and the prospective test set included 103 lesions (13.6%, 14 benign) with 20 (19.4%) subcentimeter lesions. The DL algorithm performance was comparable with that of urological radiologists: for the external test set, AUC was 0.80 (95% CI: 0.75, 0.85) versus 0.84 (95% CI: 0.78, 0.88) (P = .61); for the prospective test set, AUC was 0.87 (95% CI: 0.79, 0.93) versus 0.92 (95% CI: 0.86, 0.96) (P = .70). For subcentimeter lesions in the external test set, the algorithm and urological radiologists had similar AUC of 0.74 (95% CI: 0.63, 0.83) and 0.81 (95% CI: 0.68, 0.92) (P = .78), respectively. 
Conclusion The multiphase CT-based DL algorithm showed comparable performance with that of radiologists for identifying benign small renal masses, including lesions of 1 cm or less. Published under a CC BY 4.0 license. Supplemental material is available for this article.
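The AUC values compared above can be computed without an explicit ROC sweep, via the rank-sum (Mann-Whitney) identity: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A small sketch (scores and labels are made up):

```python
import numpy as np

def auc_rank(scores, labels):
    """AUC via the Mann-Whitney identity: fraction of positive/negative
    pairs where the positive scores higher (ties count as half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins / (len(pos) * len(neg)))
```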


Subjects
Contrast Media , Deep Learning , Kidney Neoplasms , Tomography, X-Ray Computed , Humans , Female , Male , Middle Aged , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/pathology , Retrospective Studies , Tomography, X-Ray Computed/methods , Prospective Studies , Radiographic Image Interpretation, Computer-Assisted/methods , Aged , Algorithms , Kidney/diagnostic imaging , Adult
12.
Eur Radiol Exp ; 8(1): 54, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38698099

ABSTRACT

BACKGROUND: We aimed to improve the image quality (IQ) of sparse-view computed tomography (CT) images using a U-Net for lung metastasis detection and determine the best tradeoff between number of views, IQ, and diagnostic confidence. METHODS: CT images from 41 subjects aged 62.8 ± 10.6 years (mean ± standard deviation, 23 men), 34 with lung metastasis, 7 healthy, were retrospectively selected (2016-2018) and forward projected onto 2,048-view sinograms. Six corresponding sparse-view CT data subsets at varying levels of undersampling were reconstructed from sinograms using filtered backprojection with 16, 32, 64, 128, 256, and 512 views. A dual-frame U-Net was trained and evaluated for each subsampling level on 8,658 images from 22 diseased subjects. A representative image per scan was selected from 19 subjects (12 diseased, 7 healthy) for a single-blinded multireader study. These slices, for all levels of subsampling, with and without U-Net postprocessing, were presented to three readers. IQ and diagnostic confidence were ranked using predefined scales. Subjective nodule segmentation was evaluated using sensitivity and Dice similarity coefficient (DSC); clustered Wilcoxon signed-rank test was used. RESULTS: The 64-projection sparse-view images resulted in 0.89 sensitivity and 0.81 DSC, while their counterparts, postprocessed with the U-Net, had improved metrics (0.94 sensitivity and 0.85 DSC) (p = 0.400). Fewer views led to insufficient IQ for diagnosis. For increased views, no substantial discrepancies were noted between sparse-view and postprocessed images. CONCLUSIONS: Projection views can be reduced from 2,048 to 64 while maintaining IQ and the confidence of the radiologists on a satisfactory level. RELEVANCE STATEMENT: Our reader study demonstrates the benefit of U-Net postprocessing for regular CT screenings of patients with lung metastasis to increase the IQ and diagnostic confidence while reducing the dose. 
KEY POINTS: • Sparse-projection-view streak artifacts reduce the quality and usability of sparse-view CT images. • U-Net-based postprocessing removes sparse-view artifacts while maintaining diagnostically accurate IQ. • Postprocessed sparse-view CTs drastically increase radiologists' confidence in diagnosing lung metastasis.
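Sparse-view subsets like the 16- to 512-view reconstructions above are typically simulated by evenly subsampling the fully sampled set of projection angles; a sketch under that assumption:

```python
import numpy as np

def sparse_view_angles(n_full=2048, n_sparse=64):
    """Evenly subsample projection angles (degrees over a 180° span) to
    simulate sparse-view acquisition from a fully sampled sinogram.
    Assumes n_sparse divides n_full."""
    full = np.linspace(0.0, 180.0, n_full, endpoint=False)
    return full[:: n_full // n_sparse]   # keep every (n_full/n_sparse)-th view
```

Reconstructing from each subset (e.g. with filtered backprojection) then yields the graded undersampling levels evaluated in the study.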


Subjects
Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Male , Middle Aged , Tomography, X-Ray Computed/methods , Female , Retrospective Studies , Radiographic Image Interpretation, Computer-Assisted/methods , Aged
13.
Comput Biol Med ; 173: 108361, 2024 May.
Article in English | MEDLINE | ID: mdl-38569236

ABSTRACT

Deep learning plays a significant role in the detection of pulmonary nodules in low-dose computed tomography (LDCT) scans, contributing to the diagnosis and treatment of lung cancer. Nevertheless, its effectiveness often relies on the availability of extensive, meticulously annotated datasets. In this paper, we explore the utilization of an incompletely annotated dataset for pulmonary nodule detection and introduce the FULFIL (Forecasting Uncompleted Labels For Inexpensive Lung nodule detection) algorithm as an innovative approach. By instructing annotators to label only the nodules they are most confident about, without requiring complete coverage, we can substantially reduce annotation costs. Nevertheless, this approach results in an incompletely annotated dataset, which presents challenges when training deep learning models. Within the FULFIL algorithm, we employ a Graph Convolution Network (GCN) to discover the relationships between annotated and unannotated nodules for self-adaptively completing the annotation. Meanwhile, a teacher-student framework is employed for self-adaptive learning using the completed annotation dataset. Furthermore, we have designed a Dual-Views loss to leverage different data perspectives, aiding the model in acquiring robust features and enhancing generalization. We carried out experiments using the LUng Nodule Analysis (LUNA) dataset, achieving a sensitivity of 0.574 at 0.125 false positives per scan (FPs/scan) with only 10% instance-level annotations for nodules, outperforming comparable methods by 7.00%. Experimental comparisons were conducted to evaluate the performance of our model and human experts on the test dataset. The results demonstrate that our model can achieve a comparable level of performance to that of human experts.
The comprehensive experimental results demonstrate that FULFIL can effectively leverage an incomplete pulmonary nodule dataset to develop a robust deep learning model, making it a promising tool for assisting in lung nodule detection.
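The abstract does not specify how the teacher is updated; one common realization of a teacher-student framework keeps the teacher as an exponential moving average (EMA) of the student's weights, sketched here on plain dicts as a stand-in for model parameters:

```python
def ema_update(teacher, student, momentum=0.99):
    """One EMA step of a teacher-student scheme (a common choice, assumed
    here rather than taken from the paper): teacher weights drift slowly
    toward the student's current weights."""
    return {k: momentum * teacher[k] + (1.0 - momentum) * student[k]
            for k in teacher}
```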


Subjects
Deep Learning , Lung Neoplasms , Solitary Pulmonary Nodule , Humans , Solitary Pulmonary Nodule/diagnostic imaging , Lung Neoplasms/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Lung/diagnostic imaging
14.
Comput Biol Med ; 175: 108505, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38688129

ABSTRACT

The latest developments in deep learning have demonstrated the importance of CT medical imaging for the classification of pulmonary nodules. However, challenges remain in fully leveraging the relevant medical annotations of pulmonary nodules and distinguishing between the benign and malignant labels of adjacent nodules. Therefore, this paper proposes the Nodule-CLIP model, which deeply mines the potential relationships between CT images, complex attributes of lung nodules, and benign and malignant attributes of lung nodules through a contrastive learning method, and uses these similarities and differences to optimize the image feature extraction network, improving its ability to distinguish similar lung nodules. Firstly, we segment the 3D lung nodule information by U-Net to reduce the interference caused by the background of lung nodules and focus on the lung nodule images. Secondly, the image features, class features, and complex attribute features are aligned by contrastive learning and a loss function in Nodule-CLIP to achieve lung nodule image optimization and improve classification ability. A series of testing and ablation experiments were conducted on the public dataset LIDC-IDRI, with a final benign-malignant classification accuracy of 90.6% and a recall of 92.81%. The experimental results show the advantages of this method in terms of lung nodule classification as well as interpretability.
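The CLIP-style alignment at the heart of Nodule-CLIP scores image embeddings against text/attribute embeddings by temperature-scaled cosine similarity. A minimal sketch (the temperature value is an assumption, not taken from the paper):

```python
import numpy as np

def clip_style_logits(image_emb, text_emb, temperature=0.07):
    """CLIP-style alignment logits: cosine similarity between L2-normalized
    image embeddings (N x D) and text/attribute embeddings (M x D),
    scaled by a temperature. A sketch, not the paper's exact loss."""
    a = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    b = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    return (a @ b.T) / temperature   # N x M logit matrix
```

During contrastive training, each row of the logit matrix is pushed toward its matching column (e.g. with a cross-entropy loss over rows and columns).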


Subjects
Lung Neoplasms , Solitary Pulmonary Nodule , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/classification , Lung Neoplasms/pathology , Tomography, X-Ray Computed/methods , Solitary Pulmonary Nodule/diagnostic imaging , Deep Learning , Lung/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Databases, Factual
15.
J Cancer Res Ther ; 20(2): 615-624, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38687932

ABSTRACT

AIM: The accurate reconstruction of cone-beam computed tomography (CBCT) from sparse projections is one of the most important areas for study. The compressed sensing theory has been widely employed in the sparse reconstruction of CBCT. However, the total variation (TV) approach solely uses information from the i-coordinate, j-coordinate, and k-coordinate gradients to reconstruct the CBCT image. MATERIALS AND METHODS: It is well recognized that the CBCT image can be reconstructed more accurately with more gradient information from different directions. Thus, this study introduces a novel approach, the multi-gradient direction total variation minimization method. The method additionally uses gradient information from the ij-coordinate, ik-coordinate, and jk-coordinate directions to reconstruct CBCT images, incorporating nine different types of gradient information from nine directions. RESULTS: This study assessed the efficacy of the proposed methodology using under-sampled projections from four different experiments, including two digital phantoms, one patient's head dataset, and one physical phantom dataset. The results indicated that the proposed method achieved the lowest root mean square error (RMSE) and the highest structural similarity index (SSIM). Meanwhile, we compared the voxel intensity curves of the reconstructed images to assess edge structure preservation. Among the various methods compared, the curves generated by the proposed method exhibited the highest level of consistency with the gold standard image curves. CONCLUSION: In summary, the proposed method showed significant potential in enhancing the quality and accuracy of CBCT image reconstruction.
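The nine gradient directions described above (the three coordinate axes plus both diagonals of each of the ij, ik, and jk planes) can be accumulated into a single anisotropic TV value. A sketch of the penalty term only; the paper's exact weighting and minimization scheme may differ:

```python
import numpy as np

def multi_direction_tv(vol):
    """Anisotropic total variation of a 3-D volume summed over nine
    directions: axis gradients (i, j, k) and the two diagonal gradients
    in each of the ij, ik, and jk planes (3 + 6 = 9 directions)."""
    d = 0.0
    for ax in (0, 1, 2):                       # i, j, k axis gradients
        d += np.abs(np.diff(vol, axis=ax)).sum()
    for a, b in ((0, 1), (0, 2), (1, 2)):      # ij, ik, jk plane diagonals
        for sign in (1, -1):                   # (+1,+1) and (+1,-1) diagonals
            s0 = [slice(None)] * 3
            s1 = [slice(None)] * 3
            s0[a], s1[a] = slice(0, -1), slice(1, None)
            if sign == 1:
                s0[b], s1[b] = slice(0, -1), slice(1, None)
            else:
                s0[b], s1[b] = slice(1, None), slice(0, -1)
            d += np.abs(vol[tuple(s1)] - vol[tuple(s0)]).sum()
    return float(d)
```

A reconstruction algorithm would minimize this term alongside a data-fidelity term over the measured projections.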


Subjects
Algorithms , Cone-Beam Computed Tomography , Image Processing, Computer-Assisted , Phantoms, Imaging , Humans , Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Head/diagnostic imaging
16.
Eur J Radiol ; 175: 111457, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38640824

ABSTRACT

PURPOSE: This review provides an overview of the current state of artificial intelligence (AI) technology for automated detection of breast cancer in digital mammography (DM) and digital breast tomosynthesis (DBT). It aims to discuss the technology, available AI systems, and the challenges faced by AI in breast cancer screening. METHODS: The review examines the development of AI technology in breast cancer detection, focusing on deep learning (DL) techniques and their differences from traditional computer-aided detection (CAD) systems. It discusses data pre-processing, learning paradigms, and the need for independent validation approaches. RESULTS: DL-based AI systems have shown significant improvements in breast cancer detection. They have the potential to enhance screening outcomes, reduce false negatives and positives, and detect subtle abnormalities missed by human observers. However, challenges like the lack of standardised datasets, potential bias in training data, and regulatory approval hinder their widespread adoption. CONCLUSIONS: AI technology has the potential to improve breast cancer screening by increasing accuracy and reducing radiologist workload. DL-based AI systems show promise in enhancing detection performance and eliminating variability among observers. Standardised guidelines and trustworthy AI practices are necessary to ensure fairness, traceability, and robustness. Further research and validation are needed to establish clinical trust in AI. Collaboration between researchers, clinicians, and regulatory bodies is crucial to address challenges and promote AI implementation in breast cancer screening.


Subjects
Artificial Intelligence , Breast Neoplasms , Mammography , Breast Neoplasms/diagnostic imaging , Humans , Female , Mammography/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Early Detection of Cancer/methods
17.
Radiol Artif Intell ; 6(3): e230318, 2024 May.
Article in English | MEDLINE | ID: mdl-38568095

ABSTRACT

Purpose To develop an artificial intelligence (AI) model for the diagnosis of breast cancer on digital breast tomosynthesis (DBT) images and to investigate whether it could improve diagnostic accuracy and reduce radiologist reading time. Materials and Methods A deep learning AI algorithm was developed and validated for DBT with retrospectively collected examinations (January 2010 to December 2021) from 14 institutions in the United States and South Korea. A multicenter reader study was performed to compare the performance of 15 radiologists (seven breast specialists, eight general radiologists) in interpreting DBT examinations in 258 women (mean age, 56 years ± 13.41 [SD]), including 65 cancer cases, with and without the use of AI. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and reading time were evaluated. Results The AUC for stand-alone AI performance was 0.93 (95% CI: 0.92, 0.94). With AI, radiologists' AUC improved from 0.90 (95% CI: 0.86, 0.93) to 0.92 (95% CI: 0.88, 0.96) (P = .003) in the reader study. AI showed higher specificity (89.64% [95% CI: 85.34%, 93.94%]) than radiologists (77.34% [95% CI: 75.82%, 78.87%]) (P < .001). When reading with AI, radiologists' sensitivity increased from 85.44% (95% CI: 83.22%, 87.65%) to 87.69% (95% CI: 85.63%, 89.75%) (P = .04), with no evidence of a difference in specificity. Reading time decreased from 54.41 seconds (95% CI: 52.56, 56.27) without AI to 48.52 seconds (95% CI: 46.79, 50.25) with AI (P < .001). Interreader agreement measured by Fleiss κ increased from 0.59 to 0.62. Conclusion The AI model showed better diagnostic accuracy than radiologists in breast cancer detection, as well as reduced reading times. The concurrent use of AI in DBT interpretation could improve both accuracy and efficiency. 
Keywords: Breast, Computer-Aided Diagnosis (CAD), Tomosynthesis, Artificial Intelligence, Digital Breast Tomosynthesis, Breast Cancer, Computer-Aided Detection, Screening. Supplemental material is available for this article. © RSNA, 2024. See also the commentary by Bae in this issue.
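The reader study above reports AUC, sensitivity, and specificity. As a small illustrative sketch (with made-up scores and labels, not the study's data), these metrics can be computed from classifier scores as follows, using the rank-based Mann-Whitney identity for the AUC:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity:
    AUC = P(score_pos > score_neg), with ties counted as 1/2."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Pairwise comparison; fine for small illustrative arrays.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity when calling 'positive' at or
    above the given score threshold."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pred = scores >= threshold
    sens = (pred & labels).sum() / labels.sum()
    spec = (~pred & ~labels).sum() / (~labels).sum()
    return sens, spec
```

In practice, a reader study additionally compares these statistics across readers and conditions with dedicated multi-reader multi-case methods, which this sketch does not cover.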


Subjects
Artificial Intelligence , Breast Neoplasms , Mammography , Sensitivity and Specificity , Humans , Female , Breast Neoplasms/diagnostic imaging , Middle Aged , Mammography/methods , Retrospective Studies , Radiographic Image Interpretation, Computer-Assisted/methods , Republic of Korea/epidemiology , Deep Learning , Adult , Time Factors , Algorithms , United States , Reproducibility of Results
18.
J Appl Clin Med Phys ; 25(5): e14337, 2024 May.
Article in English | MEDLINE | ID: mdl-38576183

ABSTRACT

PURPOSE: The quality of on-board imaging systems, including cone-beam computed tomography (CBCT), plays a vital role in image-guided radiation therapy (IGRT) and adaptive radiotherapy. Recently, the CBCT systems integrated into the O-ring linear accelerators were upgraded to HyperSight, which features high imaging performance. As characterization of a new imaging system is essential, we evaluated the image quality of the HyperSight system, comparing it with Halcyon 3.0 CBCT and providing benchmark data for routine imaging quality assurance. METHODS: HyperSight features an ultra-fast scan time, a larger kilovoltage (kV) detector, a more powerful kV tube, and an advanced reconstruction algorithm. Imaging protocols in the two modes of operation, the treatment (IGRT) mode and the CBCT-for-planning (CBCTp) mode, were evaluated and compared with Halcyon 3.0 CBCT. Image quality metrics, including spatial resolution, contrast resolution, uniformity, noise, computed tomography (CT) number linearity, and calibration error, were assessed using a Catphan phantom and an electron density phantom and analyzed with TotalQA software. RESULTS: HyperSight demonstrated substantial improvements in contrast-to-noise ratio and noise in both IGRT and CBCTp modes compared with Halcyon 3.0 CBCT. The CT number calibration error of the HyperSight CBCTp mode (1.06%) closely matched that of a full CT scanner (0.72%), making it suitable for adaptive planning. In addition, the advanced hardware of HyperSight, such as its ultra-fast scan time (5.9 s) and 2.5-times-larger heat unit capacity, enhanced clinical efficiency in our experience. CONCLUSIONS: HyperSight represents a significant advancement in CBCT imaging. With its image quality, CT number accuracy, and ultra-fast scans, HyperSight has the potential to transform patient care and treatment outcomes.
The enhanced scan speed and image quality of HyperSight are expected to significantly improve the quality and efficiency of treatment, particularly benefiting patients.
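The abstract does not define how contrast-to-noise ratio (CNR) or CT-number calibration error were computed, so the following is only one plausible sketch, assuming ROI-based definitions: CNR as the contrast between two ROI means divided by background noise, and calibration error as mean absolute deviation of measured from nominal HU, expressed relative to the insert range.

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio from two regions of interest (HU
    arrays): contrast between the mean values divided by the
    background noise (sample standard deviation)."""
    s = np.asarray(roi_signal, dtype=float)
    b = np.asarray(roi_background, dtype=float)
    return abs(s.mean() - b.mean()) / b.std(ddof=1)

def ct_number_error(measured_hu, nominal_hu):
    """Mean absolute CT-number calibration error as a percentage of
    the full HU range spanned by the measured inserts. This specific
    normalization is an assumption; the paper may define it otherwise."""
    m = np.asarray(measured_hu, dtype=float)
    n = np.asarray(nominal_hu, dtype=float)
    return 100.0 * np.mean(np.abs(m - n)) / (n.max() - n.min())
```

Routine QA would evaluate these metrics per protocol (IGRT and CBCTp modes) against baseline values established at commissioning.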


Subjects
Algorithms , Cone-Beam Computed Tomography , Image Processing, Computer-Assisted , Particle Accelerators , Phantoms, Imaging , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted , Radiotherapy, Image-Guided , Cone-Beam Computed Tomography/methods , Particle Accelerators/instrumentation , Humans , Radiotherapy Planning, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Radiotherapy, Image-Guided/methods , Radiotherapy, Intensity-Modulated/methods , Quality Assurance, Health Care/standards , Radiographic Image Interpretation, Computer-Assisted/methods
19.
Eur J Radiol ; 175: 111448, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38574510

ABSTRACT

PURPOSE: The aim of the present study was to identify a method for optimizing the quality of CT scans in oncological patients with port systems. This study investigates the potential of photon-counting computed tomography (PCCT) to reduce beam-hardening artifacts caused by port implants in chest imaging by means of spectral reconstructions. METHOD: In this retrospective single-center study, 8 ROIs in each of 19 spectral reconstructions (polyenergetic imaging, monoenergetic reconstructions from 40 to 190 keV, as well as iodine maps and virtual non-contrast (VNC) images) were measured in 49 patients with pectoral port systems undergoing PCCT of the chest for staging of oncologic disease. Mean and standard deviation (SD) Hounsfield unit measurements of port-chamber-associated hypo- and hyperdense artifacts, bilateral muscles, and vessels were carried out. A structured assessment of artifacts and imaging findings was also performed by two radiologists. RESULTS: A significant association of keV with iodine contrast as well as with artifact intensity was noted (all p < 0.001). In the qualitative assessment, 120 keV monoenergetic reconstructions eliminated severe and pronounced artifacts completely, compared with lower-keV reconstructions (p < 0.001). Regarding imaging findings, no significant difference between monoenergetic reconstructions was noted (all p > 0.05). In cases with very high iodine concentrations in the subclavian vein, image distortions were noted in 40 keV images (p < 0.01). CONCLUSIONS: The present study demonstrates that PCCT-derived spectral reconstructions can be used in oncological imaging of the thorax to reduce port-derived beam-hardening artifacts. When evaluating image datasets for staging, it can be particularly helpful to consider the 120 keV virtual monoenergetic images (VMIs), in which artifacts are comparatively low.
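Virtual monoenergetic images are commonly synthesized from material-basis decompositions: each voxel's attenuation at the chosen energy is a linear combination of the basis maps weighted by energy-dependent attenuation coefficients, then converted to HU. The sketch below illustrates this general principle with a two-material (water/iodine) basis; the coefficient values and units are hypothetical placeholders, not the vendor's actual reconstruction.

```python
import numpy as np

def vmi_hu(water_frac, iodine_conc, mu_water_e, mu_iodine_e):
    """Virtual monoenergetic image (in HU) at energy E from two
    material-basis maps: water volume fraction and iodine
    concentration. mu_water_e is water's linear attenuation at E;
    mu_iodine_e is iodine's attenuation per unit concentration at E.
    Real values would come from tabulated attenuation data."""
    mu = water_frac * mu_water_e + iodine_conc * mu_iodine_e
    # HU definition: scaled deviation from water's attenuation.
    return 1000.0 * (mu - mu_water_e) / mu_water_e
```

Because iodine's attenuation falls steeply with energy, iodine contrast (and iodine-driven beam-hardening artifacts) shrink at high keV, consistent with the 120 keV VMIs performing best for artifact reduction here.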


Subjects
Artifacts , Radiography, Thoracic , Tomography, X-Ray Computed , Humans , Male , Female , Middle Aged , Aged , Tomography, X-Ray Computed/methods , Radiography, Thoracic/methods , Retrospective Studies , Adult , Aged, 80 and over , Radiographic Image Interpretation, Computer-Assisted/methods , Photons , Reproducibility of Results
20.
Eur J Radiol ; 175: 111460, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38608501

ABSTRACT

BACKGROUND: Traumatic knee injuries are challenging to diagnose accurately on radiography and, to a lesser extent, on CT, with fractures sometimes overlooked. Ancillary signs such as joint effusion or lipo-hemarthrosis are indicative of fractures and suggest the need for further imaging. Artificial intelligence (AI) can automate image analysis, improving diagnostic accuracy and helping to prioritize clinically important X-ray or CT studies. OBJECTIVE: To develop and evaluate an AI algorithm for detecting effusion of any kind in knee X-rays and selected CT images and for distinguishing between simple effusion and lipo-hemarthrosis indicative of intra-articular fracture. METHODS: This retrospective study analyzed post-traumatic knee imaging from January 2016 to February 2023, categorizing images into lipo-hemarthrosis, simple effusion, or normal. It utilized the FishNet-150 algorithm for image classification, with class activation maps highlighting decision-influential regions. The AI's diagnostic accuracy was validated against a gold standard based on the evaluations of a radiologist with at least four years of experience. RESULTS: The analysis included CT images from 515 patients and X-rays from 637 post-traumatic patients, identifying lipo-hemarthrosis, simple effusion, and normal findings. The AI showed an AUC of 0.81 for detecting any effusion, 0.78 for simple effusion, and 0.83 for lipo-hemarthrosis in X-rays, and 0.89, 0.89, and 0.91, respectively, in CT images. CONCLUSION: The AI algorithm effectively detects knee effusion and differentiates between simple effusion and lipo-hemarthrosis in post-traumatic patients in both X-rays and selected CT images; further studies are needed to validate these results.
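The class activation maps mentioned above are typically computed in the classic CAM manner: the final convolutional feature maps are weighted by the classifier weights of the target class and summed over channels, producing a heat map of decision-influential regions. A minimal NumPy sketch of that computation follows; FishNet-150's actual classification head may differ, so treat this as illustrative only.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Classic CAM: weight the final convolutional feature maps
    (C x H x W) by the fully connected weights of the target class
    and sum over channels, yielding an H x W heat map."""
    w = fc_weights[class_idx]                    # (C,) class weights
    cam = np.tensordot(w, feature_maps, axes=1)  # (H, W) weighted sum
    cam = np.maximum(cam, 0)                     # keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                    # normalize to [0, 1]
    return cam
```

In practice the heat map is upsampled to the input image size and overlaid on the radiograph, letting readers verify that the model attends to the suprapatellar recess rather than to spurious image features.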


Subjects
Artificial Intelligence , Hemarthrosis , Knee Injuries , Tomography, X-Ray Computed , Humans , Knee Injuries/diagnostic imaging , Knee Injuries/complications , Tomography, X-Ray Computed/methods , Female , Male , Retrospective Studies , Hemarthrosis/diagnostic imaging , Hemarthrosis/etiology , Middle Aged , Adult , Algorithms , Aged , Exudates and Transudates/diagnostic imaging , Aged, 80 and over , Young Adult , Adolescent , Radiographic Image Interpretation, Computer-Assisted/methods , Knee Joint/diagnostic imaging , Sensitivity and Specificity