2.
J Med Imaging (Bellingham) ; 11(4): 045501, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38988989

ABSTRACT

Purpose: Radiologists are tasked with visually scrutinizing large amounts of data produced by 3D volumetric imaging modalities. Small signals can go unnoticed during the 3D search because they are hard to detect in the visual periphery. Recent advances in machine learning and computer vision have led to effective computer-aided detection (CADe) support systems with the potential to mitigate perceptual errors. Approach: Sixteen nonexpert observers searched through digital breast tomosynthesis (DBT) phantoms and single cross-sectional slices of the DBT phantoms. The 3D/2D searches occurred with and without a convolutional neural network (CNN)-based CADe support system. The model provided observers with bounding boxes superimposed on the image stimuli while they looked for a small microcalcification signal and a large mass signal. Eye gaze positions were recorded and correlated with changes in the area under the ROC curve (AUC). Results: The CNN-CADe improved the 3D search for the small microcalcification signal (ΔAUC = 0.098, p = 0.0002) and the 2D search for the large mass signal (ΔAUC = 0.076, p = 0.002). The CNN-CADe benefit in 3D for the small signal was markedly greater than in 2D (ΔΔAUC = 0.066, p = 0.035). Analysis of individual differences suggests that those who explored the least with eye movements benefited the most from the CNN-CADe (r = -0.528, p = 0.036). However, for the large signal, the 2D benefit was not significantly greater than the 3D benefit (ΔΔAUC = 0.033, p = 0.133). Conclusion: The CNN-CADe brings unique performance benefits to the 3D (versus 2D) search of small signals by reducing errors caused by the underexploration of the volumetric data.
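The AUC values above are rank statistics, so a ΔAUC between aided and unaided search can be computed directly from confidence ratings. A minimal pure-Python sketch; the observer ratings below are hypothetical, not the study's data:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (signal-present, signal-absent) score pairs
    ranked correctly, counting ties as half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical observer confidence ratings (NOT the study's data)
pos_unaided = [0.4, 0.6, 0.5, 0.7]   # signal-present trials, no CADe
neg_unaided = [0.3, 0.5, 0.4, 0.2]   # signal-absent trials, no CADe
pos_aided   = [0.7, 0.8, 0.6, 0.9]   # signal-present trials, with CADe
neg_aided   = [0.3, 0.4, 0.2, 0.3]   # signal-absent trials, with CADe

delta_auc = auc(pos_aided, neg_aided) - auc(pos_unaided, neg_unaided)
print(round(delta_auc, 3))  # -> 0.125
```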

3.
Cancers (Basel) ; 16(13)2024 Jun 29.
Article in English | MEDLINE | ID: mdl-39001465

ABSTRACT

The early detection of pancreatic ductal adenocarcinoma (PDAC) is essential for optimal treatment of pancreatic cancer patients. We propose a tumor detection framework to improve the detection of pancreatic head tumors on CT scans. In this retrospective study, CT images of 99 patients with pancreatic head cancer and 98 control cases from the Catharina Hospital Eindhoven were collected. A multi-stage 3D U-Net-based approach was used for PDAC detection, incorporating clinically significant secondary features such as pancreatic duct and common bile duct dilation. The developed algorithm was evaluated on a local test set comprising 59 CT scans and externally validated on 28 pancreatic cancer cases from a publicly available Medical Decathlon dataset. The tumor detection framework achieved a sensitivity of 0.97 and a specificity of 1.00, with an area under the receiver operating characteristic curve (AUROC) of 0.99, in detecting pancreatic head cancer in the local test set. In the external test set, we obtained similar results, with a sensitivity of 1.00. The model localized the tumor with acceptable accuracy, obtaining a Dice Similarity Coefficient (DSC) of 0.37. This study shows that a tumor detection framework utilizing CT scans and secondary signs of pancreatic cancer can detect pancreatic tumors with high accuracy.
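The DSC reported for localization is computed from binary masks. A small sketch with hypothetical voxel sets (toy 2-D coordinates standing in for 3-D voxels):

```python
def dice(pred, truth):
    """Dice Similarity Coefficient between two binary masks given as
    sets of voxel coordinates: 2|A∩B| / (|A| + |B|)."""
    if not pred and not truth:
        return 1.0
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Hypothetical 2-D example: predicted vs. ground-truth tumor voxels
truth = {(x, y) for x in range(4) for y in range(4)}        # 16 voxels
pred  = {(x, y) for x in range(2, 6) for y in range(2, 6)}  # 16 voxels, 4 overlap
print(dice(pred, truth))  # 2*4 / (16+16) -> 0.25
```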

4.
Phys Med ; 124: 103433, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-39002423

ABSTRACT

PURPOSE: Early detection of breast cancer has a significant effect on reducing its mortality rate. For this purpose, automated three-dimensional breast ultrasound (3-D ABUS) has recently been used alongside mammography. The 3-D volume produced by this imaging system comprises many slices, and the radiologist must review all of them to find a mass, a time-consuming task with a high probability of mistakes. Therefore, many computer-aided detection (CADe) systems have been developed to assist radiologists in this task. In this paper, we propose a novel CADe system for mass detection in 3-D ABUS images. METHODS: The proposed system includes two cascaded convolutional neural networks. The goal of the first network is to achieve the highest possible sensitivity; the goal of the second is to reduce false positives while maintaining high sensitivity. Both networks use an improved version of the 3-D U-Net architecture in which two types of modified Inception modules are used in the encoder section. In the second network, new attention units are also added to the skip connections, which receive the results of the first network as saliency maps. RESULTS: The system was evaluated on a dataset containing 60 3-D ABUS volumes from 43 patients with 55 masses. A sensitivity of 91.48% and a mean of 8.85 false positives per patient were achieved. CONCLUSIONS: The suggested mass detection system is fully automatic, requiring no user interaction. The results indicate that the sensitivity and mean FP per patient of the CADe system outperform those of competing techniques.
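The two-network design is a cascade: a high-sensitivity first stage followed by a false-positive-reducing second stage. The sketch below is a schematic with made-up scores and a stand-in stage-2 re-scorer, not the paper's CNNs:

```python
def cascade_detect(candidate_scores, stage2_rescore, t1=0.1, t2=0.5):
    """Two-stage cascade: stage 1 keeps every candidate above a
    deliberately low threshold (maximizing sensitivity); stage 2
    re-scores only the survivors to suppress false positives."""
    survivors = [(i, s) for i, s in enumerate(candidate_scores) if s >= t1]
    return [i for i, s in survivors if stage2_rescore(i, s) >= t2]

# Toy stand-in: stage 2 simply trusts the candidate score
scores = [0.05, 0.2, 0.95, 0.4, 0.85]
print(cascade_detect(scores, lambda i, s: s))  # -> [2, 4]
```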

5.
JMIR AI ; 3: e52211, 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38875574

ABSTRACT

BACKGROUND: Many promising artificial intelligence (AI) and computer-aided detection and diagnosis systems have been developed, but few have been successfully integrated into clinical practice. This is partly owing to a lack of user-centered design of AI-based computer-aided detection or diagnosis (AI-CAD) systems. OBJECTIVE: We aimed to assess the impact of different onboarding tutorials and levels of AI model explainability on radiologists' trust in AI and their use of AI recommendations in lung nodule assessment on computed tomography (CT) scans. METHODS: In total, 20 radiologists from 7 Dutch medical centers performed lung nodule assessment on CT scans under different conditions in a simulated-use study as part of a 2×2 repeated-measures quasi-experimental design. Two types of AI onboarding tutorial (reflective vs informative) and 2 levels of AI output (black box vs explainable) were designed. The radiologists first received an onboarding tutorial that was either informative or reflective. Subsequently, each radiologist assessed 7 CT scans, first without AI recommendations. AI recommendations were then shown, and the radiologists could adjust their initial assessments. Half of the participants received the recommendations via black-box AI output and half via explainable AI output. Mental model and psychological trust were measured before onboarding, after onboarding, and after assessing the 7 CT scans. We recorded whether radiologists changed their assessments of detected nodules, malignancy prediction, and follow-up advice for each CT assessment. In addition, we analyzed whether radiologists' trust in their assessments changed based on the AI recommendations. RESULTS: Both variations of the onboarding tutorial resulted in a significantly improved mental model of the AI-CAD system (informative P=.01 and reflective P=.01). After using AI-CAD, psychological trust significantly decreased for the group with explainable AI output (P=.02). On the basis of the AI recommendations, radiologists changed the number of reported nodules in 27 of 140 assessments, malignancy prediction in 32 of 140 assessments, and follow-up advice in 12 of 140 assessments. The changes were mostly an increased number of reported nodules, a higher estimated probability of malignancy, and earlier follow-up. The radiologists' confidence in the nodules they found changed in 82 of 140 assessments, in their estimated probability of malignancy in 50 of 140 assessments, and in their follow-up advice in 28 of 140 assessments. These changes were predominantly increases in confidence. The number of changed assessments and radiologists' confidence did not differ significantly between the groups that received different onboarding tutorials and AI outputs. CONCLUSIONS: Onboarding tutorials help radiologists gain a better understanding of AI-CAD and facilitate the formation of a correct mental model. If AI explanations do not consistently substantiate the probability of malignancy across patient cases, radiologists' trust in the AI-CAD system can be impaired. Radiologists' confidence in their assessments improved when using the AI recommendations.

6.
EBioMedicine ; 104: 105183, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38848616

ABSTRACT

BACKGROUND: Contrast-enhanced CT scans provide a means to detect unsuspected colorectal cancer. However, colorectal cancers on contrast-enhanced CT without bowel preparation may elude detection by radiologists. We aimed to develop a deep learning (DL) model for accurate detection of colorectal cancer and to evaluate whether it could improve radiologists' detection performance. METHODS: We developed a DL model using a manually annotated dataset (1196 cancer vs 1034 normal). The DL model was tested on an internal test set (98 vs 115), two external test sets (202 vs 265 in set 1; 252 vs 481 in set 2), and a real-world test set (53 vs 1524). We compared the detection performance of the DL model with that of radiologists and evaluated its capacity to enhance radiologists' performance. FINDINGS: In the four test sets, the DL model achieved areas under the receiver operating characteristic curve (AUCs) ranging from 0.957 to 0.994. In both the internal test set and external test set 1, the DL model yielded higher accuracy than radiologists (97.2% vs 86.0%, p < 0.0001; 94.9% vs 85.3%, p < 0.0001) and significantly improved radiologists' accuracy (93.4% vs 86.0%, p < 0.0001; 93.6% vs 85.3%, p < 0.0001). In the real-world test set, the DL model delivered sensitivity comparable to that of radiologists who had been informed of clinical indications for most cancer cases (94.3% vs 96.2%, p > 0.99), and it detected 2 cases that radiologists had missed. INTERPRETATION: The developed DL model can accurately detect colorectal cancer and improve radiologists' detection performance, showing its potential as an effective computer-aided detection tool. FUNDING: This study was supported by the National Science Fund for Distinguished Young Scholars of China (No. 81925023); the Regional Innovation and Development Joint Fund of the National Natural Science Foundation of China (No. U22A20345); the National Natural Science Foundation of China (No. 82072090 and No. 82371954); the Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (No. 2022B1212010011); and the High-level Hospital Construction Project (No. DFJHBF202105).
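The sensitivity, specificity, and accuracy figures above all derive from confusion counts. A sketch with hypothetical counts sized like the real-world test set (53 cancers, 1524 controls), not the study's actual tallies:

```python
def detection_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts sized like the real-world set (53 cancers, 1524 controls)
m = detection_metrics(tp=50, fp=30, tn=1494, fn=3)
print(round(m["sensitivity"], 3))  # 50/53 -> 0.943
```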


Subject(s)
Colorectal Neoplasms ; Contrast Media ; Deep Learning ; Tomography, X-Ray Computed ; Humans ; Colorectal Neoplasms/diagnostic imaging ; Colorectal Neoplasms/diagnosis ; Female ; Male ; Retrospective Studies ; Tomography, X-Ray Computed/methods ; Middle Aged ; Aged ; ROC Curve ; Adult ; Aged, 80 and over
7.
Technol Health Care ; 32(S1): 125-133, 2024.
Article in English | MEDLINE | ID: mdl-38759043

ABSTRACT

BACKGROUND: Transrectal ultrasound-guided prostate biopsy is the gold-standard diagnostic test for prostate cancer, but it is an invasive, non-targeted puncture examination with a high false-negative rate. OBJECTIVE: In this study, we aimed to develop a computer-assisted prostate cancer diagnosis method based on multiparametric MRI (mpMRI) images. METHODS: We retrospectively collected 106 patients who underwent radical prostatectomy after diagnosis by prostate biopsy. mpMRI images, including T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) sequences, were analyzed accordingly. We extracted regions of interest (ROIs) around the tumor and a benign area on the three sequential axial MRI images at the same level. ROI data from 433 mpMRI images were obtained, of which 202 were benign and 231 malignant. Of those, 50 benign and 50 malignant images were used for training, and the remaining 333 images were used for verification. Five main feature groups were extracted from the mpMRI images: histogram, GLCM, GLGCM, wavelet-based multi-fractional Brownian motion, and Minkowski functional features. The selected feature parameters were analyzed in MATLAB, and the three analysis methods with the highest accuracy were selected. RESULTS: In prostate cancer identification based on mpMRI images, a system using 58 texture features and three classification algorithms, Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Ensemble Learning (EL), performed well. In the T2WI-based classification, SVM achieved the optimal accuracy and AUC values of 64.3% and 0.67. In the DCE-based classification, SVM achieved the optimal accuracy and AUC values of 72.2% and 0.77. In the DWI-based classification, ensemble learning achieved the optimal accuracy and AUC values of 75.1% and 0.82. In the classification based on all data combined, SVM achieved the optimal accuracy and AUC values of 66.4% and 0.73. CONCLUSION: The proposed computer-aided diagnosis system provides a good assessment of prostate cancer diagnosis, which may reduce the burden on radiologists and improve the early diagnosis of prostate cancer.
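Of the three classifiers compared, KNN is simple enough to sketch in full. The two-dimensional vectors below are toy stand-ins for the 58 texture features; the labels and data are illustrative only:

```python
def knn_predict(train, labels, x, k=3):
    """k-nearest-neighbour vote (squared Euclidean distance) over
    feature vectors; KNN was one of the three classifiers compared."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(t, x)), lab)
        for t, lab in zip(train, labels)
    )
    votes = [lab for _, lab in dists[:k]]
    return max(set(votes), key=votes.count)

# Toy 2-D stand-ins for the 58 texture features
train = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = ["benign", "benign", "malignant", "malignant"]
print(knn_predict(train, labels, (0.85, 0.85)))  # -> malignant
```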


Subject(s)
Diagnosis, Computer-Assisted ; Prostatic Neoplasms ; Humans ; Male ; Prostatic Neoplasms/diagnostic imaging ; Prostatic Neoplasms/pathology ; Prostatic Neoplasms/diagnosis ; Retrospective Studies ; Middle Aged ; Aged ; Diagnosis, Computer-Assisted/methods ; Early Detection of Cancer/methods ; Multiparametric Magnetic Resonance Imaging/methods ; Magnetic Resonance Imaging/methods
8.
Clin Chest Med ; 45(2): 249-261, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38816086

ABSTRACT

Early detection with accurate classification of solid pulmonary nodules is critical in reducing lung cancer morbidity and mortality. Computed tomography (CT) remains the most widely used imaging examination for pulmonary nodule evaluation; however, other imaging modalities, such as PET/CT and MRI, are increasingly used for nodule characterization. Current advances in solid nodule imaging are largely due to developments in machine learning, including automated nodule segmentation and computer-aided detection. This review explores current multi-modality solid pulmonary nodule detection and characterization with discussion of radiomics and risk prediction models.


Subject(s)
Lung Neoplasms ; Solitary Pulmonary Nodule ; Tomography, X-Ray Computed ; Humans ; Lung Neoplasms/diagnostic imaging ; Lung Neoplasms/diagnosis ; Lung Neoplasms/pathology ; Solitary Pulmonary Nodule/diagnostic imaging ; Positron Emission Tomography Computed Tomography ; Magnetic Resonance Imaging ; Multiple Pulmonary Nodules/diagnostic imaging ; Early Detection of Cancer/methods
9.
Cureus ; 16(4): e58400, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38756258

ABSTRACT

Artificial intelligence (AI) has the potential to transform the healthcare industry by enhancing diagnosis, treatment, and resource allocation. Its integration into healthcare, however, presents ethical and practical issues that must be carefully addressed to ensure patient safety and equitable access to care. To realize its full potential, the ethical issues around data privacy, bias, and transparency, as well as the practical difficulties posed by workforce adaptability and statutory frameworks, must be resolved. While knowledge about the advantages of AI in healthcare is growing, there is a significant gap in understanding the moral and practical issues that accompany its application, particularly in the setting of emergency and critical care. Most current research concentrates on the benefits of AI; thorough studies investigating the potential disadvantages and ethical issues are scarce. The purpose of our article is to identify and examine the ethical and practical difficulties that arise when implementing AI in emergency medicine and critical care, to propose solutions to these issues, and to offer suggestions to healthcare professionals and policymakers. To integrate AI responsibly and successfully in these important healthcare domains, policymakers and healthcare professionals must collaborate to create strong regulatory frameworks, safeguard data privacy, mitigate bias, and give healthcare workers the necessary training.

10.
Phys Med ; 121: 103344, 2024 May.
Article in English | MEDLINE | ID: mdl-38593627

ABSTRACT

PURPOSE: To validate the performance of computer-aided detection (CAD) and volumetry software using an anthropomorphic phantom with a ground truth (GT) set of 3D-printed nodules. METHODS: The Kyoto Kagaku Lungman phantom, containing 3D-printed solid nodules of six diameters (4 to 9 mm) and three morphologies (smooth, lobulated, spiculated), was scanned at varying CTDIvol levels (6.04, 1.54, and 0.20 mGy). Combinations of reconstruction algorithms (iterative and deep learning image reconstruction) and kernels (soft and hard) were applied. Detection, volumetry, and density results recorded by a commercially available AI-based algorithm (AVIEW LCS+) were compared to the absolute GT, which was determined through µCT scanning at 50 µm resolution. The associations between image acquisition parameters or nodule characteristics and the accuracy of nodule detection and characterization were analyzed with chi-square tests and multiple linear regression. RESULTS: High levels of detection sensitivity and precision (at least 83% and 91%, respectively) were observed across all acquisitions. Neither reconstruction algorithm nor radiation dose showed a significant association with detection. Nodule diameter, however, showed a highly significant association with detection (p < 0.0001). Volumetric measurements for nodules > 6 mm were accurate to within 10% of the GT volume, regardless of dose and reconstruction. Nodule diameter and morphology were major determinants of volumetric accuracy (p < 0.001). Density assignment was not significantly influenced by any parameter. CONCLUSIONS: Our study confirms the software's accurate performance in nodule volumetry, detection, and density characterization, with robustness to variations in CT imaging protocols. This study suggests incorporating similar phantom setups into the quality assurance of CAD tools.
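The association tests mentioned (e.g., nodule diameter vs. detection) are chi-square tests on contingency tables. A sketch of the statistic with hypothetical counts, not the study's data:

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table,
    e.g. nodule-diameter bins (rows) vs. detected / missed (columns)."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical counts: small nodules missed more often than large ones
table = [[20, 10],   # 4-6 mm: detected, missed
         [28,  2]]   # 7-9 mm: detected, missed
print(round(chi_square(table), 3))  # -> 6.667
```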


Subject(s)
Image Processing, Computer-Assisted ; Phantoms, Imaging ; Radiation Dosage ; Tomography, X-Ray Computed ; Tomography, X-Ray Computed/methods ; Image Processing, Computer-Assisted/methods ; Algorithms ; Humans ; Printing, Three-Dimensional ; Software
11.
Radiol Artif Intell ; 6(3): e230318, 2024 May.
Article in English | MEDLINE | ID: mdl-38568095

ABSTRACT

Purpose To develop an artificial intelligence (AI) model for the diagnosis of breast cancer on digital breast tomosynthesis (DBT) images and to investigate whether it could improve diagnostic accuracy and reduce radiologist reading time. Materials and Methods A deep learning AI algorithm was developed and validated for DBT with retrospectively collected examinations (January 2010 to December 2021) from 14 institutions in the United States and South Korea. A multicenter reader study was performed to compare the performance of 15 radiologists (seven breast specialists, eight general radiologists) in interpreting DBT examinations in 258 women (mean age, 56 years ± 13.41 [SD]), including 65 cancer cases, with and without the use of AI. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and reading time were evaluated. Results The AUC for stand-alone AI performance was 0.93 (95% CI: 0.92, 0.94). With AI, radiologists' AUC improved from 0.90 (95% CI: 0.86, 0.93) to 0.92 (95% CI: 0.88, 0.96) (P = .003) in the reader study. AI showed higher specificity (89.64% [95% CI: 85.34%, 93.94%]) than radiologists (77.34% [95% CI: 75.82%, 78.87%]) (P < .001). When reading with AI, radiologists' sensitivity increased from 85.44% (95% CI: 83.22%, 87.65%) to 87.69% (95% CI: 85.63%, 89.75%) (P = .04), with no evidence of a difference in specificity. Reading time decreased from 54.41 seconds (95% CI: 52.56, 56.27) without AI to 48.52 seconds (95% CI: 46.79, 50.25) with AI (P < .001). Interreader agreement measured by Fleiss κ increased from 0.59 to 0.62. Conclusion The AI model showed better diagnostic accuracy than radiologists in breast cancer detection, as well as reduced reading times. The concurrent use of AI in DBT interpretation could improve both accuracy and efficiency. 
Keywords: Breast, Computer-Aided Diagnosis (CAD), Tomosynthesis, Artificial Intelligence, Digital Breast Tomosynthesis, Breast Cancer, Computer-Aided Detection, Screening. Supplemental material is available for this article. © RSNA, 2024. See also the commentary by Bae in this issue.
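The Fleiss κ used above for interreader agreement can be computed directly from a case-by-category rating table. A minimal sketch; the table below is illustrative, not the study's:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for agreement among n readers over N cases.
    ratings[i][j] = number of readers assigning case i to category j;
    every row must sum to the same reader count n."""
    N, n = len(ratings), sum(ratings[0])
    # mean observed per-case agreement
    p_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings) / N
    # chance agreement from overall category proportions
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(len(ratings[0]))]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Illustrative table: 3 readers, 4 cases, categories (recall, no recall)
print(round(fleiss_kappa([[3, 0], [0, 3], [3, 0], [2, 1]]), 3))  # -> 0.625
```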


Subject(s)
Artificial Intelligence ; Breast Neoplasms ; Mammography ; Sensitivity and Specificity ; Humans ; Female ; Breast Neoplasms/diagnostic imaging ; Middle Aged ; Mammography/methods ; Retrospective Studies ; Radiographic Image Interpretation, Computer-Assisted/methods ; Republic of Korea/epidemiology ; Deep Learning ; Adult ; Time Factors ; Algorithms ; United States ; Reproducibility of Results
12.
Article in English | MEDLINE | ID: mdl-38632166

ABSTRACT

PURPOSE: Intracranial aneurysm detection from 3D Time-Of-Flight Magnetic Resonance Angiography images is a problem of increasing clinical importance. Recently, a series of methods has shown promising performance by using segmentation neural networks. However, these methods may be less relevant in clinical settings, where diagnostic decisions rely on detecting objects rather than segmenting them. METHODS: We introduce a 3D single-stage object detection method tailored to small objects such as aneurysms. Our anchor-free method incorporates fast data annotation, adapted data sampling and generation to address the class-imbalance problem, and spherical representations for improved detection. RESULTS: A comprehensive evaluation was conducted, comparing our method with the state-of-the-art SCPM-Net, nnDetection, and nnUNet baselines, using two datasets comprising 402 subjects and adapted object detection metrics. Our method exhibited comparable or superior performance, with an average precision of 78.96%, a sensitivity of 86.78%, and 0.53 false positives per case. CONCLUSION: Our method significantly reduces detection complexity compared to existing methods and highlights the advantages of object detection over segmentation-based approaches for aneurysm detection. It also holds potential for application to other small-object detection problems.
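The reported metrics (lesion-level sensitivity and false positives per case) aggregate per-case detection results. A minimal sketch with hypothetical per-case counts:

```python
def detection_stats(cases):
    """Lesion-level sensitivity and false positives per case.
    Each case is (n_true_lesions, n_hits, n_false_positives)."""
    total_true = sum(t for t, _, _ in cases)
    total_hits = sum(h for _, h, _ in cases)
    total_fp   = sum(f for _, _, f in cases)
    return total_hits / total_true, total_fp / len(cases)

# Hypothetical per-case results
sens, fp_per_case = detection_stats([
    (2, 2, 1),   # 2 aneurysms, both detected, 1 false positive
    (1, 1, 0),   # detected, clean
    (1, 0, 0),   # missed
])
print(sens, fp_per_case)  # -> 0.75 0.3333333333333333
```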

13.
Article in English | MEDLINE | ID: mdl-38625446

ABSTRACT

PURPOSE: The quality and bias of annotations by annotators (e.g., radiologists) affect the performance of computer-aided detection (CAD) software based on machine learning. We hypothesized that differences in radiologists' years of experience in image interpretation contribute to annotation variability. In this study, we focused on how the performance of CAD software changes when it is retrained with cases annotated by radiologists of varying experience. METHODS: We used two types of CAD software, for lung nodule detection in chest computed tomography images and for cerebral aneurysm detection in magnetic resonance angiography images. Twelve radiologists with different years of experience independently annotated the lesions, and performance changes were investigated by retraining the CAD software twice, each time adding cases annotated by each radiologist. Additionally, we investigated the effects of retraining using integrated annotations from multiple radiologists. RESULTS: The performance of the CAD software after retraining differed among annotating radiologists. In some cases, performance was degraded compared with that of the initial software. Retraining with integrated annotations showed different performance trends depending on the target CAD software; notably, in cerebral aneurysm detection, performance decreased compared with using annotations from a single radiologist. CONCLUSIONS: Although the performance of the CAD software after retraining varied among the annotating radiologists, no direct correlation with their experience was found. When integrated annotations from multiple radiologists were used, the performance trends differed according to the type of CAD software.

14.
J Imaging Inform Med ; 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38627268

ABSTRACT

Architectural distortion (AD) is one of the most common findings on mammograms, and it may represent not only cancer but also a lesion such as a radial scar that may have an associated cancer. AD accounts for 18-45% of missed cancers, and the positive predictive value of AD is approximately 74.5%. Early detection of AD leads to early diagnosis and treatment of the cancer and improves the overall prognosis. However, detecting AD is a challenging task. In this work, we propose a new approach for detecting architectural distortion in mammography images by combining preprocessing methods with a novel structure fusion attention model. The proposed structure-focused weighted orientation preprocessing method combines the original image, an architecture enhancement map, and a weighted orientation map, highlighting suspicious AD locations. The proposed structure fusion attention model captures information from different channels and outperforms other models in terms of false positives and top sensitivity (the maximum sensitivity a model can achieve when accepting the largest number of false positives), reaching a top sensitivity of 0.92 with only 0.659 false positives per image. The findings suggest that combining preprocessing methods with a novel network architecture can lead to more accurate and reliable AD detection. Overall, the proposed approach offers a novel perspective on detecting AD, and we believe the method can be applied in clinical settings in the future, assisting radiologists in the early detection of AD from mammography and ultimately leading to earlier treatment of breast cancer patients.

15.
World J Gastrointest Endosc ; 16(3): 126-135, 2024 Mar 16.
Article in English | MEDLINE | ID: mdl-38577646

ABSTRACT

The number and variety of applications of artificial intelligence (AI) in gastrointestinal (GI) endoscopy are growing rapidly. New technologies based on machine learning (ML) and convolutional neural networks (CNNs) are at various stages of development and deployment to assist patients and endoscopists in preparing for endoscopic procedures, in the detection, diagnosis, and classification of pathology during endoscopy, and in the confirmation of key performance indicators. Platforms based on ML and CNNs require regulatory approval as medical devices. Interactions between humans and the technologies we use are complex and are influenced by design, behavioural, and psychological elements. Because AI differs substantially from prior technologies, important differences may be expected in how we interact with advice from AI technologies. Human-AI interaction (HAII) may be optimised by developing AI algorithms that minimise false positives and by designing platform interfaces that maximise usability. Human factors influencing HAII may include automation bias, alarm fatigue, algorithm aversion, learning effects, and deskilling. Each of these areas merits further study in the specific setting of AI applications in GI endoscopy, and professional societies should engage to ensure that sufficient emphasis is placed on human-centred design in the development of new AI technologies.

16.
Article in English | MEDLINE | ID: mdl-38645463

ABSTRACT

Purpose: To rule out hemorrhage, non-contrast CT (NCCT) scans are used for early evaluation of patients with suspected stroke. Recently, artificial intelligence tools have been developed to assist in determining eligibility for reperfusion therapies by automating measurement of hypodense volume and of the Alberta Stroke Program Early CT Score (ASPECTS), a 10-point scale on which scores > 7 versus ≤ 7 mark a threshold for change in predicted functional outcome and a higher chance of symptomatic hemorrhage. The purpose of this work was to investigate the effects of CT reconstruction kernel and slice thickness on ASPECTS and hypodense volume. Methods: The NCCT series image data of 87 patients imaged with a CT stroke protocol at our institution were reconstructed with 3 kernels (H10s, smooth; H40s, medium; H70h, sharp) and 2 slice thicknesses (1.5 mm and 5 mm) to create a reference condition (H40s/5 mm) and 5 non-reference conditions. Each reconstruction for each patient was analyzed with the Brainomix e-Stroke software (Brainomix, Oxford, England), which yields an ASPECTS value and a measure of total hypodense volume (mL). Results: An ASPECTS value was returned for 74 of 87 cases in the reference condition (13 failures). ASPECTS in non-reference conditions changed from that measured in the reference condition for 59 cases, 7 of which moved above or below the clinical threshold of 7 in 3 non-reference conditions. ANOVA tests were performed to compare the differences among protocols, followed by Dunnett's post-hoc tests, with a significance level of p < 0.05. For ASPECTS, there was no significant effect of kernel (p = 0.91), a significant effect of slice thickness (p < 0.01), and no significant interaction between these factors (p = 0.91); post-hoc tests indicated no significant difference between ASPECTS estimated in the reference and any non-reference condition. For hypodense volume, there were significant effects of kernel (p < 0.01) and slice thickness (p < 0.01), with no significant interaction (p = 0.79); post-hoc tests indicated significantly different hypodense volume measurements for H10s/1.5 mm (p = 0.03), H40s/1.5 mm (p < 0.01), and H70h/5 mm (p < 0.01), while no significant difference was found for the H10s/5 mm condition (p = 0.96). Conclusion: Automated ASPECTS and hypodense volume measurements can be significantly impacted by reconstruction kernel and slice thickness.
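The ANOVA comparisons above test whether measurements differ across reconstruction conditions. A one-way sketch of the F statistic with toy numbers (one factor only, unlike the study's two-factor design):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group mean square
    over within-group mean square (e.g. hypodense volumes grouped
    by reconstruction kernel)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy volume measurements (mL) under three hypothetical kernels
print(one_way_anova_f([[1, 2, 3], [2, 3, 4], [1, 3, 2]]))
```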

17.
Comput Biol Med ; 172: 108240, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38460312

ABSTRACT

OBJECTIVE: Neoadjuvant chemotherapy (NACT) is one treatment option for patients with advanced-stage ovarian cancer. However, owing to tumor heterogeneity, clinical responses to NACT vary significantly among subgroups. Partial response to NACT may lead to suboptimal debulking surgery, which results in adverse prognosis. To address this clinical challenge, the purpose of this study is to develop a novel image marker for accurate, early prediction of NACT outcome. METHODS: We first computed a total of 1373 radiomics features to quantify tumor characteristics, grouped into three categories: geometric, intensity, and texture features. Second, all features were reduced by a principal component analysis algorithm to generate a compact and informative feature cluster. This cluster was used as input for developing and optimizing support vector machine (SVM) based classifiers, which indicate the likelihood of suboptimal cytoreduction after NACT treatment. Two different kernels for the SVM algorithm were explored and compared. A total of 42 ovarian cancer cases were retrospectively collected to validate the scheme. A nested leave-one-out cross-validation framework was adopted for model performance assessment. RESULTS: The model with a Gaussian radial basis function kernel SVM yielded an AUC (area under the ROC [receiver operating characteristic] curve) of 0.806 ± 0.078. This model also achieved an overall accuracy (ACC) of 83.3%, positive predictive value (PPV) of 81.8%, and negative predictive value (NPV) of 83.9%. CONCLUSION: This study provides meaningful information for the development of radiomics-based image markers for NACT treatment outcome prediction.
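The leave-one-out framework can be sketched compactly. The classifier below is a nearest-centroid stand-in for the study's SVM, and the two-feature vectors are hypothetical stand-ins for the PCA-reduced radiomics cluster:

```python
def nearest_centroid_fit(X, y):
    """Mean feature vector per class label."""
    groups = {}
    for x, lab in zip(X, y):
        groups.setdefault(lab, []).append(x)
    return {lab: [sum(col) / len(v) for col in zip(*v)] for lab, v in groups.items()}

def nearest_centroid_predict(model, x):
    """Label of the closest class centroid (squared Euclidean distance)."""
    return min(model, key=lambda lab: sum((a - b) ** 2 for a, b in zip(model[lab], x)))

def loocv_accuracy(X, y):
    """Leave-one-out cross-validation: train on all cases but one,
    score the held-out case, and average over all cases."""
    hits = 0
    for i in range(len(X)):
        model = nearest_centroid_fit(X[:i] + X[i + 1:], y[:i] + y[i + 1:])
        hits += nearest_centroid_predict(model, X[i]) == y[i]
    return hits / len(X)

# Toy 2-feature stand-ins for the PCA-reduced radiomics features
X = [(0.1, 0.1), (0.2, 0.1), (0.8, 0.9), (0.9, 0.8)]
y = ["optimal", "optimal", "suboptimal", "suboptimal"]
print(loocv_accuracy(X, y))  # -> 1.0
```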


Subject(s)
Neoadjuvant Therapy; Ovarian Neoplasms; Humans; Female; Retrospective Studies; Ovarian Neoplasms/diagnostic imaging; Ovarian Neoplasms/drug therapy; Ovarian Neoplasms/surgery; Carcinoma, Ovarian Epithelial/drug therapy; Carcinoma, Ovarian Epithelial/surgery; Predictive Value of Tests
18.
Cancer Biomark ; 40(1): 1-25, 2024.
Article in English | MEDLINE | ID: mdl-38517775

ABSTRACT

BACKGROUND: Breast cancer is one of the leading causes of death in women worldwide. Histopathology analysis of breast tissue is an essential tool for diagnosing and staging breast cancer. In recent years, there has been a significant increase in research exploring deep-learning approaches for breast cancer detection from histopathology images. OBJECTIVE: To provide an overview of the current state-of-the-art technologies in automated breast cancer detection in histopathology images using deep learning techniques. METHODS: This review focuses on the use of deep learning algorithms for the detection and classification of breast cancer from histopathology images. We provide an overview of publicly available histopathology image datasets for breast cancer detection. We also highlight the strengths and weaknesses of the deep learning architectures employed and their performance on different histopathology image datasets. Finally, we discuss the challenges of applying deep learning techniques to breast cancer detection, including the need for large and diverse datasets and the interpretability of deep learning models. RESULTS: Deep learning techniques have shown great promise in accurately detecting and classifying breast cancer from histopathology images. Although accuracy levels vary with the specific dataset, image pre-processing techniques, and deep learning architecture used, these results highlight the potential of deep learning algorithms to improve the accuracy and efficiency of breast cancer detection from histopathology images. CONCLUSION: This review has presented a thorough account of the current state-of-the-art techniques for detecting breast cancer using histopathology images. The integration of machine learning and deep learning algorithms has demonstrated promising results in accurately identifying breast cancer from histopathology images.
The insights gathered here can serve as a valuable reference for researchers developing diagnostic strategies based on histopathology images. Overall, the objective of this review is to spark interest among scholars in this complex field and acquaint them with cutting-edge technologies in breast cancer detection using histopathology images.


Subject(s)
Breast Neoplasms; Deep Learning; Humans; Breast Neoplasms/pathology; Breast Neoplasms/diagnosis; Breast Neoplasms/diagnostic imaging; Female; Image Processing, Computer-Assisted/methods; Algorithms; Image Interpretation, Computer-Assisted/methods
19.
Sci Rep ; 14(1): 6290, 2024 03 15.
Article in English | MEDLINE | ID: mdl-38491186

ABSTRACT

Breast cancer (BC) is the second most common cause of cancer death in women. Recent work introduced a model for BC classification in which input breast images were pre-processed with median filters to reduce noise. Weighted K-means clustering (KMC) was used to segment the region of interest (ROI) after denoising. Block-based centre distance function (CDF) and diagonal texture matrix (CDTM) texture and shape descriptors were used for feature extraction. The extracted features were reduced in number using kernel principal component analysis (KPCA), and feature selection was computed using improved cuckoo search optimization (ICSO). A modified recurrent neural network (MRNN) was then optimized and used to classify BC as benign or malignant. However, ICSO has disadvantages such as slow search speed and low convergence accuracy, and training an MRNN is a difficult task. To avoid these problems, in this work pre-processing is done by bilateral filtering to remove noise from the input image; the bilateral filter uses a linear Gaussian kernel for smoothing. Contrast stretching is applied to improve image quality. ROI segmentation is computed with modified fuzzy C-means (MFCM) clustering. CDTM-based and CDF-based color histogram and shape description methods are applied for feature extraction; the color histogram summarizes two important pieces of information about an object: the colors present in the image and the relative proportion of each color. After the features are extracted, KPCA is used to reduce their dimensionality. Feature selection was performed using mutational chicken flock optimization (MCSO). Finally, BC detection and classification were performed using a fuzzy convolutional neural network (FCNN) whose parameters were optimized with MCSO. The proposed model is evaluated for precision, recall, f-measure, and accuracy.
The experimental results achieve high accuracy compared to other existing models.
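The contrast-stretching and fuzzy C-means steps of the revised pipeline can be illustrated in a few lines of NumPy. This is the plain FCM core applied to 1-D intensities for brevity; the spatial modifications that make it "MFCM", and the bilateral filtering stage, are omitted:

```python
import numpy as np

def contrast_stretch(img, lo_pct=2, hi_pct=98):
    """Percentile-based contrast stretching to the [0, 1] range."""
    lo, hi = np.percentile(img, (lo_pct, hi_pct))
    return np.clip((img - lo) / (hi - lo + 1e-9), 0.0, 1.0)

def fuzzy_c_means(x, c=2, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy C-means on a flattened intensity vector."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)  # fuzzy-weighted cluster means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
        u /= u.sum(axis=0)
    return np.sort(centers), u

# Bimodal toy "image": dark background vs. bright lesion intensities.
x = np.concatenate([np.full(200, 0.2), np.full(100, 0.8)])
centers, memberships = fuzzy_c_means(contrast_stretch(x))
```

After stretching, the two intensity modes land near 0 and 1, and the two cluster centers converge toward them; thresholding the membership map would yield the ROI mask.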


Subject(s)
Algorithms; Breast Neoplasms; Female; Humans; Breast Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; British Columbia
20.
Cancer Imaging ; 24(1): 40, 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38509635

ABSTRACT

BACKGROUND: Low-dose computed tomography (LDCT) has been shown to be useful in early lung cancer detection. This study aimed to develop a novel deep learning model for detecting pulmonary nodules on chest LDCT images. METHODS: In this secondary analysis, three lung nodule datasets, Lung Nodule Analysis 2016 (LUNA16), Lung Nodule Received Operation (LNOP), and Lung Nodule in Health Examination (LNHE), were used to train and test deep learning models. The 3D region proposal network (RPN) was modified via a series of pruning experiments for better predictive performance. The performance of each modified deep learning model was evaluated based on sensitivity and the competition performance metric (CPM). Furthermore, the performance of the modified 3D RPN trained on the three datasets was evaluated by 10-fold cross-validation. Temporal validation was conducted to assess the reliability of the modified 3D RPN for detecting lung nodules. RESULTS: The pruning experiments indicated that the modified 3D RPN composed of a CSP-ResNeXt module (the Cross Stage Partial Network approach applied to ResNeXt), a feature pyramid network (FPN), the nearest-anchor method, and post-processing masking had the best predictive performance, with a CPM of 92.2%. The modified 3D RPN trained on the LUNA16 dataset had the highest CPM (90.1%), followed by the LNOP dataset (CPM: 74.1%) and the LNHE dataset (CPM: 70.2%). When the modified 3D RPN was trained and tested on the same dataset, the sensitivities were 94.6%, 84.8%, and 79.7% for LUNA16, LNOP, and LNHE, respectively. Temporal validation revealed that the modified 3D RPN achieved a CPM of 71.6% and a sensitivity of 85.7% on the LNOP test set, and a CPM of 71.7% and a sensitivity of 83.5% on the LNHE test set.
CONCLUSION: A modified 3D RPN for detecting lung nodules on LDCT scans was designed and validated, which may serve as a computer-aided diagnosis system to facilitate lung nodule detection and lung cancer diagnosis.
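The competition performance metric (CPM) reported throughout these results is conventionally defined as the average sensitivity at seven false-positive rates read off the FROC curve. A minimal sketch (the function name is an assumption):

```python
import numpy as np

def competition_performance_metric(fp_per_scan, sensitivity):
    """Mean sensitivity at 1/8, 1/4, 1/2, 1, 2, 4, and 8 false positives
    per scan, linearly interpolated from a measured FROC curve.

    `fp_per_scan` must be increasing, matching `sensitivity` point-for-point.
    """
    operating_points = [0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
    sens = np.interp(operating_points, fp_per_scan, sensitivity)
    return float(sens.mean())

# Toy FROC curve: sensitivity rises as more false positives are tolerated.
fp = [0.1, 0.5, 1.0, 4.0, 10.0]
se = [0.60, 0.75, 0.85, 0.92, 0.95]
cpm = competition_performance_metric(fp, se)
```

Because CPM averages over fixed operating points, two detectors can be compared without choosing a single confidence threshold.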


A modified 3D RPN for detecting lung nodules on CT images was established that exhibited greater sensitivity and CPM than several previously reported CAD detection models.


Subject(s)
Lung Neoplasms; Solitary Pulmonary Nodule; Humans; Solitary Pulmonary Nodule/diagnostic imaging; Reproducibility of Results; Imaging, Three-Dimensional/methods; Lung; Tomography, X-Ray Computed/methods; Lung Neoplasms/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods