Results 1 - 20 of 501
1.
Biomed Eng Comput Biol ; 15: 11795972241288319, 2024.
Article in English | MEDLINE | ID: mdl-39372969

ABSTRACT

Objective: The aim is to detect impacted teeth in panoramic radiographs by refining the pretrained MedSAM model. Study design: Impacted teeth are dental issues that can cause complications and are diagnosed via radiographs. We modified the SAM model for individual tooth segmentation using 1016 X-ray images. The dataset was split into training, validation, and testing sets with a ratio of 16:3:1. We enhanced the SAM model to detect impacted teeth automatically by focusing on the tooth's center for more accurate results. Results: The model was trained on randomly sampled images for 200 epochs with a batch size of 1 and a learning rate of 0.001. On the test set, the SAM-based models achieved an accuracy of up to 86.73%, an F1-score of 0.5350, and an IoU of 0.3652. Conclusion: This study fine-tunes MedSAM for impacted tooth segmentation in X-ray images, aiding dental diagnoses. Further improvements in model accuracy and selection are essential for enhancing dental practitioners' diagnostic capabilities.
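
Illustrative only (not the authors' code): a minimal Python sketch of how the reported per-image metrics (accuracy, F1-score, IoU) can be computed from a predicted and a ground-truth binary tooth mask.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8):
    """Accuracy, F1 (Dice), and IoU for two binary masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    accuracy = (tp + tn) / (tp + tn + fp + fn + eps)
    f1 = 2 * tp / (2 * tp + fp + fn + eps)
    iou = tp / (tp + fp + fn + eps)
    return accuracy, f1, iou

# Toy 4x4 masks standing in for a predicted and a ground-truth tooth segmentation
pred = np.array([[0, 1, 1, 0]] * 4)
gt = np.array([[0, 1, 0, 0]] * 4)
print(segmentation_metrics(pred, gt))
```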

2.
Cogn Res Princ Implic ; 9(1): 59, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39218972

ABSTRACT

Computer Aided Detection (CAD) has been used to help readers find cancers in mammograms. Although these automated systems have been shown to help cancer detection when accurate, the presence of CAD also leads to an over-reliance effect in which miss errors and false alarms increase when the CAD system fails. Previous research investigated CAD systems that overlaid salient exogenous cues onto the image to highlight suspicious areas. These salient cues capture attention, which may exacerbate the over-reliance effect. Furthermore, overlaying CAD cues directly on the mammogram occludes sections of breast tissue, which may disrupt global statistics useful for cancer detection. In this study we investigated whether an over-reliance effect occurred with a binary CAD system that, instead of overlaying a CAD cue onto the mammogram, reported a message alongside the mammogram indicating the possible presence of a cancer. We manipulated the certainty of the message and whether it was presented only to indicate the presence of a cancer, or whether a message was displayed on every mammogram to state whether a cancer was present or absent. The results showed that although an over-reliance effect still occurred with binary CAD systems, miss errors were reduced when the CAD message was more definitive and was presented only to alert readers to a possible cancer.


Subjects
Breast Neoplasms; Mammography; Humans; Female; Breast Neoplasms/diagnostic imaging; Middle Aged; Diagnosis, Computer-Assisted; Adult; Aged; Cues; Early Detection of Cancer
3.
Neuroradiology ; 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39230716

ABSTRACT

PURPOSE: The aim of our study was to assess the diagnostic performance of commercially available AI software for intracranial aneurysm detection and to determine whether the AI system enhances the radiologist's accuracy in identifying aneurysms and reduces image analysis time. METHODS: TOF-MRA clinical brain examinations were analyzed by commercially available software and by a consultant neuroradiologist for the presence of intracranial aneurysms. The results were compared with the reference standard to measure the sensitivity and specificity of the software and of the consultant neuroradiologist. Furthermore, we examined the time required for the neuroradiologist to analyze the TOF-MRA image set, both with and without use of the AI software. RESULTS: In 500 TOF-MRA brain studies, 106 aneurysms were detected in 85 examinations by combining the AI software with the neuroradiologist's readings; this combined reading served as the reference for calculating sensitivity and specificity. The neuroradiologist identified 98 aneurysms (92.5% sensitivity), while the AI detected 77 aneurysms (72.6% sensitivity). Combining AI and neuroradiologist readings significantly improves detection reliability. Additionally, AI integration reduced TOF-MRA analysis time by 19 s (a 23% reduction). CONCLUSIONS: Our findings indicate that the AI-based software can support neuroradiologists in interpreting brain TOF-MRA. A combined reading of the AI-based software and the neuroradiologist demonstrated higher reliability in identifying intracranial aneurysms than reading by either the neuroradiologist or the software alone, thus improving the diagnostic accuracy of the neuroradiologist. Simultaneously, reading time for the neuroradiologist was reduced by approximately one quarter.
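
A minimal illustration (not the study's code) of tabulating a reader's sensitivity and specificity against a reference standard, here taken to be the combined AI-plus-neuroradiologist result described above.

```python
def sensitivity_specificity(reader_positive, reference_positive):
    """Per-examination sensitivity and specificity of one reader
    against a reference standard (e.g. the combined AI + reader result)."""
    pairs = list(zip(reader_positive, reference_positive))
    tp = sum(r and ref for r, ref in pairs)
    fn = sum((not r) and ref for r, ref in pairs)
    tn = sum((not r) and (not ref) for r, ref in pairs)
    fp = sum(r and (not ref) for r, ref in pairs)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

# Toy example: 6 examinations (True = aneurysm present / called)
reference = [True, True, True, False, False, False]
reader = [True, True, False, False, False, True]
print(sensitivity_specificity(reader, reference))
```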

4.
JGH Open ; 8(9): e70018, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39253018

ABSTRACT

Background and Aims: The utilization of artificial intelligence (AI) with computer-aided detection (CADe) has the potential to increase the adenoma detection rate (ADR) by up to 30% in expert settings and specialized centers. The impact of CADe on serrated polyp detection rates (SDR) and on academic trainees' ADR and SDR remains underexplored. We aim to investigate the effect of CADe on ADR and SDR at an academic center with various levels of provider experience. Methods: A single-center retrospective analysis was conducted on asymptomatic patients between the ages of 45 and 75 who underwent screening colonoscopy. Colonoscopy reports were reviewed for 3 months prior to the introduction of GI Genius™ (Medtronic, USA) and 3 months after its implementation. The primary outcome was ADR and SDR with and without CADe. Results: In total, 658 colonoscopies were eligible for analysis. CADe resulted in a statistically significant improvement in SDR from 8.92% to 14.1% (P = 0.037). The combined ADR + SDR with and without CADe was 58% and 55.1%, respectively (P = 0.46). Average colonoscopy withdrawal time was 17.33 min (SD 10) with the device compared with 17.35 min (SD 9) without the device (P = 0.98). Conclusion: In this study, GI Genius™ was associated with a statistically significant increase in SDR alone, but not in ADR or ADR + SDR, likely secondary to the more elusive nature of serrated polyps compared with adenomatous polyps. The use of CADe did not affect withdrawal time.
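
For illustration only (with made-up counts, not the study's raw data), a short sketch of how a detection-rate change such as the SDR difference above could be compared before and after CADe with a chi-square test.

```python
import numpy as np
from scipy.stats import chi2_contingency

def detection_rate_comparison(detected_with, total_with, detected_without, total_without):
    """Compare a detection rate (e.g. SDR) with vs without CADe via a chi-square test."""
    table = np.array([
        [detected_with, total_with - detected_with],
        [detected_without, total_without - detected_without],
    ])
    chi2, p, dof, expected = chi2_contingency(table)
    return detected_with / total_with, detected_without / total_without, p

# Hypothetical counts chosen only to mirror rates of ~14.1% vs ~8.9%
print(detection_rate_comparison(47, 333, 29, 325))
```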

5.
Dig Dis Sci ; 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39285090

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has emerged as a promising tool for detecting and characterizing colorectal polyps during colonoscopy, offering potential enhancements to traditional colonoscopy procedures to improve outcomes in patients with inadequate bowel preparation. AIMS: This study aimed to assess the impact of an AI tool providing computer-aided detection (CADe) assistance during colonoscopy in this population. METHODS: This case-control study utilized propensity score matching (PSM) for age, sex, race, and colonoscopy indication to analyze a database of patients who underwent colonoscopy at a single tertiary referral center between 2017 and 2023. Patients were excluded if the procedure was incomplete or aborted owing to poor preparation. The patients were categorized based on the use of AI during colonoscopy. Data on patient demographics and colonoscopy performance metrics were collected. Univariate and multivariate logistic regression models were used to compare the groups. RESULTS: After PSM, among patients with adequately prepped colonoscopies (n = 1466), the likelihood of detecting hyperplastic polyps (OR = 2.0, 95%CI 1.7-2.5, p < 0.001), adenomas (OR = 1.47, 95%CI 1.19-1.81, p < 0.001), and sessile serrated polyps (OR = 1.90, 95%CI 1.20-3.03, p = 0.007) significantly increased with the inclusion of CADe. In inadequately prepped patients (n = 160), CADe had a more pronounced impact on the polyp detection rate (OR = 4.34, 95%CI 1.6-6.16, p = 0.049) and on adenoma detection (OR = 2.9, 95%CI 2.20-8.57, p < 0.001), with a marginal increase in withdrawal and procedure times. CONCLUSION: This study highlights the significant improvement in detecting diminutive polyps (< 5 mm) and sessile polyps using CADe, although notably this benefit was only seen in patients with adequate bowel preparation. In conclusion, the integration of AI-driven CADe in colonoscopy promises to enhance lesion detection and diagnosis, improve the procedure's effectiveness, and improve patient outcomes.
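
A rough sketch, under simplifying assumptions and with synthetic data, of the kind of 1:1 nearest-neighbour propensity score matching described in the methods (the authors' actual matching specification is not reproduced here).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(covariates, treated, caliper=0.05, seed=0):
    """Greedy 1:1 nearest-neighbour matching on the propensity score.
    covariates: (n, k) array encoding age, sex, race, indication
    treated:    (n,) boolean array (True = AI-assisted colonoscopy)."""
    ps = LogisticRegression(max_iter=1000).fit(covariates, treated).predict_proba(covariates)[:, 1]
    rng = np.random.default_rng(seed)
    treated_idx = rng.permutation(np.where(treated)[0])
    controls = list(np.where(~treated)[0])
    pairs = []
    for t in treated_idx:
        if not controls:
            break
        d = np.abs(ps[controls] - ps[t])
        j = int(np.argmin(d))
        if d[j] <= caliper:                      # only accept matches within the caliper
            pairs.append((int(t), controls.pop(j)))
    return pairs

# Synthetic usage only
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
z = rng.random(200) < 0.4
print(len(propensity_match(X, z)), "matched pairs")
```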

6.
Sci Rep ; 14(1): 20711, 2024 09 05.
Article in English | MEDLINE | ID: mdl-39237689

ABSTRACT

Tuberculosis (TB) is the leading cause of mortality among infectious diseases globally. Effectively managing TB requires early identification of individuals with TB disease. Resource-constrained settings often lack skilled professionals for interpreting chest X-rays (CXRs) used in TB diagnosis. To address this challenge, we developed "DecXpert", a novel Computer-Aided Detection (CAD) software solution based on deep neural networks for early TB diagnosis from CXRs, aiming to detect subtle abnormalities that may be overlooked by human interpretation alone. This study was conducted on the largest cohort size to date, in which the performance of the CAD software (DecXpert version 1.4) was validated against the gold-standard molecular diagnostic technique, GeneXpert MTB/RIF, analyzing data from 4363 individuals across 12 primary health care centers and one tertiary hospital in North India. DecXpert demonstrated 88% sensitivity (95% CI 0.85-0.93) and 85% specificity (95% CI 0.82-0.91) for active TB detection. Incorporating demographics, DecXpert achieved an area under the curve of 0.91 (95% CI 0.88-0.94), indicating robust diagnostic performance. Our findings establish DecXpert's potential as an accurate, efficient AI solution for early identification of active TB cases. Deployed as a screening tool in resource-limited settings, DecXpert could enable early identification of individuals with TB disease and facilitate effective TB management where skilled radiological interpretation is limited.


Subjects
Software; Humans; India/epidemiology; Female; Male; Adult; Middle Aged; Diagnosis, Computer-Assisted/methods; Tuberculosis/diagnosis; Tuberculosis/diagnostic imaging; Tuberculosis, Pulmonary/diagnostic imaging; Tuberculosis, Pulmonary/diagnosis; Sensitivity and Specificity; Young Adult; Adolescent; Radiography, Thoracic/methods; Aged
7.
JMIR Form Res ; 8: e55641, 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39167435

ABSTRACT

BACKGROUND: Artificial intelligence (AI) based computer-aided detection devices are recommended for screening and triaging of pulmonary tuberculosis (TB) using digital chest x-ray (CXR) images (soft copies). Most AI algorithms are trained using input data from digital CXR Digital Imaging and Communications in Medicine (DICOM) files. There can be scenarios in which only digital CXR films (hard copies) are available for interpretation; in such a scenario, a smartphone-captured photo of the digital CXR film may be used as the AI input. There is a gap in the literature on whether AI performance differs significantly between digital CXR DICOM files and photos of digital CXR films used as input. OBJECTIVE: The primary objective was to compare the agreement of AI with human readers in detecting radiological signs of TB when using DICOM files (denoted as CXRd) as input versus when using smartphone-captured photos of digital CXR films (denoted as CXRp). METHODS: Pairs of CXRd and CXRp images were obtained retrospectively from patients screened for TB. AI results were obtained using both the CXRd and CXRp files. The majority consensus on the presence or absence of TB in CXR pairs was obtained from a panel of 3 independent radiologists. The positive and negative percent agreement of AI in detecting radiological signs of TB in CXRd and CXRp were estimated by comparison with the majority consensus. The distribution of AI probability scores was also compared. RESULTS: A total of 1278 CXR pairs were analyzed. The positive percent agreement of AI was 92.22% (95% CI 89.94-94.12) and 90.75% (95% CI 88.32-92.82), respectively, for CXRd and CXRp images (P=.09). The negative percent agreement of AI was 82.08% (95% CI 78.76-85.07) and 79.23% (95% CI 75.75-82.42), respectively, for CXRd and CXRp images (P=.06). The median AI probability score was 0.72 (IQR 0.11-0.97) in CXRd and 0.72 (IQR 0.14-0.96) in CXRp images (P=.75). CONCLUSIONS: We did not observe any statistically significant differences in the output of AI between digital CXRs and photos of digital CXR films.
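
A small, self-contained sketch (not the study's analysis code) of the two measurements described: a 3-reader majority consensus per CXR pair and the positive/negative percent agreement of the AI calls with that consensus.

```python
from collections import Counter

def majority_consensus(reads):
    """Majority vote of an odd number of radiologist reads (True = TB signs present)."""
    return Counter(reads).most_common(1)[0][0]

def percent_agreement(ai_calls, consensus_calls):
    """Positive and negative percent agreement of AI with the consensus."""
    pos = [a for a, c in zip(ai_calls, consensus_calls) if c]
    neg = [a for a, c in zip(ai_calls, consensus_calls) if not c]
    ppa = 100 * sum(pos) / len(pos) if pos else float("nan")
    npa = 100 * sum(not a for a in neg) / len(neg) if neg else float("nan")
    return ppa, npa

# Toy example: three reads per CXR pair, then agreement of the AI on the DICOM input
reads_per_case = [[True, True, False], [False, False, False], [True, True, True]]
consensus = [majority_consensus(r) for r in reads_per_case]
ai_dicom = [True, False, True]
print(percent_agreement(ai_dicom, consensus))
```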

8.
Dig Dis ; : 1-9, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39102801

ABSTRACT

INTRODUCTION: Esophagogastroduodenoscopy is the most important tool to detect gastric cancer (GC). In this study, we developed a computer-aided detection (CADe) system to detect GC in white light imaging (WLI) and linked color imaging (LCI) modes and aimed to compare the performance of the CADe system with that of endoscopists. METHODS: The system was developed with a deep learning framework using 9,021 images from 385 patients collected between 2017 and 2020. A total of 116 LCI and WLI videos from 110 patients between 2017 and 2023 were used to evaluate per-case sensitivity and per-frame specificity. RESULTS: The per-case sensitivity and per-frame specificity of CADe with a confidence level of 0.5 in detecting GC were 78.6% and 93.4% for WLI and 94.0% and 93.3% for LCI, respectively (p < 0.001). The per-case sensitivities of nonexpert endoscopists for WLI and LCI were 45.8% and 80.4%, whereas those of expert endoscopists were 66.7% and 90.6%, respectively. Regarding detectability between CADe and endoscopists, the per-case sensitivities for WLI and LCI were 78.6% and 94.0% for CADe, respectively, which were significantly higher than those for LCI in experts (90.6%, p = 0.004) and those for WLI and LCI in nonexperts (45.8% and 80.4%, respectively, p < 0.001); however, no significant difference for WLI was observed between CADe and experts (p = 0.134). CONCLUSIONS: Our CADe system showed significantly better sensitivity in detecting GC when used in LCI mode compared with WLI mode. Moreover, the sensitivity of CADe using LCI is significantly higher than that of expert endoscopists using LCI to detect GC.

9.
Radiol Artif Intell ; 6(5): e230391, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39140867

ABSTRACT

Purpose To develop a deep learning algorithm that uses temporal information to improve the performance of a previously published framework of cancer lesion detection for digital breast tomosynthesis. Materials and Methods This retrospective study analyzed the current and the 1-year-prior Hologic digital breast tomosynthesis screening examinations from eight different institutions between 2016 and 2020. The dataset contained 973 cancer and 7123 noncancer cases. The front end of this algorithm was an existing deep learning framework that performed single-view lesion detection followed by ipsilateral view matching. For this study, PriorNet was implemented as a cascaded deep learning module that used the additional growth information to refine the final probability of malignancy. Data from seven of the eight sites were used for training and validation, while the eighth site was reserved for external testing. Model performance was evaluated using localization receiver operating characteristic curves. Results On the validation set, PriorNet showed an area under the receiver operating characteristic curve (AUC) of 0.931 (95% CI: 0.930, 0.931), which outperformed both baseline models using single-view detection (AUC, 0.892 [95% CI: 0.891, 0.892]; P < .001) and ipsilateral matching (AUC, 0.915 [95% CI: 0.914, 0.915]; P < .001). On the external test set, PriorNet achieved an AUC of 0.896 (95% CI: 0.885, 0.896), outperforming both baselines (AUC, 0.846 [95% CI: 0.846, 0.847]; P < .001 and AUC, 0.865 [95% CI: 0.865, 0.866]; P < .001, respectively). In the high sensitivity range of 0.9 to 1.0, the partial AUC of PriorNet was significantly higher (P < .001) relative to both baselines. Conclusion PriorNet using temporal information further improved the breast cancer detection performance of an existing digital breast tomosynthesis cancer detection framework. Keywords: Digital Breast Tomosynthesis, Computer-aided Detection, Breast Cancer, Deep Learning © RSNA, 2024 See also commentary by Lee in this issue.
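
For readers unfamiliar with the high-sensitivity partial AUC comparison mentioned above, the sketch below (hypothetical code, synthetic data, and one common definition of partial AUC restricted to sensitivity 0.9-1.0) shows how such a figure can be computed.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score, auc

def partial_auc_high_sensitivity(y_true, y_score, tpr_min=0.9):
    """Area under the ROC segment where sensitivity (TPR) is at least tpr_min."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    keep = tpr >= tpr_min
    return auc(fpr[keep], tpr[keep])     # trapezoidal area over the retained segment

# Synthetic scores for a mediocre detector, for illustration only
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
scores = y * 0.6 + rng.random(500) * 0.8
print("AUC:", roc_auc_score(y, scores),
      "partial AUC (TPR 0.9-1.0):", partial_auc_high_sensitivity(y, scores))
```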


Subjects
Breast Neoplasms; Deep Learning; Mammography; Humans; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/diagnosis; Female; Mammography/methods; Retrospective Studies; Radiographic Image Interpretation, Computer-Assisted/methods; Algorithms; Middle Aged
10.
Clin Infect Dis ; 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39190813

ABSTRACT

BACKGROUND: To improve tuberculosis case-finding, rapid, non-sputum triage tests need to be developed according to the World Health Organization target product profile (TPP) (>90% sensitivity, >70% specificity). We prospectively evaluated and compared artificial intelligence-based computer-aided detection software, CAD4TBv7, and a C-reactive protein assay (CRP) as triage tests at health facilities in Lesotho and South Africa. METHODS: Adults (≥18 years) presenting with ≥1 of the 4 cardinal tuberculosis symptoms were consecutively recruited between February 2021 and April 2022. After informed consent, each participant underwent a digital chest X-ray for CAD4TBv7 and a CRP test. Participants provided 1 sputum sample for Xpert MTB/RIF Ultra and Xpert MTB/RIF and 1 for liquid culture. Additionally, an expert radiologist read the chest X-rays via teleradiology. For the primary analysis, a composite microbiological reference standard (ie, positive culture or Xpert Ultra) was used. RESULTS: We enrolled 1392 participants; 48% were people with HIV and 24% had previous tuberculosis. The receiver operating characteristic curves for CAD4TBv7 and CRP showed areas under the curve of .87 (95% CI: .84-.91) and .80 (95% CI: .76-.84), respectively. At thresholds corresponding to 90% sensitivity, specificity was 68.2% (95% CI: 65.4-71.0%) and 38.2% (95% CI: 35.3-41.1%) for CAD4TBv7 and CRP, respectively. CAD4TBv7 detected tuberculosis as well as an expert radiologist and almost met the TPP criteria for tuberculosis triage. CONCLUSIONS: CAD4TBv7 is accurate as a triage test for patients with tuberculosis symptoms in areas with a high tuberculosis and HIV burden. The role of CRP in tuberculosis triage requires further research. CLINICAL TRIALS REGISTRATION: Clinicaltrials.gov identifier: NCT04666311.
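
To make the threshold-setting step concrete, here is a minimal, hypothetical sketch (synthetic scores, not trial data) of picking the operating point that reaches the WHO TPP sensitivity target of 90% and reading off the corresponding specificity.

```python
import numpy as np
from sklearn.metrics import roc_curve

def specificity_at_sensitivity(y_true, y_score, target_sensitivity=0.90):
    """Highest threshold whose sensitivity meets the target, with its specificity."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    i = np.where(tpr >= target_sensitivity)[0][0]   # first operating point meeting the target
    return thresholds[i], tpr[i], 1.0 - fpr[i]

# Synthetic triage scores: 1 = culture/Xpert-positive, 0 = negative
rng = np.random.default_rng(42)
y = np.concatenate([np.ones(100), np.zeros(400)])
scores = np.concatenate([rng.normal(0.7, 0.15, 100), rng.normal(0.4, 0.15, 400)])
thr, sens, spec = specificity_at_sensitivity(y, scores)
print(f"threshold={thr:.2f}  sensitivity={sens:.2f}  specificity={spec:.2f}")
```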

11.
Radiol Phys Technol ; 17(3): 725-738, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39048847

ABSTRACT

In this study, we investigated the application of distributed learning, including federated learning and cyclical weight transfer, to the development of computer-aided detection (CADe) software for (1) cerebral aneurysm detection in magnetic resonance (MR) angiography images and (2) brain metastasis detection in brain contrast-enhanced MR images. We used datasets collected from various institutions, scanner vendors, and magnetic field strengths for each target CADe software. We compared the performance of multiple strategies, including a centralized strategy in which software development is conducted at a development institution after collecting de-identified data from multiple institutions. Our results showed that the performance of CADe software trained through distributed learning was equal to or better than that trained through the centralized strategy. However, the distributed learning strategy that achieved the highest performance depended on the target CADe software. Hence, distributed learning can become one of the strategies for CADe software development using data collected from multiple institutions.
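
The sketch below (a toy illustration, not the authors' implementation) contrasts the two distributed strategies named above: federated averaging, where a server averages locally updated weights, and cyclical weight transfer, where weights are handed from site to site without central pooling; the local training step is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, site_data, lr=0.01):
    """Placeholder for one institution's local training pass on its own images."""
    # A real implementation would run SGD on site_data; here we only perturb the weights.
    return {k: w - lr * rng.normal(size=w.shape) for k, w in weights.items()}

def federated_round(global_weights, sites):
    """One FedAvg round: every site trains locally, the server averages the results."""
    locals_ = [local_update(global_weights, s) for s in sites]
    return {k: np.mean([lw[k] for lw in locals_], axis=0) for k in global_weights}

def cyclical_weight_transfer(weights, sites, cycles=2):
    """Weights visit each site in sequence; no central averaging takes place."""
    for _ in range(cycles):
        for s in sites:
            weights = local_update(weights, s)
    return weights

w0 = {"conv1": np.zeros((3, 3)), "fc": np.zeros(10)}
sites = ["site_A", "site_B", "site_C"]            # stand-ins for institutional datasets
print(federated_round(w0, sites)["fc"].shape, cyclical_weight_transfer(w0, sites)["fc"].shape)
```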


Subjects
Intracranial Aneurysm; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Intracranial Aneurysm/diagnostic imaging; Image Processing, Computer-Assisted/methods; Software; Brain Neoplasms/diagnostic imaging; Head/diagnostic imaging; Machine Learning; Automation
12.
Phys Med ; 124: 103433, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39002423

ABSTRACT

PURPOSE: Early detection of breast cancer has a significant effect on reducing its mortality rate. For this purpose, automated three-dimensional breast ultrasound (3-D ABUS) has recently been used alongside mammography. The 3-D volume produced by this imaging system includes many slices. The radiologist must review all the slices to find the mass, a time-consuming task with a high probability of mistakes. Therefore, many computer-aided detection (CADe) systems have been developed to assist radiologists in this task. In this paper, we propose a novel CADe system for mass detection in 3-D ABUS images. METHODS: The proposed system includes two cascaded convolutional neural networks. The goal of the first network is to achieve the highest possible sensitivity, and the second network's goal is to reduce false positives while maintaining high sensitivity. Both networks use an improved version of the 3-D U-Net architecture in which two types of modified Inception modules are used in the encoder section. In the second network, new attention units are also added to the skip connections, which receive the results of the first network as saliency maps. RESULTS: The system was evaluated on a dataset containing 60 3-D ABUS volumes from 43 patients and 55 masses. A sensitivity of 91.48% and a mean of 8.85 false positives per patient were achieved. CONCLUSIONS: The suggested mass detection system is fully automatic, without any user interaction. The results indicate that the CADe system outperforms competing techniques in sensitivity and mean FPs per patient.
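
To show how the two headline figures relate, here is a small, hypothetical evaluation sketch (not the authors' code) computing per-lesion sensitivity and mean false positives per patient from candidate detections and ground-truth mass locations.

```python
import numpy as np

def detection_performance(candidates_per_patient, masses_per_patient, match):
    """Per-lesion sensitivity and mean false positives per patient.
    match(candidate, lesion) -> bool decides whether a candidate hits a lesion."""
    tp, total_lesions, fp = 0, 0, 0
    for cands, lesions in zip(candidates_per_patient, masses_per_patient):
        total_lesions += len(lesions)
        tp += sum(any(match(c, l) for c in cands) for l in lesions)
        fp += sum(not any(match(c, l) for l in lesions) for c in cands)
    sensitivity = tp / total_lesions if total_lesions else float("nan")
    mean_fp = fp / len(candidates_per_patient)
    return sensitivity, mean_fp

# Toy example: candidates and lesions as 3-D centre coordinates, matched within 5 voxels
match = lambda c, l: np.linalg.norm(np.array(c) - np.array(l)) < 5
cands = [[(10, 10, 10), (40, 40, 40)], [(5, 5, 5)]]
gt = [[(11, 10, 9)], [(30, 30, 30)]]
print(detection_performance(cands, gt, match))
```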


Subjects
Breast Neoplasms; Imaging, Three-Dimensional; Neural Networks, Computer; Ultrasonography, Mammary; Humans; Imaging, Three-Dimensional/methods; Breast Neoplasms/diagnostic imaging; Ultrasonography, Mammary/methods; Ultrasonography, Mammary/instrumentation; Female; Automation
13.
Cancers (Basel) ; 16(13)2024 Jun 29.
Article in English | MEDLINE | ID: mdl-39001465

ABSTRACT

The early detection of pancreatic ductal adenocarcinoma (PDAC) is essential for optimal treatment of pancreatic cancer patients. We propose a tumor detection framework to improve the detection of pancreatic head tumors on CT scans. In this retrospective research study, CT images of 99 patients with pancreatic head cancer and 98 control cases from the Catharina Hospital Eindhoven were collected. A multi-stage 3D U-Net-based approach was used for PDAC detection, including clinically significant secondary features such as pancreatic duct and common bile duct dilation. The developed algorithm was evaluated using a local test set comprising 59 CT scans. The model was externally validated on 28 pancreatic cancer cases from a publicly available Medical Decathlon dataset. The tumor detection framework achieved a sensitivity of 0.97 and a specificity of 1.00, with an area under the receiver operating characteristic curve (AUROC) of 0.99, in detecting pancreatic head cancer in the local test set. In the external test set, we obtained similar results, with a sensitivity of 1.00. The model provided the tumor location with acceptable accuracy, obtaining a Dice similarity coefficient (DSC) of 0.37. This study shows that a tumor detection framework utilizing CT scans and secondary signs of pancreatic cancer can detect pancreatic tumors with high accuracy.

14.
J Med Imaging (Bellingham) ; 11(4): 045501, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38988989

ABSTRACT

Purpose: Radiologists are tasked with visually scrutinizing large amounts of data produced by 3D volumetric imaging modalities. Small signals can go unnoticed during the 3D search because they are hard to detect in the visual periphery. Recent advances in machine learning and computer vision have led to effective computer-aided detection (CADe) support systems with the potential to mitigate perceptual errors. Approach: Sixteen nonexpert observers searched through digital breast tomosynthesis (DBT) phantoms and single cross-sectional slices of the DBT phantoms. The 3D/2D searches occurred with and without a convolutional neural network (CNN)-based CADe support system. The model provided observers with bounding boxes superimposed on the image stimuli while they looked for a small microcalcification signal and a large mass signal. Eye gaze positions were recorded and correlated with changes in the area under the ROC curve (AUC). Results: The CNN-CADe improved the 3D search for the small microcalcification signal (ΔAUC = 0.098, p = 0.0002) and the 2D search for the large mass signal (ΔAUC = 0.076, p = 0.002). The CNN-CADe benefit in 3D for the small signal was markedly greater than in 2D (ΔΔAUC = 0.066, p = 0.035). Analysis of individual differences suggests that those who explored the least with eye movements benefited the most from the CNN-CADe (r = -0.528, p = 0.036). However, for the large signal, the 2D benefit was not significantly greater than the 3D benefit (ΔΔAUC = 0.033, p = 0.133). Conclusion: The CNN-CADe brings unique performance benefits to the 3D (versus 2D) search of small signals by reducing errors caused by the underexploration of the volumetric data.

15.
EBioMedicine ; 104: 105183, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38848616

ABSTRACT

BACKGROUND: Contrast-enhanced CT scans provide a means to detect unsuspected colorectal cancer. However, colorectal cancers in contrast-enhanced CT without bowel preparation may elude detection by radiologists. We aimed to develop a deep learning (DL) model for accurate detection of colorectal cancer, and to evaluate whether it could improve the detection performance of radiologists. METHODS: We developed a DL model using a manually annotated dataset (1196 cancer vs 1034 normal). The DL model was tested using an internal test set (98 vs 115), two external test sets (202 vs 265 in 1, and 252 vs 481 in 2), and a real-world test set (53 vs 1524). We compared the detection performance of the DL model with radiologists, and evaluated its capacity to enhance radiologists' detection performance. FINDINGS: In the four test sets, the DL model had areas under the receiver operating characteristic curve (AUCs) ranging between 0.957 and 0.994. In both the internal test set and external test set 1, the DL model yielded higher accuracy than that of radiologists (97.2% vs 86.0%, p < 0.0001; 94.9% vs 85.3%, p < 0.0001), and significantly improved the accuracy of radiologists (93.4% vs 86.0%, p < 0.0001; 93.6% vs 85.3%, p < 0.0001). In the real-world test set, the DL model delivered sensitivity comparable to that of radiologists who had been informed about clinical indications for most cancer cases (94.3% vs 96.2%, p > 0.99), and it detected 2 cases that had been missed by radiologists. INTERPRETATION: The developed DL model can accurately detect colorectal cancer and improve radiologists' detection performance, showing its potential as an effective computer-aided detection tool. FUNDING: This study was supported by National Science Fund for Distinguished Young Scholars of China (No. 81925023); Regional Innovation and Development Joint Fund of National Natural Science Foundation of China (No. U22A20345); National Natural Science Foundation of China (No. 82072090 and No. 82371954); Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (No. 2022B1212010011); High-level Hospital Construction Project (No. DFJHBF202105).


Subjects
Colorectal Neoplasms; Contrast Media; Deep Learning; Tomography, X-Ray Computed; Humans; Colorectal Neoplasms/diagnostic imaging; Colorectal Neoplasms/diagnosis; Female; Male; Retrospective Studies; Tomography, X-Ray Computed/methods; Middle Aged; Aged; ROC Curve; Adult; Aged, 80 and over
16.
JMIR AI ; 3: e52211, 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38875574

ABSTRACT

BACKGROUND: Many promising artificial intelligence (AI) and computer-aided detection and diagnosis systems have been developed, but few have been successfully integrated into clinical practice. This is partially owing to a lack of user-centered design of AI-based computer-aided detection or diagnosis (AI-CAD) systems. OBJECTIVE: We aimed to assess the impact of different onboarding tutorials and levels of AI model explainability on radiologists' trust in AI and the use of AI recommendations in lung nodule assessment on computed tomography (CT) scans. METHODS: In total, 20 radiologists from 7 Dutch medical centers performed lung nodule assessment on CT scans under different conditions in a simulated use study as part of a 2×2 repeated-measures quasi-experimental design. Two types of AI onboarding tutorials (reflective vs informative) and 2 levels of AI output (black box vs explainable) were designed. The radiologists first received an onboarding tutorial that was either informative or reflective. Subsequently, each radiologist assessed 7 CT scans, first without AI recommendations. AI recommendations were shown to the radiologist, and they could adjust their initial assessment. Half of the participants received the recommendations via black box AI output and half received explainable AI output. Mental model and psychological trust were measured before onboarding, after onboarding, and after assessing the 7 CT scans. We recorded whether radiologists changed their assessment on found nodules, malignancy prediction, and follow-up advice for each CT assessment. In addition, we analyzed whether radiologists' trust in their assessments had changed based on the AI recommendations. RESULTS: Both variations of onboarding tutorials resulted in a significantly improved mental model of the AI-CAD system (informative P=.01 and reflective P=.01). After using AI-CAD, psychological trust significantly decreased for the group with explainable AI output (P=.02). On the basis of the AI recommendations, radiologists changed the number of reported nodules in 27 of 140 assessments, malignancy prediction in 32 of 140 assessments, and follow-up advice in 12 of 140 assessments. The changes were mostly an increased number of reported nodules, a higher estimated probability of malignancy, and earlier follow-up. The radiologists' confidence in their found nodules changed in 82 of 140 assessments, in their estimated probability of malignancy in 50 of 140 assessments, and in their follow-up advice in 28 of 140 assessments. These changes were predominantly increases in confidence. The number of changed assessments and radiologists' confidence did not significantly differ between the groups that received different onboarding tutorials and AI outputs. CONCLUSIONS: Onboarding tutorials help radiologists gain a better understanding of AI-CAD and facilitate the formation of a correct mental model. If AI explanations do not consistently substantiate the probability of malignancy across patient cases, radiologists' trust in the AI-CAD system can be impaired. Radiologists' confidence in their assessments was improved by using the AI recommendations.

17.
Cureus ; 16(4): e58400, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38756258

ABSTRACT

Artificial intelligence (AI) has the ability to completely transform the healthcare industry by enhancing diagnosis, treatment, and resource allocation. However, it also presents ethical and practical issues that must be carefully addressed to ensure patient safety and equitable access to healthcare. To realize its full potential, the ethical issues around data privacy, bias, and transparency, as well as the practical difficulties posed by workforce adaptability and statutory frameworks, must be addressed. While there is growing knowledge about the advantages of AI in healthcare, there is a significant lack of knowledge about the ethical and practical issues that come with its application, particularly in the setting of emergency and critical care. The majority of current research tends to concentrate on the benefits of AI, but thorough studies that investigate the potential disadvantages and ethical issues are scarce. The purpose of our article is to identify and examine the ethical and practical difficulties that arise when implementing AI in emergency medicine and critical care, to provide solutions to these issues, and to give suggestions to healthcare professionals and policymakers. To integrate AI responsibly and successfully in these important healthcare domains, policymakers and healthcare professionals must collaborate to create strong regulatory frameworks, safeguard data privacy, mitigate bias, and give healthcare workers the necessary training.

18.
Technol Health Care ; 32(S1): 125-133, 2024.
Article in English | MEDLINE | ID: mdl-38759043

ABSTRACT

BACKGROUND: Transrectal ultrasound-guided prostate biopsy is the gold-standard diagnostic test for prostate cancer, but it is an invasive, non-targeted puncture examination with a high false-negative rate. OBJECTIVE: In this study, we aimed to develop a computer-assisted prostate cancer diagnosis method based on multiparametric MRI (mpMRI) images. METHODS: We retrospectively collected 106 patients who underwent radical prostatectomy after diagnosis by prostate biopsy. mpMRI images, including T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) sequences, were analyzed accordingly. We extracted regions of interest (ROIs) for the tumor and benign areas on the three sequential MRI axial images at the same level. ROI data from 433 mpMRI images were obtained, of which 202 were benign and 231 were malignant. Of those, 50 benign and 50 malignant images were used for training, and the remaining 333 images were used for verification. Five main feature groups, including histogram, GLCM, GLGCM, wavelet-based multi-fractional Brownian motion features, and Minkowski functional features, were extracted from the mpMRI images. The selected feature parameters were analyzed in MATLAB, and the three classification methods with the highest accuracy were selected. RESULTS: For prostate cancer identification based on mpMRI images, the system, using 58 texture features and 3 classification algorithms (Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Ensemble Learning (EL)), performed well. In the T2WI-based classification results, the SVM achieved the optimal accuracy and AUC values of 64.3% and 0.67. In the DCE-based classification results, the SVM achieved the optimal accuracy and AUC values of 72.2% and 0.77. In the DWI-based classification results, ensemble learning achieved the optimal accuracy and AUC values of 75.1% and 0.82. In the classification results based on all data combinations, the SVM achieved the optimal accuracy and AUC values of 66.4% and 0.73. CONCLUSION: The proposed computer-aided diagnosis system provides a good assessment of prostate cancer diagnosis, which may reduce the burden on radiologists and improve the early diagnosis of prostate cancer.
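
The study's analysis was done in MATLAB; purely as an illustration of the same idea (GLCM texture features feeding an SVM classifier), here is a hypothetical Python sketch on synthetic patches (assumes scikit-image >= 0.19 for graycomatrix/graycoprops).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def glcm_features(roi, levels=32):
    """A few GLCM texture features from one grey-level ROI."""
    q = (roi / roi.max() * (levels - 1)).astype(np.uint8)      # quantise grey levels
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Synthetic ROIs standing in for T2WI/DWI/DCE tumour and benign patches
rng = np.random.default_rng(0)
rois = [rng.integers(0, 255, (32, 32)) for _ in range(40)]
labels = np.array([0] * 20 + [1] * 20)            # 0 = benign, 1 = malignant (synthetic)
X = np.array([glcm_features(r) for r in rois])
print("5-fold CV accuracy:", cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
```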


Subjects
Diagnosis, Computer-Assisted; Prostatic Neoplasms; Humans; Male; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Prostatic Neoplasms/diagnosis; Retrospective Studies; Middle Aged; Aged; Diagnosis, Computer-Assisted/methods; Early Detection of Cancer/methods; Multiparametric Magnetic Resonance Imaging/methods; Magnetic Resonance Imaging/methods
19.
Clin Chest Med ; 45(2): 249-261, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38816086

ABSTRACT

Early detection with accurate classification of solid pulmonary nodules is critical in reducing lung cancer morbidity and mortality. Computed tomography (CT) remains the most widely used imaging examination for pulmonary nodule evaluation; however, other imaging modalities, such as PET/CT and MRI, are increasingly used for nodule characterization. Current advances in solid nodule imaging are largely due to developments in machine learning, including automated nodule segmentation and computer-aided detection. This review explores current multi-modality solid pulmonary nodule detection and characterization with discussion of radiomics and risk prediction models.


Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Tomography, X-Ray Computed; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/diagnosis; Lung Neoplasms/pathology; Solitary Pulmonary Nodule/diagnostic imaging; Positron Emission Tomography Computed Tomography; Magnetic Resonance Imaging; Multiple Pulmonary Nodules/diagnostic imaging; Early Detection of Cancer/methods
20.
Phys Med ; 121: 103344, 2024 May.
Article in English | MEDLINE | ID: mdl-38593627

ABSTRACT

PURPOSE: To validate the performance of computer-aided detection (CAD) and volumetry software using an anthropomorphic phantom with a ground truth (GT) set of 3D-printed nodules. METHODS: The Kyoto Kagaku Lungman phantom, containing 3D-printed solid nodules of six diameters (4 to 9 mm) and three morphologies (smooth, lobulated, spiculated), was scanned at varying CTDIvol levels (6.04, 1.54 and 0.20 mGy). Combinations of reconstruction algorithms (iterative and deep learning image reconstruction) and kernels (soft and hard) were applied. Detection, volumetry, and density results recorded by a commercially available AI-based algorithm (AVIEW LCS+) were compared to the absolute GT, which was determined through µCT scanning at 50 µm resolution. The associations between image acquisition parameters or nodule characteristics and the accuracy of nodule detection and characterization were analyzed with chi-square tests and multiple linear regression. RESULTS: High levels of detection sensitivity and precision (at least 83% and 91%, respectively) were observed across all acquisitions. Neither reconstruction algorithm nor radiation dose showed significant associations with detection. Nodule diameter, however, showed a highly significant association with detection (p < 0.0001). Volumetric measurements for nodules > 6 mm were accurate within a 10% absolute range of the GT volume, regardless of dose and reconstruction. Nodule diameter and morphology are major determinants of volumetric accuracy (p < 0.001). Density assignment was not significantly influenced by any parameter. CONCLUSIONS: Our study confirms the software's accurate performance in nodule volumetry, detection, and density characterization, with robustness to variations in CT imaging protocols. This study suggests the incorporation of similar phantom setups into quality assurance of CAD tools.
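
As a small worked example of the volumetric accuracy criterion above (hypothetical values, not the study's measurements), the sketch below flags whether a CAD volume estimate falls within the ±10% range of a ground-truth nodule volume.

```python
def volume_accuracy(measured_mm3, ground_truth_mm3, tolerance=0.10):
    """Relative volume error against the 3D-printed ground truth,
    flagged as acceptable if within the +/-10% range used in the study."""
    error = (measured_mm3 - ground_truth_mm3) / ground_truth_mm3
    return error, abs(error) <= tolerance

# Toy example: an 8 mm spherical nodule has a GT volume of roughly 268 mm^3
gt = 4 / 3 * 3.14159 * (8 / 2) ** 3
for measured in (250.0, 300.0):
    err, ok = volume_accuracy(measured, gt)
    print(f"measured={measured:.0f} mm^3  error={err:+.1%}  within 10%: {ok}")
```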


Subjects
Image Processing, Computer-Assisted; Phantoms, Imaging; Radiation Dosage; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods; Algorithms; Humans; Printing, Three-Dimensional; Software