Results 1 - 20 of 111

1.
J Imaging Inform Med ; 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38627268

ABSTRACT

Architectural distortion (AD) is one of the most common findings on mammograms, and it may represent not only cancer but also a lesion such as a radial scar that may have an associated cancer. AD accounts for 18-45% of missed cancers, and the positive predictive value of AD is approximately 74.5%. Early detection of AD leads to early diagnosis and treatment of the cancer and improves the overall prognosis. However, detection of AD is a challenging task. In this work, we propose a new approach for detecting architectural distortion in mammography images by combining preprocessing methods and a novel structure fusion attention model. The proposed structure-focused weighted orientation preprocessing method is composed of the original image, the architecture enhancement map, and the weighted orientation map, highlighting suspicious AD locations. The proposed structure fusion attention model captures the information from different channels and outperforms other models in terms of false positives and top sensitivity, which refers to the maximum sensitivity a model can achieve when the highest number of false positives is accepted, reaching a top sensitivity of 0.92 with only 0.6590 false positives per image. The findings suggest that the combination of preprocessing methods and a novel network architecture can lead to more accurate and reliable AD detection. Overall, the proposed approach offers a novel perspective on detecting ADs, and we believe that our method can be applied in clinical settings in the future, assisting radiologists in the early detection of ADs on mammography and ultimately leading to earlier treatment of breast cancer patients.
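As an illustration of the multi-channel input described above (original image, architecture enhancement map, weighted orientation map), the following sketch shows one plausible way to stack the three preprocessing outputs for a network; the function name and normalization are assumptions, not the authors' code.

```python
import numpy as np

def stack_preprocessed_channels(image, enhancement_map, orientation_map):
    """Stack the original mammogram, the architecture enhancement map, and the
    weighted orientation map into a single 3-channel network input.
    All inputs are assumed to be 2-D arrays of identical shape."""
    normalized = []
    for ch in (image, enhancement_map, orientation_map):
        ch = ch.astype(np.float32)
        rng = ch.max() - ch.min()
        # Scale each channel to [0, 1] so no single map dominates training.
        normalized.append((ch - ch.min()) / rng if rng > 0 else np.zeros_like(ch))
    return np.stack(normalized, axis=-1)  # shape: (H, W, 3)
```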

2.
Cancer Med ; 13(4): e7072, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38457220

ABSTRACT

BACKGROUND: Predictive analytics is gaining popularity as an aid to treatment planning for patients with bone metastases, whose expected survival should be considered. Decreased psoas muscle area (PMA), a morphometric indicator of suboptimal nutritional status, has been associated with mortality in various cancers but has never been integrated into current survival prediction algorithms (SPAs) for patients with skeletal metastases. This study investigates whether decreased PMA predicts worse survival in patients with extremity metastases and whether incorporating PMA into three modern SPAs (PATHFx, SORG-NG, and SORG-MLA) improves their performance. METHODS: One hundred eighty-five patients surgically treated for long-bone metastases between 2014 and 2019 were divided into three PMA tertiles (small, medium, and large) based on their psoas size on CT. Kaplan-Meier, multivariable regression, and Cox proportional hazards analyses were employed to compare survival between tertiles and examine factors associated with mortality. Logistic regression analysis was used to assess whether incorporating adjusted PMA values enhanced the three SPAs' discriminatory abilities. The clinical utility of incorporating PMA into these SPAs was evaluated by decision curve analysis (DCA). RESULTS: Patients with small PMA had worse 90-day and 1-year survival after surgery (log-rank test p < 0.001). Patients in the large PMA group had a higher chance of surviving 90 days (odds ratio, OR, 3.72, p = 0.02) and 1 year (OR 3.28, p = 0.004) than those in the small PMA group. All three SPAs had increased AUC after incorporation of adjusted PMA. DCA indicated increased net benefits at threshold probabilities >0.5 after the addition of adjusted PMA to these SPAs. CONCLUSIONS: Decreased PMA on CT is associated with worse survival in surgically treated patients with extremity metastases, even after controlling for three contemporary SPAs. Physicians should consider the additional prognostic value of PMA when assessing survival in patients being considered for operative management of extremity metastases.
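The decision curve analysis mentioned above rests on the standard net-benefit formula; the sketch below applies it to hypothetical predicted probabilities and outcomes, and is not the study's own code.

```python
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Standard net benefit at a given threshold probability:
    TP/n - FP/n * threshold / (1 - threshold)."""
    y_true = np.asarray(y_true)
    pred_pos = np.asarray(y_prob) >= threshold
    n = len(y_true)
    tp = np.sum(pred_pos & (y_true == 1))
    fp = np.sum(pred_pos & (y_true == 0))
    return tp / n - fp / n * threshold / (1.0 - threshold)

# Hypothetical comparison of an SPA with and without the PMA term at the
# threshold probabilities > 0.5 reported in the abstract:
# thresholds = np.arange(0.5, 0.95, 0.05)
# nb_with_pma = [net_benefit(y, prob_with_pma, t) for t in thresholds]
# nb_without  = [net_benefit(y, prob_without_pma, t) for t in thresholds]
```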


Assuntos
Neoplasias Ósseas , Músculos Psoas , Humanos , Músculos Psoas/diagnóstico por imagem , Estudos Retrospectivos , Prognóstico
3.
Histopathology ; 84(6): 983-1002, 2024 May.
Article in English | MEDLINE | ID: mdl-38288642

ABSTRACT

AIMS: Risk stratification of atypical ductal hyperplasia (ADH) and ductal carcinoma in situ (DCIS), diagnosed using breast biopsy, has great clinical significance. Clinical trials are currently exploring the possibility of active surveillance for low-risk lesions, whereas axillary lymph node staging may be considered during surgical planning for high-risk lesions. We aimed to develop a machine-learning algorithm based on whole-slide images of breast biopsy specimens and clinical information to predict the risk of upstaging to invasive breast cancer after wide excision. METHODS AND RESULTS: Patients diagnosed with ADH/DCIS on breast biopsy were included in this study, comprising 592 (740 slides) and 141 (198 slides) patients in the development and independent testing cohorts, respectively. Histological grading of the lesions was independently evaluated by two pathologists. Clinical information, including biopsy method, lesion size, and Breast Imaging Reporting and Data System (BI-RADS) classification of ultrasound and mammograms, were collected. Deep DCIS consisted of three deep neural networks to evaluate nuclear grade, necrosis, and stromal reactivity. Deep DCIS output comprised five parameters: total patches, lesion extent, Deep Grade, Deep Necrosis, and Deep Stroma. Deep DCIS highly correlated with the pathologists' evaluations of both slide- and patient-level labels. All five parameters of Deep DCIS were significantly associated with upstaging to invasive carcinoma in subsequent wide excisional specimens. Using multivariate logistic regression, Deep DCIS predicted upstaging to invasive carcinoma with an area under the curve (AUC) of 0.81, outperforming pathologists' evaluation (AUC, 0.71 and 0.69). After including clinical and hormone receptor status information, performance further improved (AUC, 0.87). This combined model retained its predictive power in two subgroup analyses: the first subgroup included unequivocal DCIS (excluding cases of ADH and DCIS suspicious for microinvasion) (AUC, 0.83), while the second excluded cases of high-grade DCIS (AUC, 0.81). The model was validated in an independent testing cohort (AUC, 0.81). CONCLUSION: This study demonstrated that deep-learning models can refine histological evaluation of ADH and DCIS on breast biopsies, which may help guide future treatment planning.
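The multivariate logistic regression described above combines the five Deep DCIS outputs with clinical information; a minimal sketch of that kind of model is shown below. The feature layout and function names are illustrative assumptions, not the study's implementation.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_upstaging_model(X_dev, y_dev, X_test, y_test):
    """X rows are patients; columns hold the five Deep DCIS parameters
    (total patches, lesion extent, Deep Grade, Deep Necrosis, Deep Stroma)
    plus clinical variables (biopsy method, lesion size, BI-RADS, HR status).
    y = 1 if the wide excision specimen was upstaged to invasive carcinoma."""
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_dev, y_dev)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    return model, auc
```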


Subjects
Breast Neoplasms, Ductal Breast Carcinoma, Noninfiltrating Intraductal Carcinoma, Deep Learning, Humans, Female, Noninfiltrating Intraductal Carcinoma/pathology, Breast/pathology, Breast Neoplasms/pathology, Biopsy, Necrosis/pathology, Ductal Breast Carcinoma/pathology, Retrospective Studies, Hyperplasia/pathology
4.
Nucl Med Commun ; 45(3): 196-202, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38165173

ABSTRACT

OBJECTIVES: A deep learning (DL) model using image data from pretreatment [18F]fluorodeoxyglucose ([18F]FDG)-PET or computed tomography (CT), augmented with a novel imaging augmentation approach, was developed for the early prediction of distant metastases in patients with locally advanced uterine cervical cancer. METHODS: This study used baseline [18F]FDG-PET/CT images of newly diagnosed uterine cervical cancer patients. Data from 186 and 25 patients were analyzed for the training and validation cohorts, respectively. All patients received chemoradiotherapy (CRT) and follow-up. PET and CT images were augmented using three-dimensional techniques. The proposed model employed DL to predict distant metastases. Receiver operating characteristic (ROC) curve analysis was performed to measure the model's predictive performance. RESULTS: The areas under the ROC curves of the training and validation cohorts were 0.818 and 0.830 for predicting distant metastasis, respectively. In the training cohort, the sensitivity, specificity, and accuracy were 80.0%, 78.0%, and 78.5%, whereas the sensitivity, specificity, and accuracy for distant failure in the validation cohort were 73.3%, 75.5%, and 75.2%, respectively. CONCLUSION: Using baseline [18F]FDG-PET/CT images, the proposed DL model can predict the development of distant metastases in patients with locally advanced uterine cervical cancer treated with CRT. External validation must be conducted to determine the model's predictive performance.
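The abstract states that PET and CT volumes were augmented with three-dimensional techniques; the sketch below shows a generic example of random 3-D flips and small rotations, under the assumption of volumes shaped (depth, height, width). It is not the authors' specific augmentation pipeline.

```python
import numpy as np
from scipy.ndimage import rotate

def augment_volume(volume, rng=None):
    """Apply a random 3-D flip and a small random in-plane rotation to a
    PET or CT volume of shape (depth, height, width)."""
    rng = rng or np.random.default_rng()
    for axis in range(3):
        if rng.random() < 0.5:
            volume = np.flip(volume, axis=axis)
    angle = rng.uniform(-10, 10)  # degrees; range is an illustrative choice
    # Rotate in the axial (height, width) plane, keeping the original shape.
    return rotate(volume, angle, axes=(1, 2), reshape=False, order=1, mode="nearest")
```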


Subjects
Deep Learning, Uterine Cervical Neoplasms, Female, Humans, Positron Emission Tomography Computed Tomography/methods, Fluorodeoxyglucose F18, Uterine Cervical Neoplasms/pathology, Radiopharmaceuticals, Chemoradiotherapy, Positron-Emission Tomography
5.
Br J Radiol ; 96(1151): 20230243, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37750945

ABSTRACT

OBJECTIVES: To predict KRAS mutation in rectal cancer (RC) through computer vision of [18F]fluorodeoxyglucose (18F-FDG) positron emission tomography (PET)/computed tomography (CT) using metric learning (ML). METHODS: This study included 160 patients with RC who had undergone preoperative PET/CT. KRAS mutation was identified through polymerase chain reaction analysis. The model combined ML with a deep-learning framework to analyze PET data with or without CT images. The Batch Balance Wrapper framework and K-fold cross-validation were employed during the learning process. A receiver operating characteristic (ROC) curve analysis was performed to assess the model's predictive performance. RESULTS: Genetic alterations in KRAS were identified in 82 (51%) tumors. When both PET and CT images were used, the proposed model had an area under the ROC curve of 0.836 for predicting mutation status; the sensitivity, specificity, and accuracy were 75.3%, 79.3%, and 77.5%, respectively. When PET images alone were used, the area under the curve was 0.817, whereas the sensitivity, specificity, and accuracy were 73.2%, 79.6%, and 76.2%, respectively. CONCLUSIONS: The ML model presented herein revealed that baseline 18F-FDG PET/CT images could provide supplemental information to determine KRAS mutation in RC. Additional studies are required to maximize the predictive accuracy. ADVANCES IN KNOWLEDGE: The results of the ML model presented herein indicate that baseline 18F-FDG PET/CT images could provide supplemental information for determining KRAS mutation in RC. The predictive accuracy of the model was 77.5% when both image types were used and 76.2% when PET images alone were used. Additional studies are required to maximize the predictive accuracy.
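A minimal sketch of the K-fold cross-validation used during learning is given below; the classifier factory stands in for the metric-learning model, which is not reproduced here, and the AUC aggregation is an illustrative choice.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def cross_validated_auc(model_factory, X, y, n_splits=5, seed=42):
    """Stratified K-fold evaluation. model_factory() returns a fresh, unfitted
    classifier exposing fit/predict_proba (a stand-in for the ML network)."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    aucs = []
    for train_idx, test_idx in skf.split(X, y):
        model = model_factory()
        model.fit(X[train_idx], y[train_idx])
        prob = model.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], prob))
    return float(np.mean(aucs)), float(np.std(aucs))
```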


Subjects
Positron Emission Tomography Computed Tomography, Rectal Neoplasms, Humans, Fluorodeoxyglucose F18, Proto-Oncogene Proteins p21(ras)/genetics, Rectal Neoplasms/diagnostic imaging, Rectal Neoplasms/genetics, Mutation, Positron-Emission Tomography/methods, Radiopharmaceuticals
6.
Exp Clin Endocrinol Diabetes ; 131(10): 508-514, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37604165

ABSTRACT

INTRODUCTION: The current ultrasound scan classification system for thyroid nodules is time-consuming, labor-intensive, and subjective. Artificial intelligence (AI) has been shown to increase the accuracy of predicting the malignancy rate of thyroid nodules. This study aims to demonstrate the state-of-the-art Swin Transformer for classifying thyroid nodules. MATERIALS AND METHODS: Ultrasound images were collected prospectively from patients who received fine-needle aspiration biopsy for thyroid nodules from January 2016 to June 2021. One hundred thirty-nine patients with malignant thyroid nodules were enrolled, while 235 patients with benign nodules served as controls. Images were fed to Swin-T and ResNeSt50 models to classify the thyroid nodules. RESULTS: Patients with malignant nodules were younger and more likely to be male than those with benign nodules. The average sensitivity and specificity of Swin-T were 82.46% and 84.29%, respectively. The average sensitivity and specificity of ResNeSt50 were 72.51% and 77.14%, respectively. Receiver operating characteristic analysis revealed that the area under the curve of Swin-T (AUC = 0.91) was higher than that of ResNeSt50 (AUC = 0.82). The McNemar test evaluating the performance of these models showed that Swin-T had significantly better performance than ResNeSt50. The Swin-T classifier can be a useful tool in helping shared decision-making between physicians and patients with thyroid nodules, particularly those with high-risk sonographic patterns.
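The McNemar comparison between Swin-T and ResNeSt50 can be reproduced from paired per-image predictions as sketched below; the array names are placeholders rather than the study's variables.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def compare_classifiers(y_true, pred_swin, pred_resnest):
    """Paired McNemar test on the same test images. The 2x2 table counts
    images that each model classified correctly or incorrectly."""
    correct_a = np.asarray(pred_swin) == np.asarray(y_true)
    correct_b = np.asarray(pred_resnest) == np.asarray(y_true)
    table = [
        [np.sum(correct_a & correct_b), np.sum(correct_a & ~correct_b)],
        [np.sum(~correct_a & correct_b), np.sum(~correct_a & ~correct_b)],
    ]
    return mcnemar(table, exact=True)  # result.pvalue gives the significance level
```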


Subjects
Deep Learning, Thyroid Neoplasms, Thyroid Nodule, Humans, Male, Thyroid Nodule/diagnostic imaging, Thyroid Nodule/pathology, Artificial Intelligence, Sensitivity and Specificity, Ultrasonography/methods, Thyroid Neoplasms/pathology
7.
J Med Ultrason (2001) ; 50(4): 511-520, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37400724

ABSTRACT

PURPOSE: This study aimed to evaluate the clinical usefulness of a deep learning-based computer-aided detection (CADe) system for breast ultrasound. METHODS: A set of 88 training images was expanded to 14,000 positive images and 50,000 negative images. The CADe system was trained to detect lesions in real time using deep learning with an improved model of YOLOv3-tiny. Eighteen readers evaluated 52 test image sets with and without CADe. Jackknife alternative free-response receiver operating characteristic analysis was used to estimate the effectiveness of this system in improving lesion detection. RESULTS: The area under the curve (AUC) for image sets was 0.7726 with CADe and 0.6304 without CADe, a difference of 0.1422, indicating that the AUC with CADe was significantly higher than that without CADe (p < 0.0001). The sensitivity per case was higher with CADe (95.4%) than without CADe (83.7%). The specificity for suspected breast cancer cases was higher with CADe (86.6%) than without CADe (65.7%). The number of false positives per case (FPC) was lower with CADe (0.22) than without CADe (0.43). CONCLUSION: The use of a deep learning-based CADe system for breast ultrasound significantly improved readers' detection performance. This system is expected to contribute to highly accurate breast cancer screening and diagnosis.


Subjects
Breast Neoplasms, Deep Learning, Humans, Female, Breast Neoplasms/diagnostic imaging, ROC Curve, Computers
8.
Eur J Cardiothorac Surg ; 63(6)2023 06 01.
Article in English | MEDLINE | ID: mdl-36971610

ABSTRACT

OBJECTIVES: To mitigate the shortage of homograft sources, handmade trileaflet expanded polytetrafluoroethylene valves have been used in pulmonary valve replacement, with excellent results reported from multicentre studies conducted in Japan. However, data from outside Japan remain relatively scarce. This study presents the long-term results of a single surgeon's use of the flipped-back trileaflet method in a 10-year case series. METHODS: We have developed an efficient way to make a trileaflet-valved conduit using the flipped-back method for pulmonary valve replacement and have employed the technique since 2011. Retrospective data were studied between October 2010 and January 2020. Echocardiography, electrocardiogram, pro-brain natriuretic peptide, and magnetic resonance imaging data were analysed. RESULTS: Fifty-five patients were reviewed, and the median follow-up duration was 2.9 years. The most common diagnosis was tetralogy of Fallot (n = 41), and these patients subsequently underwent secondary pulmonary valve replacement at a median age of 15.6 years. Survival was 92.7%, with the longest follow-up period being 10 years. There was no need for reoperation, and freedom from reintervention was 98.0% at 10 years. There were 4 deaths (3 in-hospital and 1 outpatient). One patient eventually received transcatheter pulmonary valve implantation. Postoperative echocardiography showed pulmonary stenosis and pulmonary regurgitation of mild degree or less in 92.2% and 92.0% of patients, respectively. Comparable magnetic resonance imaging data (n = 25) showed a significant reduction in right ventricular volumes but not in ejection fractions. CONCLUSIONS: Our series showed satisfactory long-term function of the handmade flipped-back trileaflet-valved conduit used in our patients. The simple design is efficiently reproducible without a complex fabrication process.


Subjects
Heart Valve Prosthesis Implantation, Heart Valve Prosthesis, Pulmonary Valve, Ventricular Outflow Obstruction, Humans, Adolescent, Pulmonary Valve/surgery, Polytetrafluoroethylene, Retrospective Studies, Prosthesis Design, Treatment Outcome, Ventricular Outflow Obstruction/surgery
9.
Comput Methods Programs Biomed ; 229: 107278, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36463674

ABSTRACT

BACKGROUND AND OBJECTIVE: Lung cancer has the highest cancer-related mortality worldwide, and lung nodules usually present without symptoms. Low-dose computed tomography (LDCT) is an important tool for lung cancer detection and diagnosis, providing a complete three-dimensional (3-D) chest image at high resolution. Recently, convolutional neural networks (CNNs) have flourished, and CNN-based computer-aided diagnosis (CADx) systems have been shown to extract features and help radiologists make a preliminary diagnosis. Therefore, a 3-D ResNeXt-based CADx system was proposed in this study to assist radiologists with diagnosis. METHODS: The proposed CADx system consists of image preprocessing and a 3-D CNN-based classification model for pulmonary nodule classification. First, image preprocessing is executed to generate a normalized volume of interest (VOI) containing only the nodule and a few surrounding tissues. Then, the extracted VOI is forwarded to the 3-D nodule classification model. In the classification model, ResNeXt is employed as the backbone, and an attention scheme is embedded to focus on the important features. Moreover, a multi-level feature fusion network incorporating feature information at different scales is used to enhance the prediction accuracy for small malignant nodules. Finally, a hybrid loss based on channel optimization, which makes the network learn more detailed information, is employed to replace the binary cross-entropy (BCE) loss. RESULTS: A total of 880 low-dose CT images, including 440 benign and 440 malignant nodules from the American National Lung Screening Trial (NLST), were used for system evaluation. The results showed that the system achieved an accuracy of 85.3%, a sensitivity of 86.8%, a specificity of 83.9%, and an area under the curve (AUC) of 0.9042, confirming that the designed system has good diagnostic ability. CONCLUSION: In this study, a CADx composed of image preprocessing and a 3-D nodule classification model with an attention scheme, feature fusion, and a hybrid loss was proposed for pulmonary nodule classification in LDCT. The results indicate that the proposed CADx system has the potential to achieve high performance in classifying lung nodules as benign or malignant.
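A generic sketch of the VOI normalization step described in the methods (cropping a cube around the nodule and rescaling intensities) is given below. The cube size and Hounsfield-unit window are assumptions for illustration, not the paper's exact values.

```python
import numpy as np

def extract_normalized_voi(ct_volume, center_zyx, size=64, hu_min=-1000, hu_max=400):
    """Crop a cubic VOI around the nodule center and rescale Hounsfield units
    to [0, 1]. The 64-voxel cube and HU window are illustrative choices."""
    half = size // 2
    z, y, x = center_zyx
    voi = ct_volume[z - half:z + half, y - half:y + half, x - half:x + half]
    voi = np.clip(voi.astype(np.float32), hu_min, hu_max)
    return (voi - hu_min) / (hu_max - hu_min)
```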


Subjects
Lung Neoplasms, Solitary Pulmonary Nodule, Humans, Computer Neural Networks, Lung/pathology, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/pathology, X-Ray Computed Tomography/methods, Computer-Assisted Diagnosis/methods, Solitary Pulmonary Nodule/diagnostic imaging, Computer-Assisted Radiographic Image Interpretation
10.
Ann Biomed Eng ; 51(2): 352-362, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35972601

ABSTRACT

During laparoscopic surgery, surgical gauze is usually inserted into the body cavity to help achieve hemostasis. Retention of surgical gauze in the body cavity may necessitate reoperation and increase surgical risk. Using deep learning technology, this study aimed to propose a neural network model for gauze detection from the surgical video to record the presence of the gauze. The model was trained by the training group using YOLO (You Only Look Once)v5x6, then applied to the testing group. Positive predicted value (PPV), sensitivity, and mean average precision (mAP) were calculated. Furthermore, a timeline of gauze presence in the video was drawn by the model as well as human annotation to evaluate the accuracy. After the model was well-trained, the PPV, sensitivity, and mAP in the testing group were 0.920, 0.828, and 0.881, respectively. The inference time was 11.3 ms per image. The average accuracy of the model adding a marking and filtering process was 0.899. In conclusion, surgical gauze can be successfully detected using deep learning in the surgical video. Our model provided a fast detection of surgical gauze, allowing further real-time gauze tracing in laparoscopic surgery that may help surgeons recall the location of the missing gauze.
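The "marking and filtering" post-processing is described only briefly; the sketch below shows one plausible way to turn noisy per-frame detections into a smoothed presence timeline. It is an assumption for illustration, not the authors' implementation, and all names are placeholders.

```python
import numpy as np
from scipy.ndimage import median_filter

def gauze_timeline(frame_has_gauze, fps=30, window_seconds=1.0):
    """Smooth per-frame gauze detections (0/1 per video frame) with a median
    filter so isolated false positives/negatives do not break the recorded
    presence intervals; return (start_frame, end_frame) intervals."""
    window = max(1, int(round(fps * window_seconds)) | 1)  # force an odd window
    smoothed = median_filter(np.asarray(frame_has_gauze, dtype=np.uint8), size=window)
    padded = np.diff(np.concatenate(([0], smoothed, [0])))
    starts = np.where(padded == 1)[0]
    ends = np.where(padded == -1)[0] - 1
    return list(zip(starts.tolist(), ends.tolist()))
```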


Subjects
Laparoscopy, Computer Neural Networks, Humans, Bandages, Reoperation
11.
Nutr Metab Cardiovasc Dis ; 32(7): 1725-1733, 2022 07.
Article in English | MEDLINE | ID: mdl-35527126

ABSTRACT

BACKGROUND AND AIMS: The primary goals of this study were to clarify 1) the effect of weight loss by lifestyle intervention on circulating total angiopoietin-like protein 8 (ANGPTL8) and 2) the role of physical activity on serum total ANGPTL8 in northern Americans with obesity but without diabetes. METHODS AND RESULTS: A total of 130 subjects with body mass index (BMI) ≥ 35 kg/m2 but without diabetes were recruited, and 121 subjects completed a weight loss program for data analysis. Abdominal adipose tissue was determined by non-contrast computed tomography (CT). Serum total ANGPTL8 was higher in the group with obesity than in the lean control group. Serum total ANGPTL8 was positively correlated with waist circumference (WC), BMI, fasting insulin, HOMA-IR, HOMA-B, QUICKI, hs-CRP, IL-6, and leptin. Serum total ANGPTL8 did not significantly differ between the two intervention groups at baseline, and it was significantly lower after weight loss, with comparable changes in the diet-only and diet-plus-physical-activity groups. CONCLUSION: Among northern Americans with obesity but without diabetes, lifestyle modification resulted in a significant reduction of circulating total ANGPTL8 concentrations over a 6-month weight-loss period. Although the addition of physical activity resulted in greater total and liver fat loss, it did not promote a further significant decline in serum total ANGPTL8 beyond diet alone.


Subjects
Peptide Hormones, Weight Reduction Programs, Angiopoietin-Like Protein 8, Angiopoietin-Like Proteins, Body Mass Index, Exercise, Humans, Obesity/diagnosis, Obesity/therapy, Prospective Studies, Weight Loss
12.
Comput Methods Programs Biomed ; 220: 106786, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35398579

ABSTRACT

BACKGROUND AND OBJECTIVE: Lung cancer is the most common cause of cancer-related death in the world. Low-dose computed tomography (LDCT) is a widely used modality in lung cancer detection. A nodule is abnormal tissue that may evolve into lung cancer; hence, it is crucial to detect nodules at an early stage. However, reviewing LDCT scans for suspicious nodules is a time-consuming task. Recently, computer-aided detection (CADe) systems built on convolutional neural network (CNN) architectures have been shown to be helpful to radiologists. Hence, in this study, a 3-D YOLO-based CADe system, 3-D OSAF-YOLOv3, is proposed for nodule detection in LDCT images. METHODS: The proposed CADe system consists of data preprocessing, nodule detection, and a non-maximum suppression (NMS) algorithm. First, data preprocessing, including background elimination, spacing normalization, and volume-of-interest (VOI) extraction, is conducted to remove non-lung regions, normalize the image spacing, and divide the LDCT image into numerous VOIs. Then, the VOIs are fed into the 3-D OSAF-YOLOv3 model to detect suspicious nodules. The proposed model is constructed by integrating 3-D YOLOv3 with the one-shot aggregation (OSA) module, the receptive field block (RFB), and a feature fusion scheme (FFS). Finally, the NMS algorithm is performed to eliminate duplicate detections generated by the model. RESULTS: The LUNA-16 dataset, comprising 1186 nodules from 888 LDCT scans, and the competition performance metric (CPM) were used to evaluate our CADe system. In the experimental results, the proposed system achieved a sensitivity of 0.962 at a false-positive rate of 8 and a CPM value of 0.905. Moreover, the ablation study results showed that the OSA module, RFB, and FFS each improved detection performance. Furthermore, compared with other state-of-the-art (SOTA) models, our detection system also achieved higher performance. CONCLUSIONS: In this study, a YOLO-based CADe system integrating additional modules and schemes is proposed for nodule detection in LDCT. The results indicate that the proposed modifications significantly improve detection performance.
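The final NMS step of the pipeline can be illustrated with a minimal 3-D non-maximum suppression sketch; the box format (corner coordinates) and IoU threshold are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def iou_3d(a, b):
    """IoU of two boxes given as arrays (z1, y1, x1, z2, y2, x2)."""
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0, None))
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter + 1e-8)

def nms_3d(boxes, scores, iou_threshold=0.1):
    """boxes: (N, 6) array of candidate nodule boxes; scores: (N,) confidences.
    Keep the highest-scoring candidates and drop overlapping duplicates."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))
        order = np.array([i for i in order[1:]
                          if iou_3d(boxes[best], boxes[i]) < iou_threshold])
    return keep
```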


Subjects
Lung Neoplasms, Solitary Pulmonary Nodule, Humans, Lung Neoplasms/diagnostic imaging, Computer-Assisted Radiographic Image Interpretation/methods, Solitary Pulmonary Nodule/diagnostic imaging, X-Ray Computed Tomography/methods
13.
Surg Endosc ; 36(9): 6446-6455, 2022 09.
Article in English | MEDLINE | ID: mdl-35132449

ABSTRACT

BACKGROUND: Quality indicators should be assessed and monitored to improve colonoscopy quality in clinical practice. Endoscopists must enter relevant information in the endoscopy reporting system to facilitate data collection, which may be inaccurate. The current study aimed to develop a full deep learning-based algorithm to identify and analyze intra-procedural colonoscopy quality indicators based on endoscopy images obtained during the procedure. METHODS: A deep learning system for classifying colonoscopy images for quality assurance purposes was developed and its performance was assessed with an independent dataset. The system was utilized to analyze captured images and results were compared with those of real-world reports. RESULTS: In total, 10,417 images from the hospital endoscopy database and 3157 from Hyper-Kvasir open dataset were utilized to develop the quality assurance algorithm. The overall accuracy of the algorithm was 96.72% and that of the independent test dataset was 94.71%. Moreover, 761 real-world reports and colonoscopy images were analyzed. The accuracy of electronic reports about cecal intubation rate was 99.34% and that of the algorithm was 98.95%. The agreement rate for the assessment of polypectomy rates using the electronic reports and the algorithm was 0.87 (95% confidence interval 0.83-0.90). A good correlation was found between the withdrawal time calculated using the algorithm and that entered by the physician (correlation coefficient r = 0.959, p < 0.0001). CONCLUSION: We proposed a novel deep learning-based algorithm that used colonoscopy images for quality assurance purposes. This model can be used to automatically assess intra-procedural colonoscopy quality indicators in clinical practice.
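The reported correlation between algorithm-derived and physician-entered withdrawal times corresponds to a standard Pearson correlation; the sketch below computes it with placeholder variable names and is not the study's code.

```python
from scipy.stats import pearsonr

def withdrawal_time_agreement(algorithm_minutes, physician_minutes):
    """Pearson correlation between withdrawal times estimated by the
    deep-learning algorithm and those entered in the electronic report."""
    r, p_value = pearsonr(algorithm_minutes, physician_minutes)
    return r, p_value
```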


Subjects
Colonoscopy, Deep Learning, Algorithms, Cecum, Colonoscopy/methods, Factual Databases, Humans
14.
Comput Biol Med ; 141: 105185, 2022 02.
Article in English | MEDLINE | ID: mdl-34986453

ABSTRACT

Identification of lymph node metastasis, also called nodal metastasis (Nmet), is a primary clinical task for physicians. The survival and recurrence of lung cancer are related to the Nmet staging in Tumor-Node-Metastasis (TNM) reports. Furthermore, preoperative Nmet prediction remains a challenge when managing the surgical plan and making treatment decisions for a patient. We proposed a multi-energy-level fusion model with a principal feature enhancement (PFE) block, incorporating radiologist and computer science knowledge, for Nmet prediction. The proposed model is custom-designed for gemstone spectral imaging (GSI) at different energy levels from dual-energy computed tomography (CT) of the primary tumor in lung cancer. In the experiment, we used three different energy-level fusion datasets: lower energy-level fusion (40, 50, 60, 70 keV), higher energy-level fusion (110, 120, 130, 140 keV), and average energy-level fusion (40, 70, 100, 140 keV). The proposed model trained on the lower energy-level fusion dataset is 93% accurate, with a Kappa value of 86%. When the lower energy-level images were used to train the fusion model, there was a significant difference from the other energy-level fusion models. Hence, we applied 5-fold cross-validation to validate the performance of the multi-keV model with the different fusion datasets of energy-level images against the pathology report. The cross-validation results also demonstrate that the model with the lower energy-level dataset is more robust and suitable for predicting the Nmet of the primary tumor. The lower energy levels show more information on tumor angiogenesis and heterogeneity, which allowed the proposed fusion model, with its PFE block and channel attention blocks, to predict Nmet from primary tumors.
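The accuracy and Kappa figures quoted above correspond to standard classification metrics; a sketch comparing the three energy-level fusion datasets is shown below, with placeholder prediction arrays rather than the study's data.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

def evaluate_fusion_model(y_true, y_pred):
    """Accuracy and Cohen's kappa for Nmet prediction, as would be reported
    per energy-level fusion dataset (lower / higher / average keV groups)."""
    return accuracy_score(y_true, y_pred), cohen_kappa_score(y_true, y_pred)

# Hypothetical comparison across the three fusion datasets:
# for name, preds in {"low": low_preds, "high": high_preds, "avg": avg_preds}.items():
#     acc, kappa = evaluate_fusion_model(y_true, preds)
#     print(f"{name}-energy fusion: accuracy={acc:.2f}, kappa={kappa:.2f}")
```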


Subjects
Deep Learning, Lung Neoplasms, Computers, Humans, Lung Neoplasms/diagnostic imaging, Lymphatic Metastasis/diagnostic imaging, X-Ray Computed Tomography/methods
15.
Dig Endosc ; 34(5): 994-1001, 2022 Jul.
Article in English | MEDLINE | ID: mdl-34716944

ABSTRACT

OBJECTIVES: Visualization and photodocumentation during endoscopy procedures have been suggested as one indicator of endoscopy performance quality. However, this indicator is difficult to measure and audit manually in clinical practice. Artificial intelligence (AI) is an emerging technology that may solve this problem. METHODS: A deep learning model with an accuracy of 96.64% was developed from 15,305 images for upper endoscopy anatomy classification in the unit. Endoscopy images from asymptomatic patients receiving screening endoscopy were evaluated with this model to assess the photodocumentation completeness rate. RESULTS: A total of 15,723 images from 472 upper endoscopies performed by 12 endoscopists were enrolled. The complete photodocumentation rate from the pharynx to the duodenum was 53.8%, and that from the esophagus to the duodenum was 78.0% in this study. Endoscopists with a higher adenoma detection rate had a higher complete examination rate from the pharynx to the duodenum (60.0% vs. 38.7%, P < 0.0001) and from the esophagus to the duodenum (83.0% vs. 65.7%, P < 0.0001) compared with endoscopists with a lower adenoma detection rate. The pharynx, gastric angle, gastric retroflex view, gastric antrum, and first portion of the duodenum were the sites most likely to be missed by endoscopists with lower adenoma detection rates. CONCLUSIONS: We report the use of a deep learning model to audit endoscopy photodocumentation quality in our unit. Endoscopists with better performance in colonoscopy had better performance on this quality indicator. The use of such an AI system may help the endoscopy unit audit endoscopy performance.
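A sketch of how per-image anatomy predictions could be rolled up into a per-procedure completeness flag is shown below. The landmark set lists only the sites named in the abstract; the full label set used by the unit's model is not given here and the rest is an assumption.

```python
# Only sites explicitly named in the abstract; the unit's full landmark list
# is an assumption and may differ.
REQUIRED_PHARYNX_TO_DUODENUM = {
    "pharynx", "esophagus", "gastric angle", "gastric retroflex view",
    "gastric antrum", "first portion of duodenum",
}

def is_complete_exam(predicted_labels, required=REQUIRED_PHARYNX_TO_DUODENUM):
    """predicted_labels: anatomy labels assigned by the classifier to every
    captured image of one procedure. The exam counts as complete when every
    required landmark was photodocumented at least once."""
    return required.issubset(set(predicted_labels))

def completeness_rate(procedures):
    """procedures: iterable of per-procedure label lists."""
    flags = [is_complete_exam(labels) for labels in procedures]
    return sum(flags) / len(flags)
```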


Subjects
Adenoma, Deep Learning, Adenoma/diagnosis, Artificial Intelligence, Colonoscopy/methods, Gastrointestinal Endoscopy, Humans
16.
Surg Endosc ; 36(6): 3811-3821, 2022 06.
Article in English | MEDLINE | ID: mdl-34586491

ABSTRACT

BACKGROUND: Photodocumentation during endoscopy procedures is one indicator of endoscopy performance quality; however, this indicator is difficult to measure and audit in the endoscopy unit. Emerging artificial intelligence technology may solve this problem, but model development requires a large amount of material. We developed a deep learning-based endoscopic anatomy classification system using convolutional neural networks with an accelerated data preparation approach. PATIENTS AND METHODS: We retrospectively collected 8,041 images from esophagogastroduodenoscopy (EGD) procedures, which two experts labeled for nine anatomical locations of the upper gastrointestinal tract. A base model for EGD image multiclass classification was first developed; an additional 6,091 images were then enrolled and classified by the base model. A total of 5,963 of these images were manually confirmed and added to develop the subsequent enhanced model. Additional internal and external endoscopy image datasets were used to test model performance. RESULTS: The base model achieved a total accuracy of 96.29%. For the enhanced model, the total accuracy was 96.64%. The overall accuracy improved with the enhanced model compared with the base model for the internal test dataset without narrowband images (93.05% vs. 91.25%, p < 0.01) and with narrowband images (92.74% vs. 90.46%, p < 0.01). The total accuracy of the enhanced model on the external test dataset was 92.56%. CONCLUSIONS: We constructed a deep learning-based model with an accelerated data preparation approach that can be used for quality control in endoscopy units. The model was also validated with both internal and external datasets with high accuracy.
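The accelerated data preparation approach (the base model pre-classifies new images, which are then manually confirmed before training the enhanced model) can be sketched as a confidence-filtered pseudo-labeling loop. The threshold, `predict` interface, and function names below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def propose_labels(base_model, images, class_names, confidence_threshold=0.9):
    """Let the base EGD classifier pre-label new images; confident proposals
    are queued for quick manual confirmation, the rest go to full manual
    labeling. base_model.predict is assumed to return class probabilities."""
    probs = base_model.predict(images)            # shape: (n_images, n_classes)
    confident, uncertain = [], []
    for i, p in enumerate(probs):
        k = int(np.argmax(p))
        if p[k] >= confidence_threshold:
            confident.append((i, class_names[k], float(p[k])))
        else:
            uncertain.append(i)
    return confident, uncertain  # confirmed proposals feed the enhanced model
```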


Subjects
Artificial Intelligence, Deep Learning, Gastrointestinal Endoscopy/methods, Humans, Computer Neural Networks, Retrospective Studies
17.
NMR Biomed ; 35(3): e4642, 2022 03.
Article in English | MEDLINE | ID: mdl-34738671

ABSTRACT

In this study, the performance of machine learning in classifying parotid gland tumors based on diffusion-related features obtained from the parotid gland tumor, the peritumor parotid gland, and the contralateral parotid gland was evaluated. Seventy-eight patients participated in this study and underwent magnetic resonance diffusion-weighted imaging. Three regions of interest, including the parotid gland tumor, the peritumor parotid gland, and the contralateral parotid gland, were manually contoured for 92 tumors, including 20 malignant tumors (MTs), 42 Warthin tumors (WTs), and 30 pleomorphic adenomas (PMAs). We recorded multiple apparent diffusion coefficient (ADC) features and applied a machine-learning method with the features to classify the three types of tumors. With only mean ADC of tumors, the area under the curve of the classification model was 0.63, 0.85, and 0.87 for MTs, WTs, and PMAs, respectively. The performance metrics were improved to 0.81, 0.89, and 0.92, respectively, with multiple features. Apart from the ADC features of parotid gland tumor, the features of the peritumor and contralateral parotid glands proved advantageous for tumor classification. Combining machine learning and multiple features provides excellent discrimination of tumor types and can be a practical tool in the clinical diagnosis of parotid gland tumors.


Subjects
Diffusion Magnetic Resonance Imaging/methods, Machine Learning, Parotid Neoplasms/diagnostic imaging, Adult, Aged, Female, Humans, Male, Middle Aged, Retrospective Studies
18.
Cancers (Basel) ; 13(24)2021 Dec 17.
Article in English | MEDLINE | ID: mdl-34944970

ABSTRACT

OBJECTIVES: Neoadjuvant chemoradiotherapy (NCRT) followed by surgery is the mainstay of treatment for patients with locally advanced rectal cancer. Based on baseline 18F-fluorodeoxyglucose ([18F]FDG)-positron emission tomography (PET)/computed tomography (CT), a new artificial intelligence model using metric learning (ML) was introduced to predict responses to NCRT. PATIENTS AND METHODS: This study used the data of 236 patients with newly diagnosed rectal cancer; the data of 202 and 34 patients were used for training and validation, respectively. All patients received pretreatment [18F]FDG-PET/CT, NCRT, and surgery. The treatment response was scored by the Dworak tumor regression grade (TRG); TRG3 and TRG4 indicated favorable responses. The model employed ML combined with Uniform Manifold Approximation and Projection for dimensionality reduction. A receiver operating characteristic (ROC) curve analysis was performed to assess the model's predictive performance. RESULTS: In the training cohort, 115 patients (57%) achieved TRG3 or TRG4 responses. The area under the ROC curve was 0.96 for the prediction of a favorable response. The sensitivity, specificity, and accuracy were 98.3%, 96.5%, and 97.5%, respectively. The sensitivity, specificity, and accuracy for the validation cohort were 95.0%, 100%, and 98.8%, respectively. CONCLUSIONS: The new ML model presented herein was used to determine that baseline [18F]FDG-PET/CT images could predict a favorable response to NCRT in patients with rectal cancer. External validation is required to verify the model's predictive value.
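The abstract combines metric learning with UMAP for dimensionality reduction; the sketch below shows a generic UMAP-plus-classifier pipeline on pretreatment features, assuming the umap-learn package and a simple k-NN stand-in for the authors' network.

```python
import umap  # umap-learn package
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

def umap_knn_response_model(X_train, y_train, X_val, y_val, n_components=2):
    """Reduce learned embeddings/features with UMAP, then classify favorable
    (TRG3/TRG4) versus non-favorable response with a simple k-NN model."""
    reducer = umap.UMAP(n_components=n_components, random_state=42)
    z_train = reducer.fit_transform(X_train)
    z_val = reducer.transform(X_val)
    clf = KNeighborsClassifier(n_neighbors=5).fit(z_train, y_train)
    prob = clf.predict_proba(z_val)[:, 1]
    return roc_auc_score(y_val, prob)
```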

19.
Comput Med Imaging Graph ; 91: 101935, 2021 07.
Article in English | MEDLINE | ID: mdl-34090261

ABSTRACT

Lymph node metastasis (LNM) identification is among the most clinically important tasks related to survival and recurrence in lung cancer. However, the preoperative prediction of nodal metastasis remains a challenge for determining surgical plans and pretreatment decisions in patients with cancer. We proposed a novel deep prediction method with a size-related damper block for nodal metastasis (Nmet) identification from the primary tumor in lung cancer, using images generated by gemstone spectral imaging (GSI) dual-energy computed tomography (CT). The best model, the proposed method trained on the 40 keV dataset, achieves an accuracy of 86% and a Kappa value of 72% for Nmet prediction. In the experiment, we had 11 different monochromatic images from 40 to 140 keV (in 10 keV intervals) for each patient. When the model trained on the 40 keV dataset was used, there was a significant difference compared with the other energy levels (in keV). Therefore, we applied 5-fold cross-validation to show that the lower keV images are more efficient for predicting Nmet of the primary tumor. The results show that tumor heterogeneity and size contributed to the proposed model's ability to estimate the absence or presence of nodal metastasis from the primary tumor.


Subjects
Deep Learning, Lung Neoplasms, Computers, Humans, Lung Neoplasms/diagnostic imaging, Lymphatic Metastasis/diagnostic imaging, X-Ray Computed Tomography
20.
Eur J Radiol ; 138: 109608, 2021 May.
Article in English | MEDLINE | ID: mdl-33711572

ABSTRACT

PURPOSE: We propose a 3-D tumor computer-aided diagnosis (CADx) system with a U-net and a residual-capsule neural network (Res-CapsNet) for automated breast ultrasound (ABUS) images and provide a reference for early tumor diagnosis, especially of non-mass lesions. METHODS: A total of 396 patients with 444 tumors (226 malignant and 218 benign) were retrospectively enrolled from Sun Yat-sen University Cancer Center. In our CADx, preprocessing was performed first to crop and resize the tumor volumes of interest (VOIs). Then, a 3-D U-net and postprocessing were applied to the VOIs to obtain tumor masks. Finally, a 3-D Res-CapsNet classification model was executed with the VOIs and the corresponding masks to diagnose the tumors. The diagnostic performance, including accuracy, sensitivity, specificity, and area under the curve (AUC), was then compared with that of other classification models and of three readers with different years of experience in ABUS review. RESULTS: For all tumors, the accuracy, sensitivity, specificity, and AUC of the proposed CADx were 84.9%, 87.2%, 82.6%, and 0.9122, respectively, outperforming the other models and the junior reader. Next, the tumors were subdivided into mass and non-mass tumors to validate the system performance. For mass tumors, our CADx achieved an accuracy, sensitivity, specificity, and AUC of 85.2%, 88.2%, 82.3%, and 0.9147, respectively, higher than those of the other models and the junior reader. For non-mass tumors, our CADx achieved an accuracy, sensitivity, specificity, and AUC of 81.6%, 78.3%, 86.7%, and 0.8654, respectively, outperforming the two readers. CONCLUSION: The proposed CADx with 3-D U-net and 3-D Res-CapsNet models has the potential to reduce misdiagnosis, especially for non-mass lesions.


Subjects
Breast Neoplasms, Computer-Assisted Image Interpretation, Breast Neoplasms/diagnostic imaging, Humans, Computer Neural Networks, Retrospective Studies, Ultrasonography