Results 1 - 20 of 21
1.
iScience ; 27(4): 109461, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38550997

ABSTRACT

Artificial intelligence (AI) has been found to assist in the optical differentiation of hyperplastic and adenomatous colorectal polyps. We investigated whether AI can improve the accuracy of endoscopists' optical diagnosis of polyps with advanced features. Our AI system distinguished polyps with advanced features with an accuracy above 0.870 in both the internal and external validation datasets. All 19 endoscopists, across experience levels, showed significantly lower diagnostic accuracy (0.410-0.580) than the AI. A prospective randomized controlled study in which 120 endoscopists performed optical diagnosis of polyps with advanced features with or without AI assistance showed that AI improved the proportion of polyps with advanced features correctly sent for histological examination (0.960 versus 0.840, p < 0.001) and the proportion of polyps without advanced features resected and discarded (0.490 versus 0.380, p = 0.007). We thus developed an AI technique that significantly increases the accuracy of optical diagnosis of colorectal polyps with advanced features.
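Improvements in paired proportions such as 0.960 versus 0.840 are conventionally checked with a two-proportion z-test. A minimal stdlib sketch; the counts below are illustrative round numbers, not taken from the paper:

```python
import math

def two_proportion_z(successes1, n1, successes2, n2):
    """Two-sided two-proportion z-test using a pooled standard error."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative counts: 96/100 correctly referred with AI vs. 84/100 without.
z, p = two_proportion_z(96, 100, 84, 100)
```

With these toy counts the difference is already significant at the 0.05 level; the paper's much larger polyp counts would only sharpen that.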

2.
BMC Oral Health ; 23(1): 876, 2023 11 17.
Article in English | MEDLINE | ID: mdl-37978486

ABSTRACT

BACKGROUND: Accurate cephalometric analysis plays a vital role in diagnosis and subsequent surgical planning in orthognathic and orthodontic treatment. However, manual digitization of anatomical landmarks in computed tomography (CT) is subject to limitations such as low accuracy, poor repeatability, and excessive time consumption. Furthermore, landmark detection is more difficult in individuals with dentomaxillofacial deformities than in normal individuals. Therefore, this study aimed to develop a deep learning model to automatically detect landmarks in CT images of patients with dentomaxillofacial deformities. METHODS: Craniomaxillofacial (CMF) CT data of 80 patients with dentomaxillofacial deformities were collected for model development. The 77 anatomical landmarks digitized by experienced CMF surgeons in each CT image were set as the ground truth. 3D UX-Net, a cutting-edge medical image segmentation network, was adopted as the backbone of the model architecture. Moreover, a new region division pattern for CMF structures was designed as a training strategy to optimize the utilization of computational resources and image resolution. To evaluate the performance of this model, several experiments were conducted to compare the model with the manual digitization approach. RESULTS: The training set and the validation set included 58 and 22 samples, respectively. The developed model accurately detected the 77 landmarks on bone, soft tissue, and teeth with a mean error of 1.81 ± 0.89 mm. Removing region division before training significantly increased the prediction error (2.34 ± 1.01 mm). For manual digitization, the inter-observer and intra-observer variations were 1.27 ± 0.70 mm and 1.01 ± 0.74 mm, respectively. In all divided regions except the Teeth Region (TR), our model demonstrated performance equivalent to that of experienced CMF surgeons in landmark detection (p > 0.05).
CONCLUSIONS: The developed model demonstrated excellent performance in detecting craniomaxillofacial landmarks when benchmarked against manual digitization by experts. The results also verified that the region division pattern designed in this study remarkably improved detection accuracy.
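Landmark detectors of this kind typically recover each landmark as the peak voxel of a predicted response map and report a mean radial error in millimetres, as above. A minimal sketch of that evaluation step; the tiny heatmap and the 1 mm isotropic spacing are assumptions for illustration:

```python
import math

def argmax_voxel(heatmap):
    """Return (z, y, x) of the maximum response in a nested-list 3D heatmap."""
    best, best_idx = float("-inf"), (0, 0, 0)
    for z, plane in enumerate(heatmap):
        for y, row in enumerate(plane):
            for x, v in enumerate(row):
                if v > best:
                    best, best_idx = v, (z, y, x)
    return best_idx

def mean_radial_error(preds, truths, spacing_mm=1.0):
    """Mean Euclidean distance between predicted and ground-truth landmarks."""
    dists = [spacing_mm * math.dist(p, t) for p, t in zip(preds, truths)]
    return sum(dists) / len(dists)

# Toy 3x3x3 heatmap whose peak sits at voxel (1, 2, 0).
heatmap = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
heatmap[1][2][0] = 1.0
pred = argmax_voxel(heatmap)
mre = mean_radial_error([pred], [(1, 2, 1)])  # ground truth 1 voxel away in x
```

With anisotropic CT volumes, the per-axis voxel spacing would replace the single `spacing_mm` scalar.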


Subjects
Deep Learning; Humans; Tomography, X-Ray Computed/methods; Radiography; Cephalometry/methods; Bone and Bones; Image Processing, Computer-Assisted/methods
3.
BMC Oral Health ; 23(1): 161, 2023 03 18.
Article in English | MEDLINE | ID: mdl-36934241

ABSTRACT

BACKGROUND: Preoperative planning of orthognathic surgery is indispensable for achieving an ideal surgical outcome regarding the occlusion and position of the jaws. However, orthognathic surgery planning is sophisticated and highly experience-dependent, requiring comprehensive consideration of facial morphology and occlusal function. This study aimed to investigate a robust, automatic, deep-learning-based method to predict reposition vectors of the jawbones in an orthognathic surgery plan. METHODS: A regression neural network named the VSP transformer was developed based on the Transformer architecture. First, 3D cephalometric analysis was employed to quantify skeletal-facial morphology as input features. Next, the input features were weighted using pretrained results to minimize bias resulting from multicollinearity. Through encoder-decoder blocks, ten landmark-based reposition vectors of the jawbones were predicted. The permutation importance (PI) method was used to calculate the contribution of each feature to the final prediction, providing interpretability for the proposed model. RESULTS: The VSP transformer model was developed with 383 samples and clinically tested with 49 prospectively collected samples. Our proposed model outperformed four other classic regression models in prediction accuracy. The mean absolute errors (MAE) of prediction were 1.41 mm in the validation set and 1.34 mm in the clinical test set. The interpretability results of the model were highly consistent with clinical knowledge and experience. CONCLUSIONS: The developed model can predict reposition vectors for an orthognathic surgery plan with high accuracy and good clinical practicality. Moreover, the model proved reliable because of its good interpretability.
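Permutation importance, used above for interpretability, scores a feature by how much shuffling that feature's column degrades the model's error. A generic stdlib sketch with a toy linear model; the model, data, and MAE metric are illustrative, not the paper's:

```python
import random

def mae(model, X, y):
    """Mean absolute error of a callable model over a dataset."""
    return sum(abs(model(row) - t) for row, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, n_repeats=20, seed=0):
    """Mean increase in MAE after shuffling one feature column."""
    rng = random.Random(seed)
    baseline = mae(model, X, y)
    increases = []
    for _ in range(n_repeats):
        column = [row[feature_idx] for row in X]
        rng.shuffle(column)
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, column)]
        increases.append(mae(model, X_perm, y) - baseline)
    return sum(increases) / len(increases)

# Toy model that only uses feature 0, so shuffling feature 0 should hurt
# while shuffling feature 1 should not.
model = lambda row: 2.0 * row[0]
X = [[float(i), float(i % 3)] for i in range(30)]
y = [2.0 * row[0] for row in X]
imp0 = permutation_importance(model, X, y, 0)
imp1 = permutation_importance(model, X, y, 1)
```

The same wrapper works for any fitted regressor, which is why PI is a convenient model-agnostic interpretability check.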


Subjects
Deep Learning; Orthognathic Surgery; Orthognathic Surgical Procedures; Humans; Orthognathic Surgical Procedures/methods; Radiography; Face; Imaging, Three-Dimensional
4.
Eur Radiol ; 33(1): 77-88, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36029345

ABSTRACT

OBJECTIVES: The prediction of primary treatment failure (PTF) is necessary for patients with diffuse large B-cell lymphoma (DLBCL) since it serves as a prominent means for improving front-line outcomes. Using interim 18F-fluoro-2-deoxyglucose ([18F]FDG) positron emission tomography/computed tomography (PET/CT) imaging data, we aimed to construct multimodal deep learning (MDL) models to predict possible PTF in low-risk DLBCL. METHODS: Initially, 205 DLBCL patients undergoing interim [18F]FDG PET/CT scans and the front-line standard of care were included in the primary dataset for model development. Then, 44 other patients were included in the external dataset for generalization evaluation. Based on the powerful backbone of the Conv-LSTM network, we incorporated five different multimodal fusion strategies (pixel intermixing, separate channel, separate branch, quantitative weighting, and hybrid learning) to make full use of PET/CT features and built five corresponding MDL models. Moreover, we found the best model, that is, the hybrid learning model, and optimized it by integrating the contrastive training objective to further improve its prediction performance. RESULTS: The final model with contrastive objective optimization, named the contrastive hybrid learning model, performed best, with an accuracy of 91.22% and an area under the receiver operating characteristic curve (AUC) of 0.926, in the primary dataset. In the external dataset, its accuracy and AUC remained at 88.64% and 0.925, respectively, indicating its good generalization ability. CONCLUSIONS: The proposed model achieved good performance, validated the predictive value of interim PET/CT, and holds promise for directing individualized clinical treatment. KEY POINTS: • The proposed multimodal models achieved accurate prediction of primary treatment failure in DLBCL patients. 
• Using an appropriate feature-level fusion strategy can make the same class close to each other regardless of the modal heterogeneity of the data source domain and positively impact the prediction performance. • Deep learning validated the predictive value of interim PET/CT in a way that exceeded human capabilities.
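The key point about making "the same class close to each other" regardless of modality describes a contrastive objective: same-label embeddings are pulled together and different-label pairs pushed beyond a margin. A minimal margin-based sketch in pure Python; the specific contrastive loss used by the paper may differ:

```python
import math

def pairwise_contrastive_loss(embeddings, labels, margin=1.0):
    """Average margin-based contrastive loss over all sample pairs.

    Same-label pairs contribute their squared distance; different-label
    pairs contribute max(0, margin - distance) squared.
    """
    total, pairs = 0.0, 0
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            d = math.dist(embeddings[i], embeddings[j])
            if labels[i] == labels[j]:
                total += d * d
            else:
                total += max(0.0, margin - d) ** 2
            pairs += 1
    return total / pairs

# Well-separated classes incur much less loss than intermixed ones.
tight = pairwise_contrastive_loss([(0, 0), (0.1, 0), (3, 0), (3.1, 0)],
                                  [0, 0, 1, 1])
mixed = pairwise_contrastive_loss([(0, 0), (3, 0), (0.1, 0), (3.1, 0)],
                                  [0, 0, 1, 1])
```

Minimizing such a term alongside the classification loss is what encourages PET-derived and CT-derived features of the same class to land near each other in the fused feature space.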


Subjects
Deep Learning; Lymphoma, Large B-Cell, Diffuse; Humans; Fluorodeoxyglucose F18; Positron Emission Tomography Computed Tomography; Positron-Emission Tomography; Tomography, X-Ray Computed; Prognosis; Lymphoma, Large B-Cell, Diffuse/diagnostic imaging; Lymphoma, Large B-Cell, Diffuse/therapy; Treatment Failure
5.
Hum Pathol ; 131: 26-37, 2023 01.
Article in English | MEDLINE | ID: mdl-36481204

ABSTRACT

Lymphovascular invasion, specifically lymph-blood vessel invasion (LBVI), is a risk factor for metastases in breast invasive ductal carcinoma (IDC) and is routinely screened using hematoxylin-eosin histopathological images. However, routine reports only describe whether LBVI is present and do not provide other potentially prognostic information about LBVI. This study aimed to evaluate the clinical significance of LBVI in 685 IDC cases and to explore the added predictive value of LBVI for lymph node metastases (LNM) via supervised deep learning (DL), using an expert-experience-embedded knowledge transfer learning (EEKT) model in 40 LBVI-positive cases identified in the routine reports. Multivariate logistic regression and propensity score matching analysis demonstrated that LBVI (OR 4.203, 95% CI 2.809-6.290, P < 0.001) was a significant risk factor for LNM. The EEKT model, trained on 5780 image patches, automatically segmented LBVI with a patch-wise Dice similarity coefficient of 0.930 in the test set and output the counts, locations, and morphometric features of the LBVIs. Some morphometric features were beneficial for further stratification within the 40 LBVI-positive cases: LBVI in cases with LNM had a higher short-to-long side ratio of the minimum rectangle (MR) (0.686 vs. 0.480, P = 0.001), LBVI-to-MR area ratio (0.774 vs. 0.702, P = 0.002), and solidity (0.983 vs. 0.934, P = 0.029) than LBVI in cases without LNM. These results highlight the potential of DL to assist pathologists in quantifying LBVI and, more importantly, in extracting added prognostic information from LBVI.
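The morphometric features reported (solidity, minimum-rectangle side ratio) are standard shape descriptors. A stdlib sketch of solidity for a polygonal region outline, using the shoelace area and a monotone-chain convex hull; production pipelines would normally use OpenCV (`minAreaRect`) or scikit-image region properties instead, and the outlines here are toy shapes:

```python
def shoelace_area(pts):
    """Area of a simple polygon given as (x, y) vertices in order."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def solidity(outline):
    """Region area divided by the area of its convex hull (<= 1)."""
    return shoelace_area(outline) / shoelace_area(convex_hull(outline))

# A concave, arrow-shaped outline has solidity below 1; a square has 1.
concave = [(0, 0), (4, 0), (4, 4), (2, 1), (0, 4)]
square = [(0, 0), (4, 0), (4, 4), (0, 4)]
s_concave = solidity(concave)
s_square = solidity(square)
```

Solidity near 1 (as in the LNM-positive LBVIs above) indicates a filled, convex region with few indentations.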


Subjects
Breast Neoplasms; Deep Learning; Lymphoma; Humans; Female; Lymphatic Metastasis/pathology; Breast Neoplasms/pathology; Breast; Prognosis; Lymphoma/pathology; Lymph Nodes/pathology; Retrospective Studies
6.
Med Phys ; 49(11): 7222-7236, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35689486

ABSTRACT

PURPOSE: Many deep learning methods have been developed for pulmonary lesion detection in chest computed tomography (CT) images. However, these methods generally target one particular lesion type, namely pulmonary nodules. In this work, we develop and evaluate a novel deep learning method for a more challenging task: detecting various benign and malignant mediastinal lesions with wide variations in size, shape, intensity, and location in chest CT images. METHODS: Our method for mediastinal lesion detection contains two main stages: (a) size-adaptive lesion candidate detection, followed by (b) false-positive (FP) reduction and benign-malignant classification. For candidate detection, an anchor-free, one-stage detector named 3D-CenterNet is designed to locate suspicious regions (i.e., candidates of various sizes) within the mediastinum. Then, a 3D-SEResNet-based classifier is used to differentiate FPs, benign lesions, and malignant lesions among the candidates. RESULTS: We evaluated the proposed method by conducting five-fold cross-validation on a relatively large-scale dataset consisting of data collected from 1136 patients at a grade A tertiary hospital. The method achieves sensitivities of 84.3% ± 1.9%, 90.2% ± 1.4%, 93.2% ± 0.8%, and 93.9% ± 1.1% for finding all benign and malignant lesions at 1/8, 1/4, 1/2, and 1 FPs per scan, respectively, and the accuracy of benign-malignant classification reaches 78.7% ± 2.5%. CONCLUSIONS: The proposed method can effectively detect mediastinal lesions of various sizes, shapes, and locations in chest CT images. It can be integrated into most existing pulmonary lesion detection systems to promote their clinical application, and it can be readily extended to other similar 3D lesion detection tasks.
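Sensitivity "at 1/8, 1/4, 1/2, and 1 FPs per scan" comes from a FROC-style analysis: rank candidates by score and, within a given false-positive budget, record the fraction of true lesions recovered. A simplified sketch over a flat candidate list; the scores and counts are illustrative:

```python
def sensitivity_at_fp_rate(candidates, n_scans, n_lesions, max_fp_per_scan):
    """candidates: list of (score, is_true_positive) tuples.

    Returns the best sensitivity achievable while keeping false positives
    per scan at or below max_fp_per_scan.
    """
    ranked = sorted(candidates, key=lambda c: c[0], reverse=True)
    fp_budget = max_fp_per_scan * n_scans
    tp = fp = 0
    best_sensitivity = 0.0
    for score, is_tp in ranked:
        if is_tp:
            tp += 1
        else:
            fp += 1
            if fp > fp_budget:
                break  # budget exhausted; stop lowering the threshold
        best_sensitivity = max(best_sensitivity, tp / n_lesions)
    return best_sensitivity

# 2 scans, 4 lesions in total; at 1 FP/scan two false positives are allowed.
cands = [(0.95, True), (0.9, False), (0.85, True), (0.6, True),
         (0.5, False), (0.4, False), (0.3, True)]
sens_at_1 = sensitivity_at_fp_rate(cands, n_scans=2, n_lesions=4,
                                   max_fp_per_scan=1)
```

Sweeping `max_fp_per_scan` over 1/8, 1/4, 1/2, and 1 produces the four operating points quoted above.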


Subjects
Deep Learning; Humans; Research Design; Tomography; Tomography, X-Ray Computed
7.
Laryngoscope ; 132(5): 999-1007, 2022 05.
Article in English | MEDLINE | ID: mdl-34622964

ABSTRACT

OBJECTIVES/HYPOTHESIS: To develop a deep-learning-based automatic diagnosis system for distinguishing nasopharyngeal carcinoma (NPC) from noncancer (inflammation and hyperplasia), using both white light imaging (WLI) and narrow-band imaging (NBI) nasopharyngoscopy images. STUDY DESIGN: Retrospective study. METHODS: A total of 4,783 nasopharyngoscopy images (2,898 WLI and 1,885 NBI) of 671 patients were collected, and a novel deep convolutional neural network (DCNN) framework, named the Siamese deep convolutional neural network (S-DCNN), was developed to utilize WLI and NBI images simultaneously and improve classification performance. To verify the effectiveness of combining these two image modalities for prediction, we compared the proposed S-DCNN with two baseline models, namely DCNN-1 (considering only WLI images) and DCNN-2 (considering only NBI images). RESULTS: In threefold cross-validation, the overall accuracies and areas under the curve of the three DCNNs were 94.9% (95% confidence interval [CI] 93.3%-96.5%) and 0.986 (95% CI 0.982-0.992), 87.0% (95% CI 84.2%-89.7%) and 0.930 (95% CI 0.906-0.961), and 92.8% (95% CI 90.4%-95.3%) and 0.971 (95% CI 0.953-0.992), respectively. The accuracy of the S-DCNN was significantly improved compared with DCNN-1 (P-value <.001) and DCNN-2 (P-value = .008). CONCLUSION: Using deep-learning technology to automatically diagnose NPC under nasopharyngoscopy can provide a valuable reference for NPC screening. Superior performance can be obtained by simultaneously utilizing the multimodal features of the NBI and WLI images of the same patient. LEVEL OF EVIDENCE: 3 Laryngoscope, 132:999-1007, 2022.
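The defining trait of a Siamese design is weight sharing: the same feature extractor processes both the WLI and NBI inputs before the two feature vectors are fused for classification. A toy numeric sketch of that pattern with a shared linear "encoder"; all weights and inputs are illustrative, not the S-DCNN's:

```python
import math

def encode(x, weights):
    """Shared linear encoder applied identically to either modality."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def siamese_predict(x_wli, x_nbi, enc_weights, clf_weights, bias=0.0):
    """Encode both modalities with the SAME weights, concatenate, classify."""
    fused = encode(x_wli, enc_weights) + encode(x_nbi, enc_weights)
    logit = sum(w * f for w, f in zip(clf_weights, fused)) + bias
    return 1.0 / (1.0 + math.exp(-logit))  # sigmoid probability

enc = [[1.0, 0.0], [0.0, 1.0]]  # identity encoder keeps the toy transparent
clf = [1.0, 1.0, 1.0, 1.0]
p_high = siamese_predict([1.0, 1.0], [1.0, 1.0], enc, clf)
p_low = siamese_predict([-1.0, -1.0], [-1.0, -1.0], enc, clf)
```

Because the encoder weights are shared, evidence from either modality is mapped into the same feature space before fusion, which is what lets the combined model beat the single-modality baselines.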


Subjects
Deep Learning; Nasopharyngeal Neoplasms; Endoscopy, Gastrointestinal; Humans; Narrow Band Imaging/methods; Nasopharyngeal Carcinoma/diagnostic imaging; Nasopharyngeal Neoplasms/diagnostic imaging; Retrospective Studies
8.
Comput Methods Programs Biomed ; 214: 106576, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34915425

ABSTRACT

BACKGROUND AND OBJECTIVE: Currently, the best performing methods in colonoscopy polyp detection are primarily based on deep neural networks (DNNs), which are usually trained on large amounts of labeled data. However, different hospitals use different endoscope models and set different imaging parameters, which causes the collected endoscopic images and videos to vary greatly in style. There may be variations in the color space, brightness, contrast, and resolution, and there are also differences between white light endoscopy (WLE) and narrow band image endoscopy (NBIE). We call these variations the domain shift. The DNN performance may decrease when the training data and the testing data come from different hospitals or different endoscope models. Additionally, it is quite difficult to collect enough new labeled data and retrain a new DNN model before deploying that DNN to a new hospital or endoscope model. METHODS: To solve this problem, we propose a domain adaptation model called Deep Reconstruction-Recoding Network (DRRN), which jointly learns a shared encoding representation for two tasks: i) a supervised object detection network for labeled source data, and ii) an unsupervised reconstruction-recoding network for unlabeled target data. Through the DRRN, the object detection network's encoder not only learns the features from the labeled source domain, but also encodes useful information from the unlabeled target domain. Therefore, the distribution difference of the two domains' feature spaces can be reduced. RESULTS: We evaluate the performance of the DRRN on a series of cross-domain datasets. Compared with training the polyp detection network using only source data, the performance of the DRRN on the target domain is improved. Through feature statistics and visualization, it is demonstrated that the DRRN can learn the common distribution and feature invariance of the two domains. 
The distribution difference between the feature spaces of the two domains is thereby reduced. CONCLUSION: The DRRN can improve cross-domain polyp detection. With the DRRN, the generalization performance of a DNN-based polyp detection model can be improved without additional labeled data, allowing the polyp detection model to be easily transferred to datasets from different hospitals or different endoscope models.
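The DRRN training signal is a joint objective over a shared encoder: a supervised detection loss on labeled source images plus an unsupervised reconstruction loss on unlabeled target images. A toy numeric sketch of that loss combination, with a one-parameter "encoder" and squared-error surrogates standing in for the real detection and reconstruction losses (everything here is illustrative):

```python
def encode(x, w):
    """Shared one-parameter 'encoder' applied to both domains."""
    return [w * xi for xi in x]

def detection_loss(x_src, label, w, head):
    """Supervised surrogate: squared error of a linear head on source data."""
    pred = head * sum(encode(x_src, w))
    return (pred - label) ** 2

def reconstruction_loss(x_tgt, w, decoder):
    """Unsupervised surrogate: error of decoding the target encoding."""
    recon = [decoder * z for z in encode(x_tgt, w)]
    return sum((r - xi) ** 2 for r, xi in zip(recon, x_tgt))

def drrn_loss(x_src, label, x_tgt, w, head, decoder, lam=0.5):
    """Joint objective: both terms back-propagate into the shared encoder w."""
    return detection_loss(x_src, label, w, head) + \
        lam * reconstruction_loss(x_tgt, w, decoder)

# With w * decoder == 1 the target is reconstructed perfectly, so only the
# supervised term remains; a mismatched decoder adds a reconstruction penalty.
loss = drrn_loss([1.0, 2.0], 3.0, [4.0, 5.0], w=2.0, head=0.5, decoder=0.5)
loss_mismatch = drrn_loss([1.0, 2.0], 3.0, [4.0, 5.0],
                          w=2.0, head=0.5, decoder=1.0)
```

The point of the shared `w` is that minimizing the reconstruction term forces the encoder to also represent target-domain statistics, which is how the feature-space gap between domains shrinks.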


Subjects
Neural Networks, Computer; Polyps; Colonoscopy; Humans
9.
IEEE J Biomed Health Inform ; 26(3): 1251-1262, 2022 03.
Article in English | MEDLINE | ID: mdl-34613925

ABSTRACT

Segmentation of hepatic vessels from 3D CT images is necessary for accurate diagnosis and preoperative planning for liver cancer. However, due to the low contrast and high noise of CT images, automatic hepatic vessel segmentation is a challenging task. Hepatic vessels are connected branches containing thick and thin blood vessels, showing an important structural characteristic, or prior: the connectivity of blood vessels. However, this prior is rarely exploited in existing methods. In this paper, we segment hepatic vessels from 3D CT images by utilizing the connectivity prior. To this end, a graph neural network (GNN) used to describe the connectivity prior of hepatic vessels is integrated into a general convolutional neural network (CNN). Specifically, a graph attention network (GAT) is first used to model the graphical connectivity information of hepatic vessels, which can be trained with the vascular connectivity graph constructed directly from the ground truths. Second, the GAT is integrated with a lightweight 3D U-Net by an efficient mechanism called the plug-in mode, in which the GAT is incorporated into the U-Net as a multi-task branch and is only used to supervise the training procedure of the U-Net with the connectivity prior. The GAT is not used in the inference stage, and thus does not increase the hardware and time costs of inference compared with the U-Net alone. Therefore, hepatic vessel segmentation can be improved in an efficient manner. Extensive experiments on two public datasets show that the proposed method is superior to related works in the accuracy and connectivity of hepatic vessel segmentation.
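The core GAT operation being plugged in is neighbourhood aggregation with learned, softmax-normalized attention coefficients. A single-head sketch on scalar node features; the tiny path graph and the uniform attention score are illustrative stand-ins for a learned scoring function:

```python
import math

def gat_aggregate(features, adjacency, attn_score):
    """One graph-attention step on scalar node features.

    attn_score(h_i, h_j) returns an unnormalized attention logit; the
    coefficients are softmax-normalized over each node's neighbourhood
    (self-loop included), then used to average neighbour features.
    """
    out = []
    for i, h_i in enumerate(features):
        neigh = [i] + list(adjacency[i])
        logits = [attn_score(h_i, features[j]) for j in neigh]
        m = max(logits)                       # stabilized softmax
        exps = [math.exp(l - m) for l in logits]
        total = sum(exps)
        coeffs = [e / total for e in exps]
        out.append(sum(a * features[j] for a, j in zip(coeffs, neigh)))
    return out

# 3-node path graph 0-1-2; a constant score makes attention uniform,
# so each node simply averages its neighbourhood.
features = [0.0, 3.0, 6.0]
adjacency = {0: [1], 1: [0, 2], 2: [1]}
out = gat_aggregate(features, adjacency, attn_score=lambda a, b: 0.0)
```

In the plug-in mode described above, a branch built from such layers shapes the encoder during training and is simply dropped at inference time.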


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional
10.
Med Phys ; 48(12): 7913-7929, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34674280

ABSTRACT

PURPOSE: Feature maps created from deep convolutional neural networks (DCNNs) have been widely used for visual explanation of DCNN-based classification tasks. However, many clinical applications, such as benign-malignant classification of lung nodules, normally require quantitative and objective interpretability rather than just visualization. In this paper, we propose a novel interpretable multi-task attention learning network, named IMAL-Net, for early invasive adenocarcinoma screening in chest computed tomography images, which takes advantage of a segmentation prior to assist interpretable classification. METHODS: First, two sub-ResNets are integrated via a prior-attention mechanism for simultaneous nodule segmentation and invasiveness classification. Then, numerous radiomic features from the segmentation results are concatenated with high-level semantic features from the classification subnetwork via FC layers to achieve superior performance. Meanwhile, an end-to-end feature selection mechanism (named FSM) is designed to quantify the crucial radiomic features that most affect the prediction for each sample, and thus it can provide clinically applicable interpretability for the prediction result. RESULTS: Nodule samples from a total of 1626 patients were collected from two grade-A hospitals for large-scale verification. Five-fold cross-validation demonstrated that the proposed IMAL-Net achieves an AUC of 93.8% ± 1.1% and a recall of 93.8% ± 2.8% for identification of invasive lung adenocarcinoma. CONCLUSIONS: Fusing semantic features and radiomic features achieves clear improvements in the invasiveness classification task. Moreover, by learning more fine-grained semantic features and highlighting the most important radiomic features, the proposed attention and FSM mechanisms can not only further improve performance but also support both visual explanation and objective analysis of the classification results.
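A feature selection mechanism of the kind described amounts to learning a per-feature weight and reporting the radiomic features whose weights dominate the prediction. A sketch of that selection step; the feature names and gate weights are hypothetical, for illustration only:

```python
def top_k_features(feature_names, gate_weights, k=3):
    """Rank features by |gate weight| and keep the k most influential."""
    ranked = sorted(zip(feature_names, gate_weights),
                    key=lambda pair: abs(pair[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical radiomic feature names with hypothetical learned gates.
names = ["sphericity", "entropy", "glcm_contrast", "volume", "flatness"]
weights = [0.05, -0.90, 0.40, 0.75, -0.10]
selected = top_k_features(names, weights, k=3)
```

Reporting `selected` per sample is what turns the gate weights into the "clinically applicable interpretability" the abstract refers to: a clinician sees which measurable nodule properties drove each call.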


Subjects
Adenocarcinoma of Lung; Adenocarcinoma; Lung Neoplasms; Adenocarcinoma/diagnostic imaging; Adenocarcinoma of Lung/diagnostic imaging; Humans; Lung Neoplasms/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed
11.
BMC Bioinformatics ; 22(1): 434, 2021 Sep 10.
Article in English | MEDLINE | ID: mdl-34507532

ABSTRACT

BACKGROUND: One of the major challenges in precision medicine is accurate prediction of an individual patient's response to drugs. A great number of computational methods have been developed to predict compound activity using genomic profiles or chemical structures, but more exploration is yet to be done to combine genetic mutation, gene expression, and cheminformatics in one machine learning model. RESULTS: We present here a novel deep-learning model that integrates gene expression, genetic mutation, and the chemical structure of compounds in a multi-task convolutional architecture. We applied our model to the Genomics of Drug Sensitivity in Cancer (GDSC) and Cancer Cell Line Encyclopedia (CCLE) datasets. We selected relevant cancer-related genes based on an oncology genetics database and the L1000 landmark genes, and used their expression and mutations as genomic features in model training. We obtained the cheminformatics features for compounds from PubChem or ChEMBL. Our finding is that combining gene expression, genetic mutation, and cheminformatics features greatly enhances predictive performance. CONCLUSION: We implemented an extended Graph Neural Network for molecular graphs and a Convolutional Neural Network for gene features. With the employment of multi-tasking and self-attention functions to monitor the similarity between compounds, our model outperforms recently published methods on the same training and testing datasets.
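Compound similarity of the kind the self-attention monitors is conventionally quantified with Tanimoto (Jaccard) similarity on binary fingerprints. A stdlib sketch; the fingerprints below are made-up bit sets, not real compounds:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity of two binary fingerprints as bit sets."""
    if not fp_a and not fp_b:
        return 1.0
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

# Toy fingerprints: sets of "on" bit positions.
reference = {1, 4, 9, 16, 25}
close_analog = {1, 4, 9, 16, 36}
unrelated = {2, 3, 5, 7}
sim_close = tanimoto(reference, close_analog)
sim_far = tanimoto(reference, unrelated)
```

In practice the fingerprints would come from a cheminformatics toolkit (e.g., Morgan/ECFP bits), but the similarity arithmetic is exactly this.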


Subjects
Antineoplastic Agents; Deep Learning; Neoplasms; Pharmaceutical Preparations; Antineoplastic Agents/pharmacology; Antineoplastic Agents/therapeutic use; Genomics; Humans; Neoplasms/drug therapy; Neoplasms/genetics
12.
Med Phys ; 48(7): 3665-3678, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33735451

ABSTRACT

PURPOSE: Diffuse large B-cell lymphoma (DLBCL) is an aggressive type of lymphoma with high mortality and poor prognosis that has an especially high incidence in Asia. Accurate segmentation of DLBCL lesions is crucial for clinical radiation therapy, but manual delineation is tedious and time-consuming. Automatic segmentation provides an alternative, yet it is difficult for diffuse lesions without sufficient utilization of multimodality information. Our work is the first study focusing on positron emission tomography and computed tomography (PET-CT) feature fusion for DLBCL segmentation. We aim to improve the fusion of the complementary information contained in PET-CT imaging with a hybrid learning module in a supervised convolutional neural network. METHODS: First, two encoder branches extract single-modality features. Next, the hybrid learning component uses them to generate spatial fusion maps that quantify the contribution of the complementary information. These feature fusion maps are then concatenated with the modality-specific (i.e., PET and CT) feature maps to obtain the final fused feature maps at different scales. Finally, the reconstruction part of our network creates a prediction map of DLBCL lesions by integrating and up-sampling the final fused feature maps from the encoder blocks at different scales. RESULTS: Our method was evaluated on its ability to detect foreground and segment lesions in three independent body regions (nasopharynx, chest, and abdomen) of a set of 45 PET-CT scans. Extensive ablation experiments compared our method with four baseline multimodality fusion techniques (input-level (IL) fusion, the multichannel (MC) strategy, the multibranch (MB) strategy, and quantitative weighting (QW) fusion). The results showed that our method achieved high detection accuracy (99.63% in the nasopharynx, 99.51% in the chest, and 99.21% in the abdomen) and was superior in segmentation performance, with a mean Dice similarity coefficient (DSC) of 73.03% and a modified Hausdorff distance (MHD) of 4.39 mm, compared with the baselines (DSC: IL: 53.08%, MC: 63.59%, MB: 69.98%, and QW: 72.19%; MHD: IL: 12.16 mm, MC: 6.46 mm, MB: 4.83 mm, and QW: 4.89 mm). CONCLUSIONS: A promising segmentation method has been proposed for the challenging DLBCL lesions in PET-CT images; it improves the use of complementary information through feature fusion and may guide clinical radiotherapy. Significance analysis based on P-values indicated a significant difference between our proposed method and the other baselines (for most metrics, P < 0.05). This is preliminary research using a small sample size, and we will continue to collect data for a larger verification study.
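The Dice similarity coefficient (DSC) quoted throughout these comparisons is twice the overlap of two masks divided by their total foreground. A minimal sketch on flat binary masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient of two flat binary masks (0/1 lists)."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 1.0 if total == 0 else 2.0 * inter / total

pred = [0, 1, 1, 1, 0, 0]
truth = [0, 0, 1, 1, 1, 0]
dsc = dice(pred, truth)
```

For 3D volumes the masks are simply flattened first; the arithmetic is unchanged.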


Subjects
Lymphoma, Large B-Cell, Diffuse; Positron Emission Tomography Computed Tomography; Humans; Image Processing, Computer-Assisted; Lymphoma, Large B-Cell, Diffuse/diagnostic imaging; Neural Networks, Computer
13.
Transl Lung Cancer Res ; 10(12): 4574-4586, 2021 Dec.
Article in English | MEDLINE | ID: mdl-35070762

ABSTRACT

BACKGROUND: Clinical management of subsolid nodules (SSNs) is defined by the suspicion of tumor invasiveness. We sought to develop an artificial intelligence (AI) algorithm for invasiveness assessment of lung adenocarcinoma manifesting as radiological SSNs, and we investigated the performance of this algorithm in classifying SSNs by invasiveness. METHODS: A retrospective chest computed tomography (CT) dataset of 1,589 SSNs was constructed to develop (85%) and internally test (15%) the proposed AI diagnostic tool, SSNet. Diagnostic performance was evaluated in the hold-out test set and was further tested in an external cohort of 102 SSNs. Three thoracic surgeons and three radiologists evaluated the invasiveness of the SSNs on both test datasets to investigate the clinical utility of the proposed SSNet. RESULTS: In differentiating invasive adenocarcinoma (IA), SSNet achieved an area under the curve [AUC; 0.914, 95% confidence interval (CI): 0.813-0.987] similar to that of the six doctors (0.900, 95% CI: 0.867-0.922). When interpreting with the assistance of SSNet, the sensitivity of the junior doctors, the specificity of the senior doctors, and their overall accuracy were significantly improved. In the external test, SSNet (AUC: 0.949, 95% CI: 0.884-1.000) achieved a better AUC than the doctors (AUC: 0.883, 95% CI: 0.826-0.939), whose AUC increased to 0.908 (95% CI: 0.847-0.982) with SSNet assistance. In the histological subtype classifications, SSNet performed better than the practicing doctors, whose AUCs were significantly improved with SSNet assistance in both the 4-category and 3-category classifications, to 0.836 (95% CI: 0.811-0.862) and 0.852 (95% CI: 0.825-0.882), respectively. CONCLUSIONS: The AI diagnostic system achieved performance non-inferior to that of doctors and can potentially improve diagnostic performance and efficiency in SSN evaluation.

14.
Front Oncol ; 10: 568857, 2020.
Article in English | MEDLINE | ID: mdl-33134170

ABSTRACT

OBJECTIVE: To assess the performance of pretreatment 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) radiomics features for predicting EGFR mutation status in patients with non-small cell lung cancer (NSCLC). PATIENTS AND METHODS: We enrolled a total of 173 patients with histologically proven NSCLC who underwent preoperative 18F-FDG PET/CT. Tumor tissues of all patients were tested for EGFR mutation status. A PET/CT radiomics prediction model was established through multi-step feature selection. The predictive performance of the radiomics model, clinical features, and conventional PET-derived semi-quantitative parameters was compared using receiver operating characteristic (ROC) curve analysis. RESULTS: Four CT and two PET radiomics features were finally selected to build the PET/CT radiomics model. Compared with areas under the ROC curve (AUC) of 0.664, 0.683, and 0.662 for clinical features, maximum standardized uptake value (SUVmax), and total lesion glycolysis (TLG), the PET/CT radiomics model discriminated better between EGFR-positive and EGFR-negative mutations, with an AUC of 0.769 and an accuracy of 67.06% after 10-fold cross-validation. The combined model, based on the PET/CT radiomics and a clinical feature (gender), further improved the AUC to 0.827 and the accuracy to 75.29%. Only one PET radiomics feature demonstrated significant but low predictive ability (AUC = 0.661) for differentiating the 19 Del from the 21 L858R mutation subtype. CONCLUSIONS: EGFR mutation status in patients with NSCLC could be well predicted by the combined model based on 18F-FDG PET/CT radiomics and a clinical feature, providing an alternative useful method for the selection of targeted therapy.
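The AUC values compared above can be computed without drawing a curve: the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the normalized Mann-Whitney U statistic). A stdlib sketch on illustrative scores:

```python
def auc(scores, labels):
    """AUC as the normalized Mann-Whitney U statistic; ties count 0.5."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative model scores with 1 = EGFR-positive, 0 = EGFR-negative.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]
labels = [1, 1, 0, 1, 0, 0]
value = auc(scores, labels)
```

This rank-based formulation is also why AUC is insensitive to any monotonic rescaling of the model's output scores.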

15.
Front Oncol ; 10: 568069, 2020.
Article in English | MEDLINE | ID: mdl-33194653

ABSTRACT

BACKGROUND: Patients with non-calcified hamartoma often undergo surgery or needle biopsy because these lesions are difficult to discriminate from lung adenocarcinoma. Radiomics can quantify lesion features and potentially improve disease diagnosis. This study therefore aimed to discriminate non-calcified hamartoma from adenocarcinoma by employing imaging quantification and machine learning. METHODS: Forty-two patients with non-calcified hamartoma and 49 patients with adenocarcinoma were retrospectively enrolled. Manual lesion segmentation, feature quantification (e.g., texture features), and artificial neural network analysis were performed consecutively. The independent t-test was used for inter-group comparisons of the imaging features, and receiver operating characteristic curve analysis was performed to investigate discriminating efficacy. RESULTS: Significantly higher contrast, cluster prominence, cluster shade, dissimilarity, energy, and entropy were observed in non-calcified hamartoma compared with lung adenocarcinoma. Texture features from the grey-level co-occurrence matrix discriminated well between non-calcified hamartoma and adenocarcinoma: the detection sensitivity, specificity, accuracy, and area under the curve were 87.22% ± 9.07%, 82.64% ± 8.07%, 85.11% ± 5.40%, and 0.942, respectively. CONCLUSION: Quantifying imaging features is a potentially useful tool for clinical diagnosis. This study demonstrated that non-calcified hamartoma has a heterogeneous distribution of attenuations, probably resulting from its complex organization. Based on this property, imaging quantification could improve discrimination of non-calcified hamartoma from adenocarcinoma.
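The grey-level co-occurrence matrix (GLCM) features named above (contrast, energy, entropy) come from counting how often pairs of grey levels co-occur at a fixed pixel offset and normalizing the counts. A small stdlib sketch for a horizontal offset of one pixel; the two-level toy images are illustrative:

```python
import math

def glcm_features(image, levels):
    """Contrast, energy, and entropy of the normalized horizontal GLCM."""
    counts = [[0] * levels for _ in range(levels)]
    n = 0
    for row in image:
        for a, b in zip(row, row[1:]):  # co-occurrence at offset (0, 1)
            counts[a][b] += 1
            n += 1
    contrast = energy = entropy = 0.0
    for i in range(levels):
        for j in range(levels):
            p = counts[i][j] / n
            if p > 0:
                contrast += (i - j) ** 2 * p
                energy += p * p
                entropy -= p * math.log2(p)
    return contrast, energy, entropy

uniform = [[0, 0, 0, 0], [0, 0, 0, 0]]   # flat texture
checker = [[0, 1, 0, 1], [1, 0, 1, 0]]   # maximally alternating texture
c_u, e_u, h_u = glcm_features(uniform, levels=2)
c_c, e_c, h_c = glcm_features(checker, levels=2)
```

A heterogeneous lesion, like the hamartomas described, spreads its co-occurrence mass over many grey-level pairs, raising contrast and entropy relative to a homogeneous one. Real radiomics pipelines average GLCMs over several offsets and directions (e.g., via scikit-image or pyradiomics).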

16.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1616-1619, 2020 07.
Article in English | MEDLINE | ID: mdl-33018304

ABSTRACT

Semantic segmentation is a fundamental and challenging problem in medical image analysis, where deep convolutional neural networks currently play a dominant role. Existing methods in this field underuse image information and learn few edge features, which can lead to ambiguous boundaries and inhomogeneous intensity distributions in the result. Since the characteristics of features from different stages are highly inconsistent, the two cannot be directly combined. In this paper, we propose the Attention and Edge Constraint Network (AEC-Net), which optimizes features by introducing attention mechanisms into the lower-level features so that they can be better combined with higher-level features. Meanwhile, an edge branch is added to the network to learn edge and texture features simultaneously. We evaluated this model on three datasets, covering skin cancer segmentation, vessel segmentation, and lung segmentation. Results demonstrate that the proposed model achieved state-of-the-art performance on all three datasets.
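One common way such lower-level attention works is channel gating: a context vector pooled from the higher-level features modulates the lower-level ones before fusion. The NumPy sketch below is a schematic of that idea under assumed shapes and randomly initialised parameters; it is not the actual AEC-Net architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_fuse(low, high, w, b):
    """Fuse a low-level feature map into a high-level one through a
    channel-attention gate computed from the high-level context.
    low, high: (C, H, W) arrays; w: (C, C) mixing matrix; b: (C,) bias."""
    context = high.mean(axis=(1, 2))         # global average pool -> (C,)
    gate = sigmoid(w @ context + b)          # per-channel weight in (0, 1)
    return high + gate[:, None, None] * low  # gated residual fusion

# assumed shapes and random parameters, for illustration only
rng = np.random.default_rng(1)
C, H, W = 8, 16, 16
low = rng.normal(size=(C, H, W))
high = rng.normal(size=(C, H, W))
w, b = 0.1 * rng.normal(size=(C, C)), np.zeros(C)
fused = attention_fuse(low, high, w, b)
```

Driving the bias strongly negative closes the gate, so the output falls back to the high-level features alone; this is what lets a network suppress low-level detail where it is unhelpful.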


Subjects
Image Processing, Computer-Assisted ; Skin Neoplasms ; Attention ; Humans ; Lung/diagnostic imaging ; Neural Networks, Computer
17.
Med Phys ; 47(4): 1738-1749, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32020649

ABSTRACT

PURPOSE: In clinical practice, invasiveness is an important reference indicator for differentiating the malignant degree of subsolid pulmonary nodules. These nodules can be classified as atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), or invasive adenocarcinoma (IAC). The automatic determination of a nodule's invasiveness based on chest CT scans can guide treatment planning. However, it is challenging, owing to the insufficiency of training data and their interclass similarity and intraclass variation. To address these challenges, we propose a two-stage deep learning strategy for this task: prior-feature learning followed by adaptive-boost deep learning. METHODS: The adaptive-boost deep learning is proposed to train a strong classifier for invasiveness classification of subsolid nodules in chest CT images, using multiple 3D convolutional neural network (CNN)-based weak classifiers. Because ensembles of multiple deep 3D CNN models have a huge number of parameters and require large computing resources along with more training and testing time, the prior-feature learning is proposed to reduce the computations by sharing the CNN layers between all weak classifiers. Using this strategy, all weak classifiers can be integrated into a single network. RESULTS: Tenfold cross-validation of binary classification was conducted on a total of 1357 nodules, including 765 noninvasive (AAH and AIS) and 592 invasive nodules (MIA and IAC). Ablation experiments indicated that the proposed binary classifier achieved an accuracy of 73.4% ± 1.4% with an AUC of 81.3% ± 2.2%. These results are superior to those achieved by three experienced chest imaging specialists, whose accuracies were 69.1%, 69.3%, and 67.9%, respectively. About 200 additional nodules were also collected, covering 50 cases for each category (AAH, AIS, MIA, and IAC).
Both binary and multiple classification were performed on these data, and the results demonstrated that the proposed method clearly outperforms nonensemble deep learning methods. CONCLUSIONS: The proposed adaptive-boost deep learning can significantly improve the performance of invasiveness classification of pulmonary subsolid nodules in CT images, while the prior-feature learning significantly reduces the total size of the deep models. The promising results on clinical data show that the trained models can be used as an effective lung cancer screening tool in hospitals. Moreover, the proposed strategy can be easily extended to other similar classification tasks in 3D medical images.
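The adaptive-boost step follows the classic AdaBoost recipe: each round reweights the samples the current ensemble gets wrong and assigns the new weak classifier a vote proportional to its weighted accuracy. The sketch below uses threshold stumps as stand-in weak learners on toy data (the paper's weak classifiers are 3D CNNs with shared layers, which this sketch does not reproduce).

```python
import numpy as np

def stump_predict(X, f, t, pol):
    """Threshold stump: predict +1/-1 from a single feature."""
    return pol * np.where(X[:, f] > t, 1, -1)

def adaboost(X, y, n_rounds=5):
    """Classic AdaBoost: each round fits the stump with the lowest
    weighted error, weights its vote by that error, and boosts the
    sample weights of misclassified examples. y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []                                 # (alpha, f, t, pol)
    for _ in range(n_rounds):
        best = None
        for f in range(X.shape[1]):               # exhaustive stump search
            for t in np.unique(X[:, f]):
                for pol in (1, -1):
                    err = w[stump_predict(X, f, t, pol) != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, t, pol)
        err, f, t, pol = best
        err = min(max(err, 1e-10), 1.0 - 1e-10)   # avoid log(0)
        alpha = 0.5 * np.log((1.0 - err) / err)   # classifier weight
        w *= np.exp(-alpha * y * stump_predict(X, f, t, pol))
        w /= w.sum()                              # renormalise
        ensemble.append((alpha, f, t, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * stump_predict(X, f, t, p) for a, f, t, p in ensemble)
    return np.where(score > 0, 1, -1)

# toy data: the label is simply the sign of feature 0
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] > 0, 1, -1)
model = adaboost(X, y)
```

The prior-feature trick in the paper amounts to letting every weak learner share one feature extractor, so only the small per-learner heads differ; the boosting arithmetic is unchanged.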


Subjects
Deep Learning ; Image Processing, Computer-Assisted/methods ; Lung Neoplasms/diagnostic imaging ; Lung Neoplasms/pathology ; Tomography, X-Ray Computed ; Humans ; Neoplasm Invasiveness
18.
ACS Appl Mater Interfaces ; 11(33): 29569-29578, 2019 Aug 21.
Article in English | MEDLINE | ID: mdl-31361117

ABSTRACT

Circulating tumor cells (CTCs) in blood are a direct cause of tumor metastasis. The isolation and detection of CTCs in whole blood is of great clinical value in early diagnosis, postoperative review, and personalized treatment. Because of cancer cell heterogeneity, it is difficult to separate all types of CTCs efficiently with a single mechanism. Here, we designed a new kind of "filter chip" that retains CTCs with very high efficiency by integrating the effects of cell size and the specific antigens on the surface of tumor cells. The filter chip consists of a semicircular arc and arrays and can retain large numbers of microspheres, which bind CTCs automatically. We synthesized a nanostructured zinc oxide coating on the surface of the microspheres to increase the specific surface area and thereby enhance the CTC capture efficiency. Microspheres trapped in the arrays entrap CTCs as well. The combination of the three strategies resulted in more than 90% capture efficiency across different tumor cell lines. Furthermore, it is easy to find and isolate the circulating tumor cells from the chip, as the tumor cells are fixed inside the structure of the filter chip. To avoid high background contamination when a few CTCs are surrounded by millions of nontarget cells, a digital detection method was applied to improve the detection sensitivity. The CTCs in whole blood were specifically labeled by antibody-DNA conjugates and detected via the DNA of the conjugates with signal amplification. This strategy of an antibody-functionalized, microsphere-integrated microchip for cell sorting and detection of CTCs may find broad applications in fundamental cancer biology research and in the precise diagnosis and monitoring of cancer in the clinic.


Subjects
Antibodies/chemistry ; Microfluidics/methods ; Microspheres ; Neoplastic Cells, Circulating ; Zinc Oxide/chemistry ; HeLa Cells ; Humans ; MCF-7 Cells ; Nanowires/chemistry
19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 1637-1640, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946210

ABSTRACT

In urologic endoscopic procedures, finding the ureteral orifice (UO) is crucial but may be challenging for inexperienced doctors. UOs are often difficult to identify intraoperatively owing to the presence of a large median lobe, an obstructing tumor, previous surgery, etc. To automatically identify various types of UOs in video, we propose a real-time deep learning system for UO identification and localization in urinary endoscopy videos that can be applied to different types of urinary endoscopes. Our UO detection system is mainly based on the Single Shot MultiBox Detector (SSD), one of the state-of-the-art deep-learning-based detection networks in the natural image domain. For preprocessing, we apply both general and domain-specific data augmentation strategies, which significantly improved all evaluation metrics. For training, we utilize only resectoscopy images, which have more complex background information, and then use ureteroscopy images for testing. We demonstrate that the model trained on resectoscopy images can be successfully applied to the other type of urinary endoscopy images, with all four evaluation metrics (precision, recall, F1 and F2 scores) greater than 0.8. We further evaluate our model on four independent video datasets comprising both resectoscopy and ureteroscopy videos. Extensive experiments on these four video datasets demonstrate that our deep-learning-based UO detection system can identify and locate UOs of two different urinary endoscopes in real time, with an average processing time of 25 ms per frame, while simultaneously achieving satisfactory recall and specificity.
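The four evaluation metrics quoted above reduce to simple count arithmetic: with beta = 1 the F-score balances precision and recall, and with beta = 2 it weights recall more heavily. A minimal sketch, with hypothetical detection counts:

```python
def fbeta(tp, fp, fn, beta=1.0):
    """F-beta score from detection counts; beta > 1 (e.g. the F2 score)
    weights recall more heavily than precision."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# hypothetical per-video detection counts, for illustration only
tp, fp, fn = 80, 15, 10
precision = tp / (tp + fp)   # fraction of detections that are real UOs
recall = tp / (tp + fn)      # fraction of real UOs that were detected
f1 = fbeta(tp, fp, fn, beta=1.0)
f2 = fbeta(tp, fp, fn, beta=2.0)
```

Because recall exceeds precision in this example, F2 comes out above F1; reporting both, as the abstract does, shows how a detector trades false alarms against misses.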


Subjects
Deep Learning ; Ureter ; Endoscopy
20.
Nanomicro Lett ; 11(1): 20, 2019 Mar 09.
Article in English | MEDLINE | ID: mdl-34137997

ABSTRACT

A simple, convenient, and highly sensitive bio-interface for graphene field-effect transistors (GFETs) based on multifunctional nano-denatured bovine serum albumin (nano-dBSA) functionalization was developed to target cancer biomarkers. The novel graphene-protein bioelectronic interface was constructed by heating to denature native BSA on the graphene substrate surface. The formed nano-dBSA film served as the cross-linker to immobilize monoclonal antibody against carcinoembryonic antigen (anti-CEA mAb) on the graphene channel activated by EDC and Sulfo-NHS. The nano-dBSA film worked as a self-protecting layer of graphene to prevent surface contamination by lithographic processing. The improved GFET biosensor exhibited good specificity and high sensitivity toward the target at an ultralow concentration of 337.58 fg mL⁻¹. The electrical detection of the binding of CEA followed the Hill model for ligand-receptor interaction, indicating the negative binding cooperativity between CEA and anti-CEA mAb with a dissociation constant of 6.82 × 10⁻¹⁰ M. The multifunctional nano-dBSA functionalization can confer a new function to graphene-like 2D nanomaterials and provide a promising bio-functionalization method for clinical application in biosensing, nanomedicine, and drug delivery.
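The Hill model mentioned above relates ligand concentration to fractional occupancy, with a Hill coefficient below 1 corresponding to the negative cooperativity reported. A minimal sketch using the reported dissociation constant and a hypothetical coefficient n = 0.8:

```python
def hill(conc, kd=6.82e-10, n=0.8):
    """Fractional occupancy under the Hill model
    theta = C**n / (Kd**n + C**n). Kd is the dissociation constant
    reported in the abstract; n = 0.8 is a hypothetical coefficient
    chosen for illustration (any n < 1 expresses negative cooperativity)."""
    return conc ** n / (kd ** n + conc ** n)

# occupancy at, well below, and well above the dissociation constant
theta_mid = hill(6.82e-10)
theta_low = hill(6.82e-12)
theta_high = hill(6.82e-8)
```

At C = Kd the occupancy is exactly one half regardless of n; the coefficient only controls how steeply the binding curve rises around that point.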
