Results 1 - 20 of 28
1.
Article in English | MEDLINE | ID: mdl-38937280

ABSTRACT

OBJECTIVES: To develop and validate a modified deep learning (DL) model based on nnU-Net for classifying and segmenting five classes of jaw lesions using cone-beam computed tomography (CBCT). METHODS: A total of 368 CBCT scans (37,168 slices) were used to train a multi-class segmentation model. The data were manually annotated by two oral and maxillofacial surgeons (OMSs) to serve as the ground truth. Sensitivity, specificity, precision, F1-score, and accuracy were used to evaluate the classification ability of the model and of the doctors, with and without artificial intelligence assistance. The Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and segmentation time were used to evaluate the segmentation performance of the model. RESULTS: The model achieved the dual task of classifying and segmenting jaw lesions in CBCT. For classification, the sensitivity, specificity, precision, and accuracy of the model were 0.871, 0.974, 0.874, and 0.891, respectively, surpassing oral and maxillofacial radiologists (OMFRs) and OMSs and approaching the specialist. With the model's assistance, the classification performance of OMFRs and OMSs improved, particularly for odontogenic keratocyst (OKC) and ameloblastoma (AM), with F1-score improvements ranging from 6.2% to 12.7%. For segmentation, the DSC was 87.2% and the ASSD was 1.359 mm. The model's average segmentation time was 40 ± 9.9 s, compared with 25 ± 7.2 min for OMSs. CONCLUSIONS: The proposed DL model accurately and efficiently classified and segmented five classes of jaw lesions using CBCT. In addition, it could assist doctors in improving classification accuracy and segmentation efficiency, particularly in distinguishing easily confused lesions (e.g., AM and OKC).
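The DSC and ASSD reported above can be reproduced from a pair of binary masks with a few lines of NumPy/SciPy; the sketch below is illustrative only (function names and the assumption of non-empty boolean masks are ours, not the authors' code).

```python
import numpy as np
from scipy import ndimage

def dice(pred, gt):
    """Dice similarity coefficient between two non-empty boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def assd(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance (mm) between two boolean masks."""
    # Surface voxels = mask minus its erosion.
    surf_p = pred ^ ndimage.binary_erosion(pred)
    surf_g = gt ^ ndimage.binary_erosion(gt)
    # Distance maps to each surface, honouring the voxel spacing.
    dt_p = ndimage.distance_transform_edt(~surf_p, sampling=spacing)
    dt_g = ndimage.distance_transform_edt(~surf_g, sampling=spacing)
    d_pg = dt_g[surf_p]   # pred-surface -> gt-surface distances
    d_gp = dt_p[surf_g]   # gt-surface -> pred-surface distances
    return (d_pg.sum() + d_gp.sum()) / (len(d_pg) + len(d_gp))
```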

2.
Int J Neural Syst ; 34(7): 2450033, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38623651

ABSTRACT

Surgical reconstruction of mandibular defects is a routine clinical procedure for the rehabilitation of patients with deformities. The mandible plays a crucial role in maintaining the facial contour and ensuring speech and mastication functions. The repair and reconstruction of mandibular defects is a significant yet challenging task in oral-maxillofacial surgery. Currently, the main available methods are traditional digital design approaches that require substantial manual operation and suffer from limited applicability and high reconstruction error rates. An automated, precise, and individualized method is imperative for maxillofacial surgeons. In this paper, we propose a Stage-wise Residual Attention Generative Adversarial Network (SRA-GAN) for mandibular defect reconstruction. Specifically, we design a stage-wise residual attention mechanism for the generator to enhance its capability of extracting remote spatial information of the mandible, making it adaptable to various defects. For the discriminator, we propose a multi-field perceptual network, consisting of two parallel discriminators with different perceptual fields, to reduce the cumulative reconstruction errors. Furthermore, we design a self-encoder perceptual loss function to ensure the correctness of mandibular anatomical structures. The experimental results on a novel custom-built mandibular defect dataset demonstrate that our method has a promising prospect for clinical application, achieving a best Dice Similarity Coefficient (DSC) of 94.238% and a 95% Hausdorff Distance (HD95) of 4.787.
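A residual attention gate of the general kind described above can be sketched in a few lines of PyTorch; this is an illustrative reconstruction under our own assumptions (a single 1×1×1 attention branch and the `x * (1 + mask)` residual form), not the authors' SRA-GAN.

```python
import torch
import torch.nn as nn

class ResidualAttention3D(nn.Module):
    """Minimal residual attention gate for 3D feature maps (illustrative)."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=1),
            nn.Sigmoid(),                       # soft attention mask in [0, 1]
        )

    def forward(self, x):
        # Residual form: features are re-weighted but never fully suppressed,
        # which keeps gradients flowing through the trunk branch.
        return x * (1.0 + self.attn(x))
```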


Subjects
Mandible; Mandibular Reconstruction; Neural Networks, Computer; Humans; Mandible/surgery; Mandibular Reconstruction/methods; Attention/physiology
3.
IEEE Trans Med Imaging ; 42(1): 317-328, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36178994

ABSTRACT

Radiographic attributes of lung nodules remedy the shortcomings of computer-assisted diagnosis systems for lung cancer by providing an interpretable diagnostic reference for doctors. However, current studies rarely address the multi-label classification of lung nodules using convolutional neural networks (CNNs) and fall short in exploiting the statistical dependency between labels. In addition, data imbalance is an unavoidable problem when employing CNNs for lung nodule classification, and it poses even greater challenges in the multi-label setting. In this paper, we propose a method called MLSL-Net to discriminate lung nodule characteristics while simultaneously addressing these challenges. In particular, the proposed method employs a multi-label softmax loss (MLSL) as the performance index, aiming to reduce ranking errors between labels and within labels during training, thereby directly optimizing the ranking loss and AUC. Such criteria better evaluate a classifier's performance on a multi-label, imbalanced dataset. Furthermore, a scale factor is introduced based on an investigation of the max surrogate function. Unlike in previous usages, a small factor is used to narrow the discrepancy between the gradients produced by different labels. Interestingly, this factor also facilitates the exploitation of label dependency. Experimental results on the LIDC-IDRI dataset as well as another similar dataset demonstrate that MLSL-Net can effectively perform multi-label classification despite the imbalance issue. Meanwhile, the results confirm that the factor helps capture label correlations, leading to more accurate predictions.
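A pairwise softmax-style surrogate for the label-ranking error, of the kind the abstract alludes to, might look as follows in PyTorch; this is a hedged reconstruction (the pair construction and the role of `scale` are our assumptions), not the exact MLSL.

```python
import torch

def multilabel_softmax_ranking_loss(scores, targets, scale=1.0):
    """
    Smooth surrogate for the label-ranking error: penalises every
    (negative, positive) label pair whose scores are in the wrong order.
    scores: (batch, n_labels) raw logits; targets: (batch, n_labels) in {0, 1}.
    """
    pos = targets.bool()
    neg = ~pos
    # diff[b, i, j] = s_i - s_j for every label pair in each sample.
    diff = scores.unsqueeze(2) - scores.unsqueeze(1)      # (B, L, L)
    pair_mask = neg.unsqueeze(2) & pos.unsqueeze(1)       # i negative, j positive
    exp_terms = torch.exp(scale * diff) * pair_mask       # zero out irrelevant pairs
    return torch.log1p(exp_terms.sum(dim=(1, 2))).mean()
```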


Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Humans; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Lung Neoplasms/diagnostic imaging; Neural Networks, Computer; Lung
4.
Comput Methods Programs Biomed ; 229: 107290, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36502546

ABSTRACT

BACKGROUND AND OBJECTIVES: There is a noticeable gap in the strength of diagnostic evidence between thick and thin scans of low-dose CT (LDCT) for pulmonary nodule detection. When thin scans are actually needed remains unknown, especially when detection is aided by an artificial intelligence system. METHODS: A case study is conducted on a set of 1,000 pulmonary nodule screening LDCT scans for which both thick (5.0 mm) and thin (1.0 mm) section scans are available. Pulmonary nodule detection is performed by human readers and by artificial intelligence models developed using 3D convolutional neural networks (CNNs). The intra-sample consistency between thick and thin scans is evaluated for both clinical doctors and the neural network (NN) models. Free-response receiver operating characteristic (FROC) analysis is used to measure the accuracy of humans and NNs. RESULTS: Trained NNs outperform humans on small nodules (< 6.0 mm), making them a good complement to human ability. For nodules > 6.0 mm, humans and NNs perform similarly, with humans holding a slight advantage. By allowing a few more false positives (FPs), a significant sensitivity improvement can be achieved with NNs. CONCLUSIONS: There is a performance gap between thick and thin scans for pulmonary nodule detection with respect to both false negatives and false positives. NNs can help reduce false negatives when nodules are small and can trade a small number of false positives for higher sensitivity. A combination of human readers and trained NNs is a promising way to achieve fast and accurate diagnosis.
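FROC analysis, as used above, reports sensitivity at fixed average false-positive rates per scan; a minimal sketch is given below (the candidate-to-lesion matching and the FP-per-scan levels are our assumptions, and each lesion is assumed to be hit by at most one true-positive candidate).

```python
import numpy as np

def froc_points(confidences, is_tp, n_scans, n_lesions,
                fp_per_scan=(0.125, 0.25, 0.5, 1, 2, 4, 8)):
    """
    Sensitivity at fixed average false-positive rates per scan.
    confidences: score of every candidate detection.
    is_tp: whether each candidate hits a true lesion (bool).
    n_lesions: total number of true lesions in the evaluation set.
    """
    is_tp = np.asarray(is_tp, dtype=bool)
    order = np.argsort(-np.asarray(confidences, dtype=float))
    tp = np.cumsum(is_tp[order])
    fp = np.cumsum(~is_tp[order])
    sens = tp / n_lesions
    avg_fp = fp / n_scans
    # Sensitivity reached at (or just below) each target FP/scan level.
    return [sens[avg_fp <= level].max() if np.any(avg_fp <= level) else 0.0
            for level in fp_per_scan]
```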


Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Humans; Artificial Intelligence; Lung Neoplasms/diagnostic imaging; Solitary Pulmonary Nodule/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed; Radiographic Image Interpretation, Computer-Assisted
5.
Polymers (Basel) ; 15(23), 2023 Dec 02.
Article in English | MEDLINE | ID: mdl-38232015

ABSTRACT

The continuous growth in global demand for energy and chemical raw materials has drawn significant attention to the development of heavy oil resources. A primary challenge in heavy oil extraction lies in reducing crude oil viscosity. Alkali-surfactant-polymer (ASP) flooding technology has emerged as an effective method for enhancing heavy oil recovery. However, the chromatographic separation of chemical agents presents a formidable obstacle in heavy oil extraction. To address this challenge, we used a free radical polymerization method, employing acrylamide, 2-acrylamido-2-methylpropane sulfonic acid, lauryl acrylate, and benzyl acrylate as raw materials. This approach led to the synthesis of a multifunctional amphiphilic polymer, PAALB, which we applied to the extraction of heavy oil. The structure of PAALB was characterized using techniques such as infrared spectroscopy and nuclear magnetic resonance spectroscopy. To assess the effectiveness of PAALB in reducing heavy oil viscosity and enhancing oil recovery, we conducted a series of tests, including contact angle measurements, interfacial tension assessments, self-emulsification experiments, critical association concentration tests, and sand-packed tube flooding experiments. The findings indicate that PAALB can promote oil-water displacement, reduce heavy oil viscosity, and improve swept volume upon injection into the formation. A solution of 5,000 mg/L PAALB reduced the contact angle of water droplets on the core surface from 106.55° to 34.95°, shifting the core surface from oil-wet to water-wet and thereby enabling oil-water displacement. Moreover, a solution of 10,000 mg/L PAALB reduced the oil-water interfacial tension to 3.32 × 10⁻⁴ mN/m, an ultra-low interfacial tension level, thereby inducing spontaneous emulsification of heavy oil within the formation. At an oil-water ratio of 7:3, a solution of 10,000 mg/L PAALB reduced the viscosity of heavy oil from 14,315 mPa·s to 201 mPa·s as measured by the glass bottle inversion method, a viscosity reduction rate of 98.60%. In sand-packed tube flooding experiments, at an injection volume of 1.5 PV, PAALB increased the recovery rate by 25.63% compared with traditional hydrolyzed polyacrylamide (HPAM) polymer. The insights derived from this research on amphiphilic polymers hold significant reference value for the development and optimization of chemical flooding strategies aimed at enhancing heavy oil recovery.
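The quoted viscosity reduction rate follows directly from the before/after viscosities; as a quick check:

```python
# Viscosity reduction rate = (mu_before - mu_after) / mu_before, using the values quoted above.
mu_before, mu_after = 14315.0, 201.0      # mPa·s
reduction = (mu_before - mu_after) / mu_before
print(f"{reduction:.2%}")                 # -> 98.60%
```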

6.
Front Biosci (Landmark Ed) ; 27(7): 212, 2022 Jul 04.
Article in English | MEDLINE | ID: mdl-35866406

ABSTRACT

BACKGROUND: Existing challenges of lung cancer screening include limited access to computed tomography (CT) scanners and inter-reader variability, especially in resource-limited areas. The combination of mobile CT and deep learning techniques has inspired innovations in routine clinical practice. METHODS: This study prospectively recruited participants at two rural sites in western China. A deep learning system was developed to assist clinicians in identifying nodules and evaluating malignancy, with state-of-the-art performance assessed by recall, the free-response receiver operating characteristic curve (FROC), accuracy (ACC), and area under the receiver operating characteristic curve (AUC). RESULTS: This study enrolled 12,360 participants scanned by a mobile CT vehicle and detected pulmonary nodules in 9,511 (76.95%) of them. The majority of participants were female (8,169, 66.09%) and never-smokers (9,784, 79.16%). After 1-year follow-up, 86 patients were diagnosed with lung cancer, 80 (93.03%) with adenocarcinoma and 73 (84.88%) at stage I. The deep learning system detected nodules (recall of 0.9507; FROC of 0.6470) and stratified risk (ACC of 0.8696; macro-AUC of 0.8516) automatically. CONCLUSIONS: A novel model for lung cancer screening, integrating mobile CT with deep learning, was proposed. It enabled specialists to increase the accuracy and consistency of the workflow and has the potential to assist clinicians in detecting early-stage lung cancer effectively.
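Accuracy and macro-AUC for a multi-class risk head such as the one above can be computed with scikit-learn; the toy scores below are hypothetical and serve only to show the call pattern.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

# Hypothetical per-nodule outputs: softmax scores over three risk classes
# and the class assigned at follow-up (illustrative values only).
y_true = np.array([0, 2, 1, 0, 1, 2])
probs = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.2, 0.7],
                  [0.2, 0.6, 0.2],
                  [0.6, 0.3, 0.1],
                  [0.3, 0.5, 0.2],
                  [0.2, 0.2, 0.6]])

acc = accuracy_score(y_true, probs.argmax(axis=1))
macro_auc = roc_auc_score(y_true, probs, multi_class="ovr", average="macro")
print(acc, macro_auc)
```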


Subjects
Deep Learning; Lung Neoplasms; Multiple Pulmonary Nodules; Early Detection of Cancer/methods; Female; Humans; Lung Neoplasms/pathology; Male; Multiple Pulmonary Nodules/pathology; Retrospective Studies; Tomography, X-Ray Computed/methods
7.
Front Oncol ; 12: 683792, 2022.
Article in English | MEDLINE | ID: mdl-35646699

ABSTRACT

Objectives: Distinguishing malignant pulmonary nodules from benign ones on computed tomography (CT) images can be time-consuming but is significant in routine clinical management. The advent of artificial intelligence (AI) has provided an opportunity to improve the accuracy of cancer risk prediction. Methods: A total of 8,950 detected pulmonary nodules with complete pathological results were retrospectively enrolled. The different radiological manifestations were identified mainly as various nodule densities and morphological features. These nodules were then classified into benign and malignant groups, both of which were subdivided into finer specific pathological types. We propose a deep convolutional neural network for the assessment of lung nodules, named DeepLN, to identify the radiological features and predict the pathologic subtypes of pulmonary nodules. Results: In terms of density, the areas under the receiver operating characteristic curves (AUCs) of DeepLN were 0.9707 (95% confidence interval, CI: 0.9645-0.9765), 0.7789 (95% CI: 0.7569-0.7995), and 0.8950 (95% CI: 0.8822-0.9088) for pure ground-glass opacity (pGGO), mixed ground-glass opacity (mGGO), and solid nodules, respectively. As for the morphological features, the AUCs were 0.8347 (95% CI: 0.8193-0.8499) and 0.9074 (95% CI: 0.8834-0.9314) for spiculation and lung cavity, respectively. For the identification of malignant nodules, the DeepLN algorithm achieved an AUC of 0.8503 (95% CI: 0.8319-0.8681) on the test set. For predicting the pathological subtypes on the test set, the multi-task AUCs were 0.8841 (95% CI: 0.8567-0.9083) for benign tumors, 0.8265 (95% CI: 0.8004-0.8499) for inflammation, and 0.8022 (95% CI: 0.7616-0.8445) for other benign lesions, while in the malignant group the AUCs were 0.8675 (95% CI: 0.8525-0.8813) for lung adenocarcinoma (LUAD), 0.8792 (95% CI: 0.8640-0.8950) for lung squamous cell carcinoma (LUSC), and 0.7404 (95% CI: 0.7031-0.7782) for other malignant lesions. Conclusions: DeepLN, based on a deep learning algorithm, showed competitive performance in predicting imaging characteristics, malignancy, and pathologic subtypes from non-invasive CT images, and thus has great potential to be utilized in the routine clinical workflow.
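The per-task AUCs above are reported with 95% confidence intervals; one common way to obtain such intervals is bootstrap resampling, sketched below (the resampling scheme is a generic choice on our part, not necessarily the authors').

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Point AUC plus a bootstrap percentile confidence interval (sketch)."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:   # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), (lo, hi)
```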

8.
Am J Orthod Dentofacial Orthop ; 161(3): e250-e259, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34802868

ABSTRACT

INTRODUCTION: Cephalometry plays an important role in the diagnosis and treatment planning of orthodontics and orthognathic surgery. This study aimed to develop an automatic landmark location system to make cephalometry more convenient. METHODS: In this study, 512 lateral cephalograms were collected, and 37 landmarks were included. The coordinates of all landmarks in the 512 films were obtained to establish a labeled dataset: 312 films were used as the training set, 100 as the validation set, and 100 as the testing set. An automatic landmark location system based on a convolutional neural network was developed. The system consisted of a global detection module and a local modification module. The lateral cephalogram was first fed into the global module to obtain an initial estimate of each landmark's position, which was then adjusted by the local modification module to improve accuracy. Mean radial error (MRE) and the success detection rate (SDR) within 1-4 mm were used to evaluate the method. RESULTS: The MRE on our validation set was 1.127 ± 1.028 mm, and the SDRs within 1.0, 1.5, 2.0, 2.5, 3.0, and 4.0 mm were 45.95%, 89.19%, 97.30%, 97.30%, and 97.30%, respectively. The MRE on our testing set was 1.038 ± 0.893 mm, and the SDRs within 1.0, 1.5, 2.0, 2.5, 3.0, and 4.0 mm were 54.05%, 91.89%, 97.30%, 100%, 100%, and 100%, respectively. CONCLUSIONS: In this study, we proposed a new automatic landmark location system based on a convolutional neural network. The system could detect 37 landmarks with high accuracy. All landmarks are commonly used in clinical practice and could meet the requirements of different cephalometric analysis methods.
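MRE and SDR for a set of predicted landmarks reduce to a few lines of NumPy; the sketch below assumes predictions and ground truth are already expressed in millimetres.

```python
import numpy as np

def mre_and_sdr(pred_pts, gt_pts, thresholds_mm=(1.0, 1.5, 2.0, 2.5, 3.0, 4.0)):
    """
    Mean radial error and success detection rate for landmark sets.
    pred_pts, gt_pts: arrays of shape (n_landmarks, 2) in millimetres.
    """
    radial = np.linalg.norm(pred_pts - gt_pts, axis=1)   # per-landmark radial error
    mre = radial.mean()
    sdr = {t: (radial <= t).mean() for t in thresholds_mm}
    return mre, sdr
```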


Subjects
Neural Networks, Computer; Orthodontics; Anatomic Landmarks/diagnostic imaging; Cephalometry/methods; Humans; Radiography; Reproducibility of Results
9.
ACS Omega ; 6(43): 28587-28597, 2021 Nov 02.
Article in English | MEDLINE | ID: mdl-34746554

ABSTRACT

To reduce the cost of synthetic organic corrosion inhibitors in corrosion protection, dye wastewater exhibiting a synergistic effect is used together with organic corrosion inhibitors to reduce the amount of the high-cost molecules required. The corrosion inhibition effects of the cationic dye methylene blue (MB) and the anionic dye methyl orange (MO) are tested using electrochemical methods, weight-loss tests, and other techniques. MB exhibits better performance on the tested steel, with an anticorrosion efficiency reaching 75.40%, and is therefore chosen as an additive for organic corrosion inhibitors. The organic inhibitor decamethylene bis-pyridinium dibromide (DBP) is then selected for compounding with MB, and the corrosion inhibition effect at different ratios is tested. The compound inhibitor performs comparably to the pristine inhibitor at a ratio of MB/DBP = 6:4. In addition to the experiments, theoretical calculations also confirm that the addition of dye molecules can inhibit corrosion. This research not only provides a way to reuse dye wastewater but also proposes measures to reduce the cost of organic corrosion inhibitors, while offering new ideas for environmental protection and metal protection.
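The anticorrosion efficiency quoted above is conventionally derived from weight-loss (or corrosion-rate) measurements; the formula below is the standard definition, and the numbers are purely illustrative rather than data from this study.

```python
# Standard weight-loss inhibition efficiency: eta = (v0 - v) / v0,
# where v0 and v are corrosion rates without and with the inhibitor.
v0, v = 1.000, 0.246          # illustrative corrosion rates, g·m^-2·h^-1
eta = (v0 - v) / v0
print(f"{eta:.2%}")           # -> 75.40%, the same order as the MB result above
```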

10.
Int J Comput Assist Radiol Surg ; 16(6): 895-904, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33846890

ABSTRACT

PURPOSE: Robust and automatic segmentation of the pulmonary lobes is vital to surgical planning and regional image analysis of pulmonary diseases in real-time computer-aided diagnosis systems. While a number of studies have examined this issue, segmenting the unclear borders of the five lung lobes remains challenging because of incomplete fissures, the diversity of pulmonary anatomy, and obstructive lesions caused by pulmonary diseases. This study proposes a model called the Regularized Pulmonary Lobe Segmentation Network to accurately predict the lobes as well as their borders. METHODS: First, a 3D fully convolutional network is constructed to extract contextual features from computed tomography images. Second, multi-task learning is employed to learn the segmentations of the lobes and of the borders between them, training the neural network to better predict the borders via shared representations. Third, a 3D depth-wise separable de-convolution block is proposed for deep supervision to train the network efficiently. We also propose a hybrid loss function that combines cross-entropy loss with focal loss using adaptive parameters to focus on the tissues and the borders of the lobes. RESULTS: Experiments were conducted on a dataset annotated by experienced clinical radiologists. A 4-fold cross-validation result demonstrates that the proposed approach achieves a mean Dice coefficient of 0.9421 and an average symmetric surface distance of 1.3546 mm, which is comparable to state-of-the-art methods. The proposed approach can accurately segment voxels that are near the lung wall and the fissures. CONCLUSION: In this paper, a 3D fully convolutional network framework is proposed to segment pulmonary lobes in chest CT images accurately. Experimental results show the effectiveness of the proposed approach in segmenting the tissues as well as the borders of the lobes.
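A hybrid of cross-entropy and focal loss, as mentioned above, can be sketched as follows in PyTorch; the fixed weighting `alpha` and focusing parameter `gamma` are placeholders for the adaptive parameters the paper describes.

```python
import torch
import torch.nn.functional as F

def hybrid_ce_focal_loss(logits, target, alpha=0.5, gamma=2.0):
    """
    Hybrid of cross-entropy and focal loss (illustrative fixed weights).
    logits: (N, C, ...) raw scores; target: (N, ...) integer class labels.
    """
    ce = F.cross_entropy(logits, target, reduction="none")
    pt = torch.exp(-ce)                       # probability of the true class
    focal = ((1.0 - pt) ** gamma) * ce        # down-weights easy voxels
    return (alpha * ce + (1.0 - alpha) * focal).mean()
```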


Subjects
Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Lung Diseases/diagnosis; Lung/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Humans
11.
Int J Comput Assist Radiol Surg ; 16(2): 219-230, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33464450

ABSTRACT

PURPOSE: Airway tree segmentation plays a pivotal role in chest computed tomography (CT) analysis tasks such as lesion localization, surgical planning, and intra-operative guidance. The remaining challenge is to identify small bronchi correctly, which facilitates further segmentation of the pulmonary anatomy. METHODS: A three-dimensional (3D) multi-scale feature aggregation network (MFA-Net) is proposed to address the scale differences of substructures in airway tree segmentation. In this model, the multi-scale feature aggregation (MFA) block is used to capture multi-scale context information, which improves the sensitivity of small bronchi segmentation and addresses local discontinuities. Meanwhile, the concept of airway tree partitioning is introduced to evaluate segmentation performance at a more granular level. RESULTS: Experiments were conducted on a dataset of 250 CT scans annotated by experienced clinical radiologists. Through the airway partition, we evaluated the segmentation results for the small bronchi against state-of-the-art methods. Experiments show that MFA-Net achieves the best Dice similarity coefficient (DSC) in the intra-lobar airway and improves the true positive rate (TPR) by 7.59% on average. Moreover, in the entire airway, the proposed method achieves the best DSC and TPR scores of 86.18% and 79.31%, respectively, at the cost of more false positives. CONCLUSION: MFA-Net is competitive with state-of-the-art methods. The experimental results indicate that the MFA block improves the performance of the network by utilizing multi-scale context information. More accurate segmentation results will be more helpful in further clinical analysis.
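One common way to aggregate multi-scale context, in the spirit of the MFA block described above, is to run parallel dilated 3D convolutions and fuse them with a 1×1×1 convolution; the PyTorch sketch below is an illustrative reconstruction, not the authors' exact block.

```python
import torch
import torch.nn as nn

class MFABlock3D(nn.Module):
    """Parallel dilated 3D convolutions fused by a 1x1x1 convolution (illustrative)."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv3d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        # Each branch sees a different receptive field; concatenation keeps
        # fine detail for small bronchi alongside wider context.
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))
```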


Subjects
Bronchi/diagnostic imaging; Lung/diagnostic imaging; Thorax/diagnostic imaging; Humans; Tomography, X-Ray Computed/methods
12.
J Oncol ; 2021: 5499385, 2021.
Article in English | MEDLINE | ID: mdl-35003258

ABSTRACT

OBJECTIVE: The detection of epidermal growth factor receptor (EGFR) mutation and programmed death ligand-1 (PD-L1) expression status is crucial for determining treatment strategies for patients with non-small-cell lung cancer (NSCLC). Recently, the rapid development of radiomics, including but not limited to deep learning techniques, has indicated the potential role of medical images in the diagnosis and treatment of diseases. METHODS: Eligible patients diagnosed or treated at the West China Hospital of Sichuan University from January 2013 to April 2019 were identified retrospectively. Preoperative CT images were obtained, as well as the gene status regarding EGFR mutation and PD-L1 expression. The tumor region of interest (ROI) was delineated manually by experienced respiratory specialists. We used a 3D convolutional neural network (CNN) with the ROI information as input to construct a classification model and established a prognostic model combining deep learning features and clinical features to stratify the survival risk of lung cancer patients. RESULTS: The whole cohort (N = 1262) was divided into a training set (N = 882, 70%), a validation set (N = 125, 10%), and a test set (N = 255, 20%). The 3D CNN prediction model achieved AUCs of 0.96 (95% CI: 0.94-0.98), 0.80 (95% CI: 0.72-0.88), and 0.73 (95% CI: 0.63-0.83) in the training, validation, and test cohorts, respectively. The combined prognostic model showed good performance on survival prediction in NSCLC patients (C-index: 0.71). CONCLUSION: In this study, a noninvasive and effective model was proposed to predict EGFR mutation and PD-L1 expression status as a clinical decision support tool. Additionally, the combination of deep learning features with clinical features demonstrated great stratification capability in the prognostic model. We will continue to explore the application of imaging markers for treatment selection in lung cancer patients.
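The C-index used to summarise the prognostic model above is Harrell's concordance index; a direct (O(n²)) sketch is shown below, with the usual handling of comparable pairs and tied risk scores.

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """
    Harrell's C-index: the fraction of comparable patient pairs whose predicted
    risks are ordered consistently with their observed survival times.
    events: 1 if the event (death) was observed, 0 if censored.
    """
    n_conc, n_comp = 0.0, 0.0
    for i in range(len(times)):
        for j in range(len(times)):
            # Pair (i, j) is comparable if patient i had the event before time_j.
            if events[i] == 1 and times[i] < times[j]:
                n_comp += 1
                if risk_scores[i] > risk_scores[j]:
                    n_conc += 1
                elif risk_scores[i] == risk_scores[j]:
                    n_conc += 0.5
    return n_conc / n_comp
```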

13.
Ann Transl Med ; 8(18): 1126, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33240975

ABSTRACT

BACKGROUND: Lung cancer causes more deaths worldwide than any other cancer. For early-stage patients, low-dose computed tomography (LDCT) of the chest is considered an effective screening measure for reducing the risk of mortality. The accuracy and efficiency of cancer screening would be enhanced by an intelligent and automated system that meets or surpasses the diagnostic capabilities of human experts. METHODS: Based on artificial intelligence (AI) techniques, specifically deep neural networks (DNNs), we designed a framework for lung cancer screening. First, a semi-automated annotation strategy was used to label the images for training. Then, DNN-based models for the detection of lung nodules (LNs) and benign-versus-malignant classification were proposed to identify lung cancer from LDCT images. Finally, the constructed DNN-based LN detection and identification system, named DeepLN, was validated using a large-scale dataset. RESULTS: A dataset of multi-resolution LDCT images was constructed, annotated by a multidisciplinary group, and used to train and evaluate the proposed models. The sensitivity of LN detection was 96.5% and 89.6% in a thin-section subset [free-response receiver operating characteristic (FROC) score of 0.716] and a thick-section subset (FROC score of 0.699), respectively. With an accuracy of 92.46% ± 0.20%, a specificity of 95.93% ± 0.47%, and a precision of 90.46% ± 0.93%, the ensemble benign-versus-malignant classification demonstrated very good performance. Three retrospective clinical comparisons of the DeepLN system with human experts showed a high detection accuracy of 99.02%. CONCLUSIONS: In this study, we presented an AI-based system with the potential to improve the performance and work efficiency of radiologists in lung cancer screening. The effectiveness of the proposed system was verified through retrospective clinical evaluation. The future application of this system is expected to benefit patients and society.
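For ensemble classification results of the kind reported above, a straightforward approach is to average the member models' probabilities and then derive sensitivity, specificity, and precision from the thresholded predictions; the sketch below is generic, not the authors' pipeline.

```python
import numpy as np

def ensemble_and_score(prob_list, y_true, threshold=0.5):
    """
    Average the malignancy probabilities of several models (a simple ensemble),
    then report sensitivity, specificity, and precision. Illustrative only.
    prob_list: list of (n,) probability arrays; y_true: (n,) binary labels.
    """
    probs = np.mean(prob_list, axis=0)
    y_pred = (probs >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    return sensitivity, specificity, precision
```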

14.
ACS Omega ; 5(34): 21420-21427, 2020 Sep 01.
Article in English | MEDLINE | ID: mdl-32905364

ABSTRACT

The colloidal instability index (CII) is a classical method that has been widely used to test the stability of asphaltenes. In this study, five oil samples were tested by the CII method; the results obtained differ greatly from the real field results. In our investigation, we combined the Turbiscan LAB stability analyzer with saturate, aromatic, resin, and asphaltene (SARA) analysis to further investigate asphaltene stability by heptane titration. The results revealed that there exists a threshold volume ratio before the asphaltenes destabilize. The stability of crude oil is related to the saturation solubility of asphaltenes. By testing the CII value of the crude oil in its current state and the CII value of the dissolved asphaltenes in their saturated state, we were able to propose a new way to judge oil stability.
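For reference, the classical CII is computed from the SARA fractions as (saturates + asphaltenes) / (aromatics + resins); the composition below is illustrative, not one of the five oils in the study.

```python
def colloidal_instability_index(saturates, aromatics, resins, asphaltenes):
    """
    Classical CII from SARA fractions (weight %):
    CII = (saturates + asphaltenes) / (aromatics + resins).
    Values well above ~0.9 are usually read as unstable asphaltenes.
    """
    return (saturates + asphaltenes) / (aromatics + resins)

# Illustrative SARA composition only.
print(colloidal_instability_index(45.0, 30.0, 18.0, 7.0))   # ~1.08
```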

15.
Med Image Anal ; 65: 101772, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32674041

ABSTRACT

The accurate identification of malignant lung nodules in computed tomography (CT) screening images is vital for the early detection of lung cancer. It also offers patients the best chance of cure, because non-invasive CT imaging can capture intra-tumoral heterogeneity. Deep learning methods have obtained promising results for the malignancy identification problem; however, two substantial challenges remain. First, small datasets are insufficient for training the model and tend to cause overfitting. Second, category imbalance in the data is a problem. In this paper, we propose a method called MSCS-DeepLN that evaluates lung nodule malignancy while simultaneously addressing these two problems. Three light models are trained and combined to evaluate the malignancy of a lung nodule. Three-dimensional convolutional neural networks (CNNs) are employed as the backbone of each light model to extract lung nodule features from CT images and preserve the spatial heterogeneity of the nodules. Multi-scale inputs cropped from the CT images enable the sub-networks to learn multi-level contextual features and preserve diversity. To tackle the imbalance problem, our proposed method employs an AUC approximation as the penalty term. During training, the error in this penalty term is generated from each majority-minority class pair, so that negatives and positives contribute equally to updating the model. Based on these methods, we obtain state-of-the-art results on the LIDC-IDRI dataset. Furthermore, we constructed a new dataset, collected from a grade-A tertiary hospital and annotated using biopsy-based cytological analysis, to verify the performance of our method in clinical practice.
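An AUC-approximation penalty of the general form described above can be written as a pairwise hinge surrogate; the margin `gamma` and exponent `p` below are illustrative choices on our part, not the paper's exact formulation.

```python
import torch

def auc_surrogate_penalty(scores, labels, gamma=0.3, p=2):
    """
    Pairwise AUC approximation: every (positive, negative) pair whose score
    margin is below `gamma` contributes a polynomial penalty, so both classes
    drive the update equally regardless of how imbalanced they are.
    scores: (N,) predicted malignancy scores; labels: (N,) in {0, 1}.
    """
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos.unsqueeze(1) - neg.unsqueeze(0)        # (n_pos, n_neg) score margins
    return torch.clamp(gamma - diff, min=0).pow(p).mean()
```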


Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Humans; Lung; Lung Neoplasms/diagnostic imaging; Neural Networks, Computer; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed
16.
Colloids Surf B Biointerfaces ; 194: 111150, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32559603

ABSTRACT

Two chitosan derivatives were synthesized for the first time as green corrosion inhibitors against the carbon dioxide corrosion of P110 steel. The structures of the synthesized products were characterized by infrared spectroscopy. Electrochemical and weight-loss experiments were used to test the effect of the corrosion inhibitors, while SEM-EDS, AFM, and other analysis methods were used to study their protection mechanism. The experimental results show that the synthetic corrosion inhibitors CHC and CAHC are both good inhibitors of carbon dioxide corrosion. Both chitosan derivatives can form hydrophobic protective films on the metal surface. In terms of inhibition performance, CAHC is better than CHC, a conclusion supported by both practical experiments and quantum chemical calculations. This investigation into chitosan inhibitors opens up a new area of research on environmentally friendly corrosion inhibitors, which is of great significance for metal protection without toxicity or side effects.


Subjects
Chitosan; Steel; Carbon Dioxide; Corrosion; Surface Properties
17.
Med Image Anal ; 61: 101666, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32062155

ABSTRACT

Automatic segmentation of organs at risk is crucial for aiding diagnosis and remains a challenging task in the medical image analysis domain. To perform the segmentation, we use multi-task learning (MTL) to accurately determine the contours of organs at risk in CT images. We train an encoder-decoder network on two tasks in parallel. The main task is the segmentation of organs, entailing pixel-level classification of the CT images, and the auxiliary task is the multi-label classification of organs, entailing image-level multi-label classification of the CT images. To boost the performance of the multi-label classification, we propose a weighted mean cross-entropy loss function for network training, where the weights are the global conditional probabilities between pairs of organs. Based on MTL, we optimize the false positive filtering (FPF) algorithm to decrease the number of falsely segmented organ pixels in the CT images. Specifically, we propose a dynamic threshold selection (DTS) strategy to prevent true positive rates from decreasing when the FPF algorithm is used. We validate these methods on the public ISBI 2019 Segmentation of Thoracic Organs at Risk (SegTHOR) challenge dataset and a private medical organ dataset. The experimental results show that networks using our proposed methods outperform basic encoder-decoder networks without increasing the training time complexity.
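A minimal sketch of a label-weighted cross entropy for the auxiliary multi-label head is shown below; how the per-label weights are derived from the pairwise conditional probabilities follows the paper and is not reproduced here — `weights` is simply taken as an input.

```python
import torch
import torch.nn.functional as F

def weighted_multilabel_ce(logits, targets, weights):
    """
    Weighted mean cross entropy for an image-level multi-label head.
    logits, targets: (N, n_organs), targets as floats in {0, 1};
    weights: (n_organs,) per-label weights supplied by the caller.
    """
    per_label = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (per_label * weights).mean()
```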


Subjects
Neural Networks, Computer; Organs at Risk; Radiotherapy; Thorax/diagnostic imaging; Tomography, X-Ray Computed; Humans
18.
IEEE J Biomed Health Inform ; 24(6): 1762-1771, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31670685

ABSTRACT

Lung cancer postoperative complication prediction (PCP) is important for decreasing the perioperative mortality rate after lung cancer surgery. In this paper, we concentrate on two PCP tasks: (1) a binary classification for predicting whether a patient will have postoperative complications; and (2) a three-class multi-label classification for predicting which postoperative complications a patient will experience. Furthermore, an important clinical requirement of PCP is the extraction of crucial variables from electronic medical records. We propose a novel multi-layer perceptron (MLP) model called medical MLP (MediMLP), used together with the gradient-weighted class activation mapping (Grad-CAM) algorithm, for lung cancer PCP. The proposed MediMLP, which involves one locally connected layer and fully connected layers with a shortcut connection, simultaneously extracts crucial variables and performs the PCP tasks. The experimental results indicated that MediMLP outperformed a standard MLP on the two PCP tasks and had performance comparable to existing feature selection methods. Using MediMLP and further experimental analysis, we found that the variable "time of indwelling drainage tube" was highly relevant to lung cancer postoperative complications.
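For tabular inputs, gradient-based attribution in the spirit of Grad-CAM can be sketched as gradient × input saliency on a small MLP; this is an illustrative analogue only, not MediMLP or the authors' exact Grad-CAM adaptation, and the layer sizes are placeholders.

```python
import torch
import torch.nn as nn

# A small stand-in MLP: 20 record variables -> 1 complication logit (illustrative).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                      nn.Linear(64, 32), nn.ReLU(),
                      nn.Linear(32, 1))
x = torch.randn(8, 20, requires_grad=True)              # 8 patients, 20 variables

model(x).sum().backward()                               # gradients w.r.t. the inputs
importance = (x.grad * x).abs().mean(dim=0)             # per-variable relevance score
top_vars = importance.argsort(descending=True)[:5]      # rank the most relevant variables
print(top_vars)
```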


Subjects
Lung Neoplasms/surgery; Neural Networks, Computer; Postoperative Complications/diagnosis; Female; Humans; Male; Medical Informatics Applications; Models, Statistical; Postoperative Complications/prevention & control
19.
Front Oncol ; 10: 588990, 2020.
Article in English | MEDLINE | ID: mdl-33552965

ABSTRACT

Survival analysis is important for guiding further treatment and improving lung cancer prognosis. It is a challenging task because of the poor distinguishability of features and the missing values encountered in practice. A novel multi-task neural network, SurvNet, is proposed in this paper. The proposed SurvNet model is trained in a multi-task learning framework to jointly learn across three related tasks: input reconstruction, survival classification, and Cox regression. It uses an input reconstruction mechanism, cooperating with an incompleteness-aware reconstruction loss, for latent feature learning from incomplete data with missing values. In addition, the SurvNet model introduces a context gating mechanism to bridge the gap between survival classification and Cox regression. A new real-world dataset of 1,137 patients with stage IB-IIA non-small cell lung cancer was collected to evaluate the performance of the SurvNet model. The proposed SurvNet achieves a higher concordance index than the traditional Cox model and Cox-Net. The difference between the high-risk and low-risk groups obtained by SurvNet is more significant than that obtained by the other models. Moreover, SurvNet outperforms the other models even when the input data are randomly cropped, and it achieves better generalization performance on the Surveillance, Epidemiology, and End Results (SEER) Program dataset.
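The Cox regression head above is typically trained with the negative log partial likelihood; a compact PyTorch sketch is given below (Breslow-style handling of ties, with our own variable names rather than the authors' implementation).

```python
import torch

def cox_ph_loss(risk, time, event):
    """
    Negative log partial likelihood for a Cox regression head.
    risk: (N,) predicted log-risk; time: (N,) follow-up time; event: (N,) 1 = death, 0 = censored.
    """
    order = torch.argsort(time, descending=True)          # risk sets become prefixes
    risk = risk[order]
    event = event[order].float()
    log_cumsum = torch.logcumsumexp(risk, dim=0)          # log sum of exp(risk) over each risk set
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)
```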

20.
Precis Clin Med ; 3(3): 214-227, 2020 Sep.
Article in English | MEDLINE | ID: mdl-35694416

ABSTRACT

Lung cancer is one of the leading causes of death throughout the world, and there is an urgent need for its precision medical management. Artificial intelligence (AI), comprising numerous advanced techniques, has been widely applied in the field of medical care. Meanwhile, radiomics based on traditional machine learning also performs well in mining information from medical images. With the integration of AI and radiomics, great progress has been made in the early diagnosis, specific characterization, and prognosis of lung cancer, which has attracted attention all over the world. In this study, we give a brief review of the current applications of AI and radiomics for precision medical management in lung cancer.
