Results 1 - 11 of 11
1.
PeerJ Comput Sci; 10: e1945, 2024.
Article in English | MEDLINE | ID: mdl-38660171

ABSTRACT

Field-road classification, which automatically identifies in-field and out-of-field activities in global navigation satellite system (GNSS) recordings, is an important step in the performance evaluation of agricultural machinery. Although several field-road classification methods based only on GNSS recordings have been proposed, such methods face a trade-off between time consumption and accuracy. To obtain an optimal balance, it is important to choose a suitable field-road classification method for each trajectory based on its GNSS trajectory quality. In this article, a trajectory classification task was proposed that classifies the quality of GNSS trajectories into three categories (high-quality, medium-quality, or low-quality). A trajectory classification (TC) model was then developed to automatically assign a quality category to each input trajectory, utilizing global and local features specific to agricultural machinery. Finally, a novel field-road classification method was proposed in which the field-road classification method is selected according to the trajectory quality category predicted by the TC model. Comprehensive experiments show that the proposed trajectory classification method achieved 86.84% accuracy, consistently outperforming current trajectory classification methods by about 2.6%, and that the proposed field-road classification method achieves a balance between efficiency and effectiveness, i.e., sufficient efficiency with a tolerable accuracy loss. This is the first attempt to examine the balance between efficiency and effectiveness in existing field-road classification methods and to propose a trajectory classification specific to these methods.
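The selection strategy described above, routing each trajectory to a classification method according to its predicted quality, can be sketched as follows. The method names, quality labels, and routing table are hypothetical illustrations, not the authors' actual API:

```python
def select_method(quality):
    """Map a predicted trajectory-quality label to a field-road
    classification method (hypothetical routing table)."""
    routing = {
        "high": "fast_rule_based",     # clean tracks: a cheap method suffices
        "medium": "machine_learning",  # moderate noise: mid-cost model
        "low": "deep_learning",        # noisy tracks: accurate but slow model
    }
    return routing[quality]

def classify_trajectory(points, quality_model):
    """Run the TC model on a list of GNSS points, then pick a method.
    A real pipeline would dispatch to the chosen classifier; here we
    just return its name."""
    quality = quality_model(points)  # returns "high", "medium", or "low"
    return select_method(quality)
```

The point of the dispatch is that most trajectories can take the cheap path, with the expensive model reserved for low-quality recordings.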

2.
Respir Res; 24(1): 299, 2023 Nov 28.
Article in English | MEDLINE | ID: mdl-38017476

ABSTRACT

OBJECTIVES: Parametric response mapping (PRM) enables the evaluation of small airway disease (SAD) at the voxel level but requires both inspiratory and expiratory chest CT scans. We hypothesized that deep learning can derive PRM from inspiratory chest CT scans alone and effectively evaluate SAD in individuals with normal spirometry. METHODS: We included 537 participants with normal spirometry and a history of smoking or secondhand smoke exposure, and divided them into training, tuning, and test sets. A cascaded generative adversarial network generated expiratory CT from inspiratory CT, followed by a UNet-like network predicting PRM using the real inspiratory CT and the generated expiratory CT. Prediction performance was evaluated using SSIM, RMSE, and Dice coefficients. Pearson correlation was used to assess agreement between predicted and ground truth PRM. ROC curves evaluated the performance of predicted PRMfSAD (the volume percentage of functional small airway disease, fSAD) in stratifying SAD. RESULTS: Our method generated expiratory CT of good quality (SSIM 0.86, RMSE 80.13 HU). The predicted PRM Dice coefficients for normal lung, emphysema, and fSAD regions were 0.85, 0.63, and 0.51, respectively. The volume percentages of emphysema and fSAD showed good correlation between predicted and ground truth PRM (|r| = 0.97 and 0.64, respectively, p < 0.05). Predicted PRMfSAD showed good SAD stratification performance against ground truth PRMfSAD at thresholds of 15%, 20%, and 25% (AUCs of 0.84, 0.78, and 0.84, respectively, p < 0.001). CONCLUSION: Our deep learning method generates high-quality PRM from inspiratory chest CT and effectively stratifies SAD in individuals with normal spirometry.
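Two of the evaluation metrics used above, the Dice coefficient for region overlap and RMSE for intensity error (in HU), are simple to state. A minimal sketch on flattened masks and intensity lists, not the authors' implementation:

```python
import math

def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks given as flat 0/1 lists:
    2 * |intersection| / (|pred| + |truth|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # both empty: define as 1

def rmse(a, b):
    """Root-mean-square error between two intensity lists (e.g. HU values)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))
```

In practice these run over full 3D volumes (e.g. NumPy arrays), but the definitions are the same.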


Subjects
Asthma , Deep Learning , Emphysema , Pulmonary Disease, Chronic Obstructive , Pulmonary Emphysema , Humans , Tomography, X-Ray Computed/methods , Lung/diagnostic imaging
3.
IEEE Trans Med Imaging; PP, 2023 Sep 11.
Article in English | MEDLINE | ID: mdl-37695967

ABSTRACT

Automatic rib labeling and anatomical centerline extraction are common prerequisites for various clinical applications. Prior studies either use in-house datasets that are inaccessible to the community or focus on rib segmentation, neglecting the clinical significance of rib labeling. To address these issues, we extend our prior dataset (RibSeg) for the binary rib segmentation task to a comprehensive benchmark, named RibSeg v2, with 660 CT scans (15,466 individual ribs in total) and annotations manually inspected by experts for rib labeling and anatomical centerline extraction. Based on RibSeg v2, we develop a pipeline including deep learning-based methods for rib labeling and a skeletonization-based method for centerline extraction. To improve computational efficiency, we propose a sparse point cloud representation of CT scans and compare it with standard dense voxel grids. Moreover, we design and analyze evaluation metrics to address the key challenges of each task. Our dataset, code, and model are available online to facilitate open research at https://github.com/M3DV/RibSeg.
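The sparse point cloud idea mentioned above amounts to keeping only the occupied voxels of a dense grid as coordinate tuples. A toy sketch on nested lists, assuming a simple intensity threshold (the actual pipeline operates on full CT volumes with its own preprocessing):

```python
def voxels_to_point_cloud(grid, threshold=0):
    """Convert a dense Z x Y x X voxel grid (nested lists) into a sparse
    list of (x, y, z, value) points, keeping only voxels above threshold.
    For mostly empty volumes such as rib masks, this representation is
    far smaller than the dense grid."""
    points = []
    for z, slab in enumerate(grid):
        for y, row in enumerate(slab):
            for x, value in enumerate(row):
                if value > threshold:
                    points.append((x, y, z, value))
    return points
```

For a CT volume where ribs occupy a small fraction of voxels, downstream models can then consume the point list directly instead of the full grid.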

4.
Front Med (Lausanne); 10: 1145846, 2023.
Article in English | MEDLINE | ID: mdl-37275359

ABSTRACT

In the clinic, it is difficult to distinguish the malignancy and aggressiveness of solid pulmonary nodules (PNs). Incorrect assessments may lead to delayed diagnosis and an increased risk of complications. We developed and validated a deep learning-based model for predicting malignancy as well as local or distant metastasis in solid PNs based on CT images of primary lesions at initial diagnosis. In this study, we reviewed data from patients with solid PNs at our institution from 1 January 2019 to 30 April 2022. The patients were divided into three groups: benign, stage-Ia lung cancer, and T1-stage lung cancer with metastasis. Each cohort was further split into training and testing groups. The deep learning system predicted the malignancy and metastasis status of solid PNs based on CT images, and we then compared the malignancy prediction results across four different levels of clinicians. Experiments confirmed that human-computer collaboration can further enhance diagnostic accuracy. Of 689 total cases, 134 were held out as a testing set. Our convolutional neural network model reached an area under the ROC curve (AUC) of 80.37% for malignancy prediction and an AUC of 86.44% for metastasis prediction. In observer studies involving four clinicians, the proposed deep learning method outperformed a junior respiratory clinician and a 5-year respiratory clinician by considerable margins; it was on par with a senior respiratory clinician and only slightly inferior to a senior radiologist. Our human-computer collaboration experiment showed that simply adding a binary human diagnosis into the model prediction probabilities improved model AUC scores to 81.80-88.70% when combined with three of the four clinicians.
In summary, the deep learning method can accurately diagnose the malignancy of solid PNs, improve its performance when collaborating with human experts, predict local or distant metastasis in patients with T1-stage lung cancer, and facilitate the application of precision medicine.
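The collaboration step above, blending a binary human call into a model probability and re-scoring by AUC, can be sketched as follows. The 0.5 blending weight is an assumption for illustration; the abstract only says binary human diagnoses were added to the model's prediction probabilities:

```python
def combine_with_human(model_prob, human_label, weight=0.5):
    """Blend a model probability in [0, 1] with a binary human call
    (0 = benign, 1 = malignant). The weight is illustrative."""
    return (1 - weight) * model_prob + weight * human_label

def auc(scores, labels):
    """Pairwise (Mann-Whitney) AUC: the fraction of positive-negative
    pairs ranked correctly, counting ties as half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Re-computing the AUC on the blended scores quantifies whether the human input helps, which is how such a collaboration experiment is typically scored.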

5.
Article in English | MEDLINE | ID: mdl-37028322

ABSTRACT

Lung cancer is the leading cause of cancer death worldwide. The best solution is to diagnose pulmonary nodules at an early stage, which is usually accomplished with the aid of thoracic computed tomography (CT). As deep learning thrives, convolutional neural networks (CNNs) have been introduced into pulmonary nodule detection to help doctors with this labor-intensive task and have proven very effective. However, current pulmonary nodule detection methods are usually domain-specific and cannot satisfy the requirements of working in diverse real-world scenarios. To address this issue, we propose a slice grouped domain attention (SGDA) module to enhance the generalization capability of pulmonary nodule detection networks. This attention module works in the axial, coronal, and sagittal directions. In each direction, we divide the input feature into groups, and for each group we utilize a universal adapter bank to capture the feature subspaces of the domains spanned by all pulmonary nodule datasets. The bank outputs are then combined from the domain perspective to modulate the input group. Extensive experiments demonstrate that SGDA enables substantially better multi-domain pulmonary nodule detection performance compared with state-of-the-art multi-domain learning methods.
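The group-splitting and domain-weighted modulation idea can be reduced to a toy sketch. Plain scalar gains stand in for the learned convolutional adapter bank, and a single direction stands in for the three anatomical directions; all names here are hypothetical:

```python
def split_into_groups(channels, num_groups):
    """Split a channel list into equal-sized groups, as in group-wise
    attention modules (assumes len(channels) divides evenly)."""
    size = len(channels) // num_groups
    return [channels[i * size:(i + 1) * size] for i in range(num_groups)]

def modulate_group(group, adapter_outputs, domain_weights):
    """Modulate one group by a domain-weighted mix of adapter outputs.
    Real adapters are learned layers; here they are scalar gains, and
    domain_weights plays the role of the attention over domains."""
    gain = sum(w * a for w, a in zip(domain_weights, adapter_outputs))
    return [c * gain for c in group]
```

Each group thus gets its own domain-conditioned rescaling, which is the mechanism that lets one detector adapt across datasets.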

6.
Eur Radiol; 33(5): 3092-3102, 2023 May.
Article in English | MEDLINE | ID: mdl-36480027

ABSTRACT

OBJECTIVE: To construct a new pulmonary nodule diagnostic model that is non-invasive, simple to measure, and highly efficient diagnostically. METHODS: This study included 424 patients with radiologically detected pulmonary nodules who underwent preoperative 7-autoantibody (7-AAB) panel testing, CT-based AI diagnosis, and pathological diagnosis by surgical resection. The patients were randomly divided into a training set (n = 212) and a validation set (n = 212). The nomogram was developed through forward stepwise logistic regression based on the predictive factors identified by univariate and multivariate analyses in the training set and was verified internally in the validation set. RESULTS: A diagnostic nomogram was constructed based on the statistically significant variables of age as well as CT-based AI diagnostic, 7-AAB panel, and CEA test results. In the validation set, the sensitivity, specificity, positive predictive value, and AUC were 82.29%, 90.48%, 97.24%, and 0.899 (95% CI, 0.851-0.936), respectively. The nomogram showed significantly higher sensitivity than the 7-AAB panel test (82.29% vs. 35.88%, p < 0.001) and CEA (82.29% vs. 18.82%, p < 0.001); it also had significantly higher specificity than AI diagnosis (90.48% vs. 69.04%, p = 0.022). For lesions with a diameter of ≤ 2 cm, the specificity of the nomogram was higher than that of the AI diagnostic system (90.00% vs. 67.50%, p = 0.022). CONCLUSIONS: Based on the combination of a 7-AAB panel, an AI diagnostic system, and other clinical features, our nomogram demonstrated good diagnostic performance in distinguishing lung nodules, especially those ≤ 2 cm in diameter. KEY POINTS: • A novel diagnostic model of lung nodules was constructed by combining highly specific tumor markers with a highly sensitive artificial intelligence diagnostic system. • The diagnostic model has good performance in distinguishing malignant from benign pulmonary nodules, especially for nodules smaller than 2 cm.
• The diagnostic model can assist clinical decision-making for pulmonary nodules, with the advantages of high diagnostic efficiency, non-invasiveness, and simple measurement.
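A nomogram built by logistic regression is, under the hood, a weighted sum of predictors pushed through a sigmoid. A sketch of that form with the four predictors named in the abstract; every coefficient value below is a hypothetical placeholder, since the abstract does not report the fitted coefficients:

```python
import math

def nomogram_probability(age, ai_positive, aab_positive, cea_positive, coefs):
    """Logistic-regression form underlying a diagnostic nomogram:
    p = sigmoid(b0 + b_age*age + b_ai*AI + b_aab*7AAB + b_cea*CEA).
    Binary inputs are 0/1 test results; coefs holds the (hypothetical)
    fitted coefficients."""
    z = (coefs["intercept"]
         + coefs["age"] * age
         + coefs["ai"] * ai_positive
         + coefs["aab"] * aab_positive
         + coefs["cea"] * cea_positive)
    return 1.0 / (1.0 + math.exp(-z))
```

On a printed nomogram, each term of this sum corresponds to a points scale; the total points map back to the predicted probability.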


Subjects
Lung Neoplasms , Multiple Pulmonary Nodules , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/surgery , Artificial Intelligence , Multiple Pulmonary Nodules/diagnostic imaging , Multiple Pulmonary Nodules/surgery , Autoantibodies , Tomography, X-Ray Computed/methods , Retrospective Studies
7.
Front Med (Lausanne); 9: 939434, 2022.
Article in English | MEDLINE | ID: mdl-36405608

ABSTRACT

Objective: This study aimed to assess the value of radiomics based on non-contrast computed tomography (NCCT) and contrast-enhanced computed tomography (CECT) images in the preoperative discrimination between lung invasive adenocarcinomas (IAC) and non-invasive adenocarcinomas (non-IAC). Methods: We enrolled 1,185 pulmonary nodules (478 non-IACs and 707 IACs) to build and validate radiomics models. An external testing set comprising 63 pulmonary nodules was collected to verify the generalization of the models. Radiomic features were extracted from both NCCT and CECT images. The predictive performance of the radiomics models in the validation and external testing sets was evaluated and compared with radiologists' evaluations. The predictive performance of the radiomics models was also compared between three subgroups in the validation set (Group 1: solid nodules, Group 2: part-solid nodules, and Group 3: pure ground-glass nodules). Results: The NCCT, CECT, and combined models showed good ability to discriminate between IAC and non-IAC [respective areas under the curve (AUCs): validation set = 0.91, 0.90, and 0.91; Group 1 = 0.82, 0.79, and 0.81; Group 2 = 0.93, 0.92, and 0.93; and Group 3 = 0.90, 0.90, and 0.89]. In the external testing set, the AUCs of the three models were 0.89, 0.91, and 0.89, respectively. The accuracies of the three models were comparable to those of the senior radiologist and better than those of the junior radiologist. Conclusion: Radiomics models based on CT images showed good predictive performance in discriminating between lung IAC and non-IAC, especially in the part-solid nodule group. However, radiomics based on CECT images provided no additional value compared with NCCT images.
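The radiomic feature extraction step above starts from simple first-order statistics of the voxel intensities inside a nodule's volume of interest. A minimal sketch of a few such features; real radiomics pipelines add shape, texture, and filtered-image features on top of these:

```python
import math

def first_order_features(voxels):
    """A few first-order radiomic features from a VOI's intensity list
    (e.g. HU values). Illustrative subset only."""
    n = len(voxels)
    mean = sum(voxels) / n
    variance = sum((v - mean) ** 2 for v in voxels) / n
    energy = sum(v ** 2 for v in voxels)
    return {"mean": mean,
            "variance": variance,
            "std": math.sqrt(variance),
            "energy": energy}
```

Feature vectors like this, computed per nodule from NCCT and CECT separately, are what the NCCT, CECT, and combined models are trained on.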

8.
Cancers (Basel); 14(15), 2022 Jul 28.
Article in English | MEDLINE | ID: mdl-35954342

ABSTRACT

This study investigated the value of deep learning in predicting the invasiveness of early lung adenocarcinoma based on irregularly sampled follow-up computed tomography (CT) scans. In total, 351 nodules were enrolled in the study. A new deep learning network based on temporal attention, named Visual Simple Temporal Attention (ViSTA), was proposed to process irregularly sampled follow-up CT scans. We conducted substantial experiments to investigate the supplemental value of serial CTs in predicting invasiveness. A test set composed of 69 lung nodules was reviewed by three radiologists, and the performance of the model and the radiologists was compared and analyzed. We also performed a visual investigation to explore the inherent growth patterns of early adenocarcinomas. Among counterpart models, ViSTA showed the best performance (AUC: 86.4% vs. 60.6%, 75.9%, 66.9%, 73.9%, 76.5%, 78.3%). ViSTA also outperformed a model based on volume doubling time (AUC: 60.6%). ViSTA scored higher than two junior radiologists (accuracy of 81.2% vs. 75.4% and 71.0%) and came close to the senior radiologist (85.5%). Our proposed model achieved promising accuracy in evaluating the invasiveness of early-stage lung adenocarcinoma from irregularly sampled follow-up CT scans. Its performance is comparable with that of senior experts and better than that of junior experts and traditional deep learning models. With further validation, it could potentially be applied in clinical practice.
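The core difficulty ViSTA addresses, pooling scan embeddings whose acquisition times are irregular, can be illustrated with a simple attention-style weighting over time gaps. The recency kernel and the `tau` time-scale below are illustrative assumptions, not ViSTA's learned attention scores:

```python
import math

def temporal_attention(embeddings, times, query_time, tau=180.0):
    """Attention-style pooling over irregularly timed scan embeddings.
    Scans closer to query_time (in days) get higher softmax weight;
    the |gap|/tau score is a stand-in for learned attention."""
    scores = [-abs(query_time - t) / tau for t in times]
    m = max(scores)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    dim = len(embeddings[0])
    pooled = [sum(w * e[i] for w, e in zip(weights, embeddings))
              for i in range(dim)]
    return pooled, weights
```

Because the weights depend on actual time gaps rather than scan index, the same mechanism handles two scans six months apart or five scans at uneven intervals.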

9.
Front Oncol; 12: 772770, 2022.
Article in English | MEDLINE | ID: mdl-35186727

ABSTRACT

OBJECTIVES: EGFR testing is a mandatory step before targeted therapy for non-small cell lung cancer patients. We aimed to combine quantifiable features to establish a predictive model of EGFR mutation status and overcome the limitations of tissue biopsy. MATERIALS AND METHODS: We retrospectively analyzed 1,074 non-small cell lung cancer patients with complete EGFR gene testing reports. We manually segmented the volumes of interest (VOIs), captured the clinicopathological features, analyzed traditional radiological features, and extracted radiomic and deep learning features. The cases were randomly divided into training and test sets. We carried out feature screening, then applied the LightGBM algorithm, the ResNet-101 algorithm, and logistic regression to develop single models, as well as fused models, to predict EGFR mutation status. Model performance was evaluated with ROC and PRC curves. RESULTS: We successfully established Model_clinical, Model_radiomic, and Model_CNN (based on clinical-radiological, radiomic, and deep learning features, respectively), Model_radiomic+clinical (combining clinical-radiological and radiomic features), and Model_CNN+radiomic+clinical (combining clinical-radiological, radiomic, and deep learning features). Among the prediction models, Model_CNN+radiomic+clinical showed the highest performance, followed by Model_CNN and then Model_radiomic+clinical. These three models predicted EGFR mutation with AUC values of 0.751, 0.738, and 0.684, respectively. There was no significant difference in AUC values between Model_CNN+radiomic+clinical and Model_CNN. Further analysis showed that Model_CNN+radiomic+clinical effectively improved the efficacy of Model_radiomic+clinical and showed better efficacy than Model_CNN. The inclusion of clinical-radiological features did not effectively improve the efficacy of Model_radiomic. CONCLUSIONS: Either deep learning or radiomic signature-based models can provide a fairly accurate non-invasive prediction of EGFR mutation status. The model combining both feature types effectively enhanced the performance of the radiomic model and provided marginal enhancement to the deep learning model. Collectively, fusion models offer a novel and more reliable way of improving the efficacy of currently developed prediction models and have far-reaching potential for optimizing noninvasive EGFR mutation status prediction methods.
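One common way to fuse single models like those above is a weighted late fusion of their output probabilities. A minimal sketch; the abstract does not specify the actual fusion strategy, so the equal-weight default and the function name are assumptions:

```python
def late_fusion(probabilities, weights=None):
    """Weighted late fusion of per-model probabilities (e.g. from the
    clinical, radiomic, and CNN models). Uses equal weights when none
    are given; weights are assumed to sum to 1."""
    if weights is None:
        weights = [1.0 / len(probabilities)] * len(probabilities)
    return sum(w * p for w, p in zip(weights, probabilities))
```

Alternatives include feature-level fusion (concatenating the feature vectors before one classifier) or stacking a meta-learner on the per-model outputs; the abstract's fused models could use any of these.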

10.
Front Oncol; 11: 658138, 2021.
Article in English | MEDLINE | ID: mdl-33937070

ABSTRACT

OBJECTIVES: To investigate the value of imaging in predicting the growth rate of early lung adenocarcinoma. METHODS: From January 2012 to June 2018, 402 patients with pathology-confirmed lung adenocarcinoma who had two or more thin-slice CT follow-up images were retrospectively analyzed, involving 407 nodules. Two complete preoperative CT images and complete clinical data were evaluated. Training and validation sets were randomly assigned in an 8:2 ratio. All cases were divided into fast-growing and slow-growing groups. We extracted 1,218 radiomics features from each volumetric region of interest (VOI). Radiomics features were then selected by repeatability analysis and analysis of variance (ANOVA); based on univariate and multivariate analyses, the significant radiographic features were selected in the training set. A decision tree algorithm was used to establish the radiographic model, the radiomics model, and the combined radiographic-radiomics model. Model performance was assessed by the area under the curve (AUC) obtained from receiver operating characteristic (ROC) analysis. RESULTS: Sixty-two radiomics features and one radiographic feature were selected for predicting the growth rate of pulmonary nodules. The combined radiographic-radiomics model (AUC 0.78) performed better than the radiographic model (0.727) and the radiomics model (0.710) in the validation set. CONCLUSIONS: The combined radiographic-radiomics model has good clinical application value and development prospects for predicting the growth rate of early lung adenocarcinoma.
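The ANOVA screening step above scores each candidate feature by how well it separates the fast-growing and slow-growing groups. A self-contained sketch of the one-way ANOVA F-statistic used for such univariate filtering:

```python
def anova_f(groups):
    """One-way ANOVA F-statistic for a feature's values split into
    groups (e.g. fast-growing vs. slow-growing nodules):
    F = (between-group MS) / (within-group MS). Larger F means the
    group means differ more relative to within-group spread."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

Features whose F falls below a threshold (or whose p-value is non-significant) are dropped before the decision tree is fit.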

11.
EBioMedicine; 62: 103106, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33186809

ABSTRACT

BACKGROUND: Diagnosis of rib fractures plays an important role in identifying trauma severity. However, quickly and precisely identifying rib fractures in a large number of CT images with an increasing number of patients is a demanding task that also depends on the radiologist's qualifications. We aimed to develop a clinically applicable automatic system for rib fracture detection and segmentation from CT scans. METHODS: A total of 7,473 annotated traumatic rib fractures from 900 patients in a single center were enrolled into our dataset, named the RibFrac Dataset, and annotated with a human-in-the-loop labeling procedure. We developed a deep learning model, named FracNet, to detect and segment rib fractures. Patients were randomly split into training (720), tuning (60), and test (120) cohorts. Free-response ROC (FROC) analysis was used to evaluate the sensitivity and false positives of the detection performance, and Intersection-over-Union (IoU) and the Dice coefficient (Dice) were used to evaluate the segmentation performance on predicted rib fractures. Observer studies, including an independent human-only study and a human-collaboration study, were used to benchmark FracNet against human performance and evaluate its clinical applicability. An annotated subset of the RibFrac Dataset, including 420 cases for training, 60 for tuning, and 120 for testing, as well as our code for model training and evaluation, is open to the research community to facilitate both clinical and engineering research. FINDINGS: Our method achieved a detection sensitivity of 92.9% with 5.27 false positives per scan and a segmentation Dice of 71.5% on the test cohort. Human experts achieved far fewer false positives per scan, while underperforming the deep neural network in detection sensitivity and taking longer to diagnose. With human-computer collaboration, human experts achieved higher detection sensitivities than with human-only or computer-only diagnosis.
INTERPRETATION: The proposed FracNet increased the detection sensitivity of rib fractures while significantly decreasing the clinical time consumed, establishing a clinically applicable method to assist radiologists in clinical practice. FUNDING: A full list of funding bodies that contributed to this study can be found in the Acknowledgements section. The funding sources played no role in the study design; collection, analysis, and interpretation of data; writing of the report; or decision to submit the article for publication.
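The FROC analysis above pairs detection sensitivity with false positives per scan at each confidence threshold. A sketch of computing one FROC operating point, assuming each candidate detection has already been matched against ground truth (the matching rule itself, e.g. an IoU cutoff, is a separate step):

```python
def froc_point(scan_detections, total_fractures, threshold):
    """One operating point on a FROC curve.
    scan_detections: per scan, a list of (score, is_true_fracture)
    tuples for candidate detections. Returns (sensitivity, FPs/scan)
    at the given confidence threshold."""
    tp = fp = 0
    for detections in scan_detections:
        for score, is_true in detections:
            if score >= threshold:
                if is_true:
                    tp += 1
                else:
                    fp += 1
    sensitivity = tp / total_fractures
    fp_per_scan = fp / len(scan_detections)
    return sensitivity, fp_per_scan
```

Sweeping the threshold traces the full FROC curve; a figure like "92.9% sensitivity at 5.27 FPs per scan" is one such point.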


Subjects
Deep Learning , Image Processing, Computer-Assisted , Rib Fractures/diagnostic imaging , Software , Tomography, X-Ray Computed , Algorithms , Humans , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Neural Networks, Computer , ROC Curve , Reproducibility of Results , Rib Fractures/etiology , Tomography, X-Ray Computed/methods , Tomography, X-Ray Computed/standards