Results 1 - 20 of 170
1.
Acad Radiol ; 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39191563

ABSTRACT

RATIONALE AND OBJECTIVES: The structural lung features that characterize individuals with preserved ratio impaired spirometry (PRISm) who remain stable over time are unknown. The objective of this study was to use machine learning models with computed tomography (CT) imaging to classify stable PRISm from stable controls and stable COPD and to identify discriminative features. MATERIALS AND METHODS: A total of 596 participants who did not transition between the control, PRISm and COPD groups at baseline and 3-year follow-up were evaluated: n = 274 with normal lung function (stable control), n = 22 stable PRISm, and n = 300 stable COPD. Investigated features included quantitative CT (QCT) features (n = 34), such as total lung volume (%TLCCT) and percentage of ground glass and reticulation (%GG+Reticulationtexture), as well as radiomic features (n = 102), including varied intensity zone distribution grainy texture (GLDZMZDV). Logistic regression machine learning models were trained using various feature combinations (Base, Base+QCT, Base+Radiomic, Base+QCT+Radiomic). Model performance was evaluated using the area under the receiver operating characteristic curve (AUC), comparisons between models were made using the DeLong test, and feature importance was ranked using Shapley Additive Explanations (SHAP) values. RESULTS: Machine learning models for all feature combinations achieved AUCs of 0.63-0.84 for stable PRISm vs. stable control and 0.65-0.92 for stable PRISm vs. stable COPD classification. Models incorporating imaging features outperformed those trained solely on base features (p < 0.05). Compared to stable control and stable COPD, participants with stable PRISm exhibited decreased %TLCCT and increased %GG+Reticulationtexture and GLDZMZDV. CONCLUSION: These findings suggest that reduced lung volumes and elevated high-density and ground glass/reticulation patterns on CT imaging are associated with stable PRISm.
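A minimal sketch (not the authors' code) of the workflow described above: logistic regression classifiers trained on nested feature sets, compared by ROC AUC, with SHAP values used to rank feature importance. The feature-set slices and the synthetic data are illustrative assumptions, and the DeLong comparison is omitted.

```python
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 296  # e.g. stable PRISm (22) + stable control (274); group sizes are from the abstract
X = rng.normal(size=(n, 10))          # stand-in for base + QCT + radiomic features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

feature_sets = {"Base": slice(0, 3), "Base+QCT": slice(0, 6), "Base+QCT+Radiomic": slice(0, 10)}
for name, cols in feature_sets.items():
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, cols])[:, 1])
    print(f"{name}: AUC = {auc:.2f}")

# Rank features of the full model by mean absolute SHAP value
full = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
explainer = shap.LinearExplainer(full, X_tr)
shap_values = explainer.shap_values(X_te)
ranking = np.argsort(np.abs(shap_values).mean(axis=0))[::-1]
print("feature importance ranking:", ranking)
```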

2.
J Sci Food Agric ; 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39149861

ABSTRACT

BACKGROUND: Leaf area index (LAI) is an important indicator for assessing plant growth and development and is closely related to photosynthesis. Rapid and accurate estimation of crop LAI plays an important role in guiding farmland production. In this study, UAV RGB imaging was used to estimate the LAI of 65 winter wheat varieties at different growth stages; the varieties included farm varieties, main cultivars, new lines, core germplasm and foreign varieties. Color indices (CIs) and texture features were extracted from the RGB images to determine their quantitative link to LAI. RESULTS: Among the extracted image features, LAI exhibited a significant positive correlation with the CIs (r = 0.801) and a significant negative correlation with the texture features (r = -0.783). Furthermore, the visible atmospherically resistant index, the green-red vegetation index and the modified green-red vegetation index among the CIs, and the mean among the texture features, demonstrated strong correlations with LAI (r > 0.8). With respect to the model input variables, the backpropagation neural network (BPNN) model of LAI based on the CIs and texture features (R2 = 0.730, RMSE = 0.691, RPD = 1.927) outperformed the models constructed from individual variable types. CONCLUSION: This study offers a theoretical basis and technical reference for precise monitoring of winter wheat LAI using consumer-level UAVs. The BPNN model incorporating CIs and texture features proved superior in estimating LAI and offers a reliable method for monitoring the growth of winter wheat. © 2024 Society of Chemical Industry.
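An illustrative computation (not the paper's code) of the RGB colour indices named above, VARI, GRVI and MGRVI, from a drone image; the band order and the random test image are assumptions.

```python
import numpy as np

def color_indices(rgb: np.ndarray) -> dict:
    """rgb: H x W x 3 array with channels ordered R, G, B, scaled to [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-9                      # avoid division by zero on dark pixels
    return {
        "GRVI":  np.mean((g - r) / (g + r + eps)),             # green-red vegetation index
        "MGRVI": np.mean((g**2 - r**2) / (g**2 + r**2 + eps)),  # modified green-red vegetation index
        "VARI":  np.mean((g - r) / (g + r - b + eps)),          # visible atmospherically resistant index
    }

# Example on a random image stand-in
print(color_indices(np.random.rand(64, 64, 3)))
```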

3.
J Tissue Viability ; 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39084959

ABSTRACT

OBJECTIVE: This study aimed to use texture analysis of ultrasound images to distinguish the features of microchambers (a superficial, thinner layer) and macrochambers (a deep, thicker layer) in the heel pads of elderly people with and without diabetes, and thereby to explore preliminarily whether texture analysis can identify potential deep-tissue injury characteristics of diabetes before obvious signs of injury can be detected in clinical management. METHODS: Ultrasound images were obtained from the right heel (dominant leg) of eleven elderly people with diabetes (DM group) and eleven elderly people without diabetes (Non-DM group). The TekScan system was used to measure the peak plantar pressure (PPP) of each participant. Six gray-level co-occurrence matrix (GLCM) features, namely contrast, correlation, dissimilarity, energy, entropy and homogeneity, were used to quantify texture changes in the microchambers and macrochambers of the heel pads. RESULTS: Significant differences in the GLCM features (correlation, energy and entropy) of macrochambers were found between the two groups, while no significant differences in any GLCM features of microchambers were found. No significant differences in PPP or tissue thickness in the heel region were observed between the two groups. CONCLUSIONS: In elderly people with diabetes who showed no significant differences in PPP or plantar tissue thickness compared with those without diabetes, several texture features of ultrasound images differed significantly. This finding indicates that texture features (correlation, energy and entropy) of macrochambers could be used for early detection of diabetes-associated soft tissue damage.
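A minimal sketch of the six GLCM texture measures mentioned above, computed with scikit-image; the ultrasound ROI here is a random stand-in, and the distance/angle settings are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

roi = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in for a heel-pad ROI

glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)

features = {name: graycoprops(glcm, name)[0, 0]
            for name in ("contrast", "correlation", "dissimilarity", "energy", "homogeneity")}
p = glcm[:, :, 0, 0]
features["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # GLCM entropy is not in graycoprops
print(features)
```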

4.
EJNMMI Res ; 14(1): 60, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38965124

ABSTRACT

BACKGROUND: The aim of this study was to investigate the added value of combining tumour blood flow (BF) and metabolism parameters, including texture features, with clinical parameters to predict, at baseline, the pathological complete response (pCR) to neoadjuvant chemotherapy (NAC) in patients with newly diagnosed breast cancer (BC). METHODS: One hundred and twenty-eight BC patients underwent an 18F-FDG PET/CT scan before any treatment. Tumour BF and metabolism parameters were extracted from first-pass dynamic and delayed PET images, respectively. Standard and texture features were extracted from the BF and metabolic images. Prediction of pCR was performed using logistic regression, random forest and support vector classification algorithms. Models were built using clinical (C), clinical and metabolic (C+M), and clinical, metabolic and tumour BF (C+M+BF) information combined. Algorithms were trained on 80% of the dataset and tested on the remaining 20%. Univariate and multivariate feature selection was carried out on the training dataset. A total of 50 shuffle splits were performed. The analysis was carried out on the whole dataset (HER2 and Triple Negative (TN)) and separately in HER2 (N=76) and TN (N=52) tumours. RESULTS: In the whole dataset, the highest classification performance was observed for C+M models, significantly (p-value<0.01) higher than C models and better than C+M+BF models (mean balanced accuracy of 0.66, 0.61, and 0.64, respectively). For HER2 tumours, equal performance was noted for C and C+M models, both higher than C+M+BF models (mean balanced accuracy of 0.64 and 0.61, respectively). Regarding TN tumours, the best classification results were reported for C+M models, with better, though not significantly better, performance than C and C+M+BF models (mean balanced accuracy of 0.65, 0.63, and 0.62, respectively). CONCLUSION: Baseline clinical data combined with global and texture tumour metabolism parameters assessed by 18F-FDG PET/CT provide a better prediction of pCR after NAC in patients with BC than clinical parameters alone, for TN tumours and for HER2 and TN tumours together. In contrast, adding BF parameters to the models did not improve prediction, regardless of the tumour subgroup analysed.

5.
BMC Med Imaging ; 24(1): 177, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39030508

ABSTRACT

BACKGROUND: Cancer pathology reflects disease development and associated molecular features. It provides extensive phenotypic information that is predictive of cancer and has potential implications for treatment planning. Building on the exceptional performance of computational approaches in digital pathology, the rich phenotypic information in digital pathology images has enabled us to distinguish low-grade gliomas (LGG) from high-grade gliomas (HGG). Because the differences between the textures are so slight, utilizing just one feature or a small number of features produces poor classification results. METHODS: In this work, multiple feature extraction methods that can extract distinct features from the texture of histopathology image data are used to compare classification outcomes. The established feature extraction algorithms GLCM, LBP, multi-LBGLCM, GLRLM, color moment features, and RSHD were chosen in this paper. The LBP and GLCM algorithms are combined to create LBGLCM. The LBGLCM feature extraction approach is extended in this study to multiple scales using an image pyramid, defined by sampling the image both in space and in scale. The preprocessing stage is first used to enhance the contrast of the images and remove noise and illumination effects. The feature extraction stage is then carried out to extract several important features (texture and color) from the histopathology images. Third, a feature fusion and reduction step is applied to decrease the number of features that are processed, reducing the computation time of the proposed system. A classification stage is applied at the end to categorize the different brain cancer grades. We performed our analysis on 821 whole-slide pathology images from glioma patients in The Cancer Genome Atlas (TCGA) dataset. Two types of brain cancer are included in the dataset: GBM and LGG (grades II and III). 506 GBM images and 315 LGG images are included in our analysis, guaranteeing representation of various tumor grades and histopathological features. RESULTS: The fusion of textural and color characteristics was validated on the glioma patients using 10-fold cross-validation, with an accuracy of 95.8%, sensitivity of 96.4%, DSC of 96.7%, and specificity of 97.1%. The combination of color and texture characteristics produced significantly better accuracy, which supports their synergistic significance in the predictive model. The results indicate that textural characteristics, when paired with conventional imagery, can provide an objective, accurate, and comprehensive glioma prediction. CONCLUSION: The results outperform current approaches for distinguishing LGG from HGG and provide competitive performance in classifying four categories of glioma in the literature. The proposed model can help stratify patients in clinical studies, choose patients for targeted therapy, and customize specific treatment schedules.
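A hedged sketch of the multi-scale LBGLCM idea described above: an LBP map is computed at each level of a Gaussian image pyramid and GLCM statistics are then taken over that LBP map. The parameter choices and the random test image are illustrative, not the paper's exact settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
from skimage.transform import pyramid_gaussian

def lbglcm_features(gray: np.ndarray, n_levels: int = 3, P: int = 8, R: float = 1.0):
    feats = []
    for layer in pyramid_gaussian(gray, max_layer=n_levels - 1, downscale=2):
        # Uniform LBP yields integer codes in [0, P+1], so the GLCM has P+2 levels
        lbp = local_binary_pattern(layer, P, R, method="uniform").astype(np.uint8)
        glcm = graycomatrix(lbp, distances=[1], angles=[0],
                            levels=P + 2, symmetric=True, normed=True)
        feats += [graycoprops(glcm, prop)[0, 0]
                  for prop in ("contrast", "homogeneity", "energy", "correlation")]
    return np.array(feats)

print(lbglcm_features(np.random.rand(256, 256)))
```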


Subject(s)
Algorithms , Brain Neoplasms , Color , Glioma , Neoplasm Grading , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/pathology , Brain Neoplasms/classification , Glioma/diagnostic imaging , Glioma/pathology , Glioma/classification , Diagnosis, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods
6.
Plants (Basel) ; 13(11)2024 May 29.
Article in English | MEDLINE | ID: mdl-38891307

ABSTRACT

Efficient acquisition of crop leaf moisture information holds significant importance for agricultural production, providing farmers with an accurate data foundation for implementing timely and effective irrigation management strategies and thereby maximizing crop growth efficiency and yield. In this study, unmanned aerial vehicle (UAV) multispectral technology was employed. Through two consecutive years of field experiments (2021-2022), soybean (Glycine max L.) leaf moisture data and corresponding UAV multispectral images were collected. Vegetation indices, canopy texture features, and randomly combined texture indices, which previous studies have found to correlate strongly with crop parameters, were established. By analyzing the correlation between these parameters and soybean leaf moisture, parameters with significant correlation coefficients (p < 0.05) were selected as input variables for the model (combination 1: vegetation indices; combination 2: texture features; combination 3: randomly combined texture indices; combination 4: combination of vegetation indices, texture features, and randomly combined texture indices). Subsequently, an extreme learning machine (ELM), extreme gradient boosting (XGBoost), and a back-propagation neural network (BPNN) were utilized to model the leaf moisture content. The results indicated that most vegetation indices exhibited higher correlation coefficients with soybean leaf moisture than the texture features, while the randomly combined texture indices enhanced the correlation with soybean leaf moisture to some extent. RDTI, the random-combination texture index, showed the highest correlation coefficient with leaf moisture at 0.683, with the texture combination being Variance1 and Correlation5. When combination 4 (vegetation indices, texture features, and randomly combined texture indices) was utilized as the input and the XGBoost model was employed for soybean leaf moisture monitoring, the best performance in this study was achieved: the coefficient of determination (R2) of the estimation model validation set reached 0.816, with a root-mean-square error (RMSE) of 1.404 and a mean relative error (MRE) of 1.934%. This study provides a foundation for UAV multispectral monitoring of soybean leaf moisture, offering valuable insights for rapid assessment of crop growth.
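A small illustrative sketch (not the study's code) of fitting an XGBoost regressor on combined vegetation-index and texture features and reporting R2, RMSE and MRE; the feature matrix, leaf-moisture values and hyperparameters are synthetic assumptions.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 12))                    # stand-in for CIs + texture features + texture indices
y = 60 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=1.5, size=300)   # stand-in leaf moisture (%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05).fit(X_tr, y_tr)

pred = model.predict(X_te)
r2 = r2_score(y_te, pred)
rmse = mean_squared_error(y_te, pred) ** 0.5
mre = np.mean(np.abs(pred - y_te) / y_te) * 100   # mean relative error in percent
print(f"R2 = {r2:.3f}, RMSE = {rmse:.3f}, MRE = {mre:.2f}%")
```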

7.
J Orthop Surg Res ; 19(1): 367, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38902712

ABSTRACT

OBJECTIVES: To develop an objective method based on texture analysis of MRI for the diagnosis of congenital muscular torticollis (CMT). MATERIAL AND METHODS: The T1- and T2-weighted imaging, Q-Dixon, and T1-mapping MRI data of 38 children with CMT were retrospectively analyzed. The region of interest (ROI) was manually drawn at the level of the largest cross-sectional area of the sternocleidomastoid muscle (SCM) on the affected side. MaZda software was used to obtain the texture features of the T2WI sequences of the ROI in the healthy and affected SCM. A radiomics diagnostic model based on muscle texture features was constructed using logistic regression analysis. Fatty infiltration grade was calculated by hematoxylin and eosin staining, and fibrosis ratio by Masson staining. The correlation between the MRI parameters and pathological indicators was analyzed. RESULTS: There was a positive correlation between fatty infiltration grade and the mean value, standard deviation, and maximum value of the Q-Dixon sequence of the affected SCM (correlation coefficients 0.65, 0.59, and 0.58, respectively; P < 0.05). Three muscle texture features-S(2,2)SumAverg, S(3,3)SumVarnc, and T2WI extreme difference-were selected to construct the diagnostic model. The model showed significant diagnostic value for CMT (P < 0.05). The area under the curve of the multivariate conditional logistic regression model was 0.828 (95% confidence interval 0.735-0.922); the sensitivity was 0.684 and the specificity 0.868. CONCLUSION: The radiomics diagnostic model constructed using T2WI muscle texture features and MRI signal values appears to have good diagnostic efficiency. The Q-Dixon sequence can reflect the fatty infiltration grade of CMT.


Subject(s)
Magnetic Resonance Imaging , Severity of Illness Index , Torticollis , Humans , Torticollis/diagnostic imaging , Torticollis/congenital , Magnetic Resonance Imaging/methods , Male , Female , Retrospective Studies , Child, Preschool , Child , Infant , Neck Muscles/diagnostic imaging , Neck Muscles/pathology , Adolescent
8.
Laryngoscope ; 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38828682

ABSTRACT

OBJECTIVE: To extract texture features from vocal cord leukoplakia (VCL) images and establish a VCL risk stratification prediction model using machine learning (ML) techniques. METHODS: A total of 462 patients with pathologically confirmed VCL were retrospectively collected and divided into low-risk and high-risk groups. We used 5-fold cross-validation to ensure the generalization ability of the model built on the included dataset and to avoid overfitting. A total of 504 texture features were extracted from each laryngoscope image. After feature selection, 10 ML classifiers were utilized to construct the model. SHapley Additive exPlanations (SHAP) was employed for feature analysis. To evaluate the model, accuracy, sensitivity, specificity, and the area under the receiver operating characteristic (ROC) curve (AUC) were utilized. In addition, the model was transformed into an online application for public use and further tested on an independent dataset of 52 VCL cases. RESULTS: A total of 12 features were finally selected. Random forest (RF) achieved the best model performance; the mean accuracy, sensitivity, specificity, and AUC of the 5-fold cross-validation were 92.2 ± 4.1%, 95.6 ± 4.0%, 85.8 ± 5.8%, and 90.7 ± 4.9%, respectively. These results were much higher than those of the clinicians (AUC between 63.1% and 75.2%). The SHAP algorithm ranked the importance of the 12 texture features in the model. The test results on the additional independent dataset were 92.3%, 95.7%, 90.0%, and 93.3%, respectively. CONCLUSION: The proposed VCL risk stratification prediction model, which has been developed into a public online prediction platform, may be applied in practical clinical work. LEVEL OF EVIDENCE: 3 Laryngoscope, 2024.
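A hedged sketch of the 5-fold cross-validated random-forest classifier described above; the texture-feature matrix and labels here are synthetic, and only the evaluation pattern (accuracy, sensitivity, specificity, AUC per fold) is illustrated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import make_scorer, recall_score
from sklearn.model_selection import StratifiedKFold, cross_validate

rng = np.random.default_rng(2)
X = rng.normal(size=(462, 12))               # stand-in for the 12 selected texture features
y = rng.integers(0, 2, size=462)             # 0 = low-risk, 1 = high-risk (synthetic labels)

scoring = {
    "accuracy": "accuracy",
    "sensitivity": "recall",                                       # recall of the high-risk class
    "specificity": make_scorer(recall_score, pos_label=0),         # recall of the low-risk class
    "auc": "roc_auc",
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_validate(RandomForestClassifier(n_estimators=300, random_state=0),
                        X, y, cv=cv, scoring=scoring)
for name in scoring:
    vals = scores[f"test_{name}"]
    print(f"{name}: {vals.mean():.3f} +/- {vals.std():.3f}")
```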

9.
J Cell Sci ; 137(20)2024 Oct 15.
Article in English | MEDLINE | ID: mdl-38738286

ABSTRACT

Plant protoplasts provide the starting material for inducing pluripotent cell masses that are competent for tissue regeneration in vitro, analogous to animal induced pluripotent stem cells (iPSCs). Dedifferentiation is associated with large-scale chromatin reorganisation and massive transcriptome reprogramming, characterised by stochastic gene expression. How this cellular variability is reflected in the chromatin organisation of individual cells, and what factors influence chromatin transitions during culturing, are largely unknown. Here, we used high-throughput imaging and a custom supervised image analysis protocol extracting over 100 chromatin features of cultured protoplasts. The analysis revealed rapid, multiscale dynamics of chromatin patterns with a trajectory that strongly depended on nutrient availability. Decreased abundance of H1 (linker histones) is a hallmark of these chromatin transitions. We measured a high heterogeneity of chromatin patterns, indicating that intrinsic entropy is a hallmark of the initial cultures. We further measured an entropy decline over time and an antagonistic influence of external and intrinsic factors, such as phytohormones and epigenetic modifiers, respectively. Collectively, our study benchmarks an approach to understand the variability and evolution of chromatin patterns underlying plant cell reprogramming in vitro.


Subject(s)
Chromatin , Entropy , Induced Pluripotent Stem Cells , Chromatin/metabolism , Chromatin/genetics , Induced Pluripotent Stem Cells/metabolism , Induced Pluripotent Stem Cells/cytology , Protoplasts/metabolism , Cellular Reprogramming/genetics , Histones/metabolism , Histones/genetics , Plant Cells/metabolism , Epigenesis, Genetic
10.
Ultrasound Med Biol ; 50(7): 1034-1044, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38679514

ABSTRACT

To properly treat and care for hepatic cystic echinococcosis (HCE), it is essential to make an accurate diagnosis before treatment. OBJECTIVE: The objective of this study was to assess the diagnostic accuracy of computer-aided diagnosis techniques in classifying HCE ultrasound images into five subtypes. METHODS: A total of 1820 HCE ultrasound images collected from 967 patients were included in the study. A multi-kernel learning method was developed to learn the texture and depth features of the ultrasound images. Combined kernel functions were built into a multi-kernel Support Vector Machine (MK-SVM) for the classification task. The experimental results were evaluated using five-fold cross-validation. Finally, our approach was compared with three other machine learning algorithms: the decision tree classifier, random forest, and gradient boosting decision tree. RESULTS: Among all the methods used in the study, the MK-SVM achieved the highest accuracy, 96.6%, on the fused feature set. CONCLUSION: The multi-kernel learning method effectively learns different image features from ultrasound images by utilizing various kernels. The MK-SVM method, which combines texture features and depth features learned separately, has significant application value in HCE classification tasks.
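A minimal sketch of multi-kernel SVM classification in the spirit of the MK-SVM above: separate kernels are computed for the texture and deep-feature blocks and combined with a weighted sum before feeding a precomputed-kernel SVC. The kernel types, weight, features and labels are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X_tex = rng.normal(size=(400, 20))        # stand-in texture features
X_deep = rng.normal(size=(400, 64))       # stand-in deep (CNN-derived) features
y = rng.integers(0, 5, size=400)          # five HCE subtypes (synthetic labels)

idx_tr, idx_te = train_test_split(np.arange(400), stratify=y, random_state=0)

def combined_kernel(a_idx, b_idx, w=0.5):
    k_tex = rbf_kernel(X_tex[a_idx], X_tex[b_idx], gamma=0.05)   # RBF kernel on texture block
    k_deep = linear_kernel(X_deep[a_idx], X_deep[b_idx])         # linear kernel on deep block
    return w * k_tex + (1 - w) * k_deep

svc = SVC(kernel="precomputed").fit(combined_kernel(idx_tr, idx_tr), y[idx_tr])
pred = svc.predict(combined_kernel(idx_te, idx_tr))
print("accuracy:", (pred == y[idx_te]).mean())
```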


Subject(s)
Echinococcosis, Hepatic , Machine Learning , Ultrasonography , Humans , Echinococcosis, Hepatic/diagnostic imaging , Ultrasonography/methods , Male , Liver/diagnostic imaging , Female , Adult , Middle Aged , Support Vector Machine , Reproducibility of Results , Algorithms , Aged , Image Interpretation, Computer-Assisted/methods
11.
PeerJ Comput Sci ; 10: e1927, 2024.
Article in English | MEDLINE | ID: mdl-38660180

ABSTRACT

Textures provide a powerful cue for segmentation and object detection. Recent research has shown that deep convolutional networks like Visual Geometry Group (VGG) and ResNet perform well on non-stationary texture datasets. Non-stationary textures have local structures that change from one region of the image to another, which is consistent with the view that deep convolutional networks are good at detecting local microstructures disguised as textures. However, stationary textures, whose statistical properties are constant or slowly varying over the entire region, are not well detected by deep convolutional networks. This research demonstrates that a simple seven-layer convolutional network can obtain better results than deep networks by using a novel convolutional technique called orthogonal convolution together with pre-calculated regional features from the grey level co-occurrence matrix. We obtained an average 8.5% improvement in texture recognition accuracy on the Outex dataset over GoogleNet, ResNet, VGG and AlexNet.

12.
Explor Target Antitumor Ther ; 5(1): 74-84, 2024.
Article in English | MEDLINE | ID: mdl-38464383

ABSTRACT

Aim: To investigate magnetic resonance imaging (MRI)-based peritumoral texture features as prognostic indicators of survival in patients with colorectal liver metastasis (CRLM). Methods: From 2007-2015, forty-eight patients who underwent MRI within 3 months prior to initiating treatment for CRLM were identified. Clinicobiological prognostic variables were obtained from electronic medical records. Ninety-four metastatic hepatic lesions were identified on T1-weighted post-contrast images and volumetrically segmented. A total of 112 radiomic features (shape, first-order, texture) were derived from a 10 mm region surrounding each segmented tumor. A random forest model was applied, and performance was tested by receiver operating characteristic (ROC) analysis. Kaplan-Meier analysis was utilized to generate the survival curves. Results: Forty-eight patients (male:female = 23:25, age 55.3 years ± 18 years) were included in the study. The median lesion size was 25.73 mm (range 8.5-103.8 mm). Microsatellite instability was low in 40.4% (38/94) of tumors, with Ki-ras2 Kirsten rat sarcoma viral oncogene homolog (KRAS) mutation detected in 68 out of 94 (72%) tumors. The mean survival was 35 months ± 21 months, and local disease progression was observed in 35.5% of patients. Univariate regression analysis identified 42 texture features [8 first order, 5 gray level dependence matrix (GLDM), 5 gray level run length matrix (GLRLM), 5 gray level size zone matrix (GLSZM), 2 neighboring gray tone difference matrix (NGTDM), and 17 gray level co-occurrence matrix (GLCM)] independently associated with metastatic disease progression (P < 0.03). The random forest model achieved an area under the curve (AUC) of 0.88. Conclusions: MRI-based peritumoral heterogeneity features may serve as predictive biomarkers for metastatic disease progression and patient survival in CRLM.
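A hedged sketch of deriving a 10 mm peritumoral region from a segmented tumour mask, the region from which the texture features above were computed. The voxel spacing, mask and isotropic-dilation shortcut are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np
from scipy.ndimage import binary_dilation

spacing_mm = np.array([1.0, 1.0, 1.0])            # assumed voxel size of the T1 post-contrast volume
margin_mm = 10.0

tumor = np.zeros((64, 64, 64), dtype=bool)        # stand-in segmentation mask
tumor[28:36, 28:36, 28:36] = True

# Dilate by roughly 10 mm (isotropic spacing assumed here)
iterations = int(round(margin_mm / spacing_mm.min()))
expanded = binary_dilation(tumor, iterations=iterations)
peritumoral_ring = expanded & ~tumor              # ~10 mm shell around the tumour

print("tumour voxels:", tumor.sum(), "| peritumoral voxels:", peritumoral_ring.sum())
# Radiomic features (first-order, GLCM, GLRLM, GLSZM, NGTDM, GLDM) would then be
# computed over `peritumoral_ring`, e.g. with a radiomics toolkit.
```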

13.
Neuroscience ; 546: 178-187, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38518925

ABSTRACT

Automatic identification of abnormal brachial plexus (BP), as distinct from normal, on magnetic resonance imaging (MRI), in order to localize and identify a neurologic injury in clinical practice, is still a novel topic in brachial plexopathy. This study developed and evaluated an approach to differentiate abnormal BP with artificial intelligence (AI) across three commonly used MRI sequences, i.e. T1, fluid-sensitive and post-gadolinium sequences. A BP dataset was collected by radiological experts, and a semi-supervised artificial intelligence method (based on nnU-Net) was used to segment the BP. A radiomics method was then utilized to extract 107 shape and texture features from these ROIs. From various machine learning methods, we selected six widely recognized classifiers for training our BP models and assessing their efficacy. To optimize these models, we introduced a dynamic feature selection approach aimed at discarding redundant and less informative features. Our experimental findings demonstrated that, in the context of identifying abnormal BP cases, shape features displayed heightened sensitivity compared to texture features. Notably, both the logistic classifier and the bagging classifier outperformed other methods in our study. These evaluations highlighted the exceptional performance of our model trained on fluid-sensitive sequences, which notably exceeded the results of both the T1 and post-gadolinium sequences. Crucially, our analysis showed that both its classification accuracy and its AUC score (area under the receiver operating characteristic curve) on the fluid-sensitive sequence exceeded 90%. This outcome serves as a robust experimental validation, affirming the substantial potential and strong feasibility of integrating AI into clinical practice.


Subject(s)
Artificial Intelligence , Brachial Plexus , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Brachial Plexus/diagnostic imaging , Brachial Plexus Neuropathies/diagnostic imaging , Machine Learning , Female , Male , Adult
15.
Biomedicines ; 12(3)2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38540086

ABSTRACT

The aim of our study was to predict the occurrence of distant metastases in non-small-cell lung cancer (NSCLC) patients using machine learning methods and texture analysis of 18F-labeled 2-deoxy-d-glucose Positron Emission Tomography/Computed Tomography ([18F]FDG PET/CT) images. In this retrospective and single-center study, we evaluated 79 patients with advanced NSCLC who had undergone [18F]FDG PET/CT scan at diagnosis before any therapy. Patients were divided into two independent training (n = 44) and final testing (n = 35) cohorts. Texture features of primary tumors and lymph node metastases were extracted from [18F]FDG PET/CT images using the LIFEx program. Six machine learning methods were applied to the training dataset using the entire panel of features. Dedicated selection methods were used to generate different combinations of five features. The performance of selected machine learning methods applied to the different combinations of features was determined using accuracy, the confusion matrix, receiver operating characteristic (ROC) curves, and area under the curve (AUC). A total of 104 and 78 lesions were analyzed in the training and final testing cohorts, respectively. The support vector machine (SVM) and decision tree methods showed the highest accuracy in the training cohort. Seven combinations of five features were obtained and introduced in the models and subsequently applied to the training and final testing cohorts using the SVM and decision tree. The accuracy and the AUC of the decision tree method were higher than those obtained with the SVM in the final testing cohort. The best combination of features included shape sphericity, gray level run length matrix_run length non-uniformity (GLRLM_RLNU), Total Lesion Glycolysis (TLG), Metabolic Tumor Volume (MTV), and shape compacity. The combination of these features with the decision tree method could predict the occurrence of distant metastases with an accuracy of 74.4% and an AUC of 0.63 in NSCLC patients.

16.
Insects ; 15(3)2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38535368

ABSTRACT

Erannis jacobsoni Djak (Lepidoptera, Geometridae) is a leaf-feeding pest unique to Mongolia. Outbreaks of this pest can cause larch needles to shed slowly from the top until they die, leading to a serious imbalance in the forest ecosystem. In this work, to address the need for the low-cost, fast, and effective identification of this pest, we used field survey indicators and UAV images of larch forests in Binder, Khentii, Mongolia, a typical site of Erannis jacobsoni Djak pest outbreaks, as the base data, calculated relevant multispectral and red-green-blue (RGB) features, used a successive projections algorithm (SPA) to extract features that are sensitive to the level of pest damage, and constructed a recognition model of Erannis jacobsoni Djak pest damage by combining patterns in the RGB vegetation indices and texture features (RGBVI&TF) with the help of random forest (RF) and convolutional neural network (CNN) algorithms. The results were compared and evaluated with multispectral vegetation indices (MSVI) to explore the potential of UAV RGB images in identifying needle pests. The results show that the sensitive features extracted based on SPA can adequately capture the changes in the forest appearance parameters such as the leaf loss rate and the colour of the larch canopy under pest damage conditions and can be used as effective input variables for the model. The RGBVI&TF-RF440 and RGBVI&TF-CNN740 models have the best performance, with their overall accuracy reaching more than 85%, which is a significant improvement compared with that of the RGBVI model, and their accuracy is similar to that of the MSVI model. This low-cost and high-efficiency method can excel in the identification of Erannis jacobsoni Djak-infested regions in small areas and can provide an important experimental theoretical basis for subsequent large-scale forest pest monitoring with a high spatiotemporal resolution.

17.
Heliyon ; 10(4): e26192, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38404820

ABSTRACT

Machine learning offers significant potential for lung cancer detection, enabling early diagnosis and potentially improving patient outcomes. Feature extraction remains a crucial challenge in this domain, and combining the most relevant features can further enhance detection accuracy. This study employed a hybrid feature extraction approach that integrates Gray-level co-occurrence matrix (GLCM) Haralick features with features learned by an autoencoder. These features were subsequently fed into supervised machine learning methods. Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel and SVM Gaussian achieved perfect performance measures, while SVM polynomial produced an accuracy of 99.89%, when utilizing the combined GLCM Haralick and autoencoder features. SVM Gaussian achieved an accuracy of 99.56% and SVM RBF an accuracy of 99.35% when utilizing the GLCM Haralick features. These results demonstrate the potential of the proposed approach for developing improved diagnostic and prognostic systems for lung cancer treatment planning and decision-making.

18.
Sci Total Environ ; 923: 171181, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38402987

ABSTRACT

The mapping of impervious surfaces using remote sensing techniques offers essential technical support for sustainable development objectives and for safeguarding the environment. In this study, we developed an automated method, requiring no training samples, for mapping impervious surfaces using texture features. The different aggregated impervious surface patterns and distributions in the study areas of Sites A-C in China (Beijing, Huainan, Jinhua) were considered, while Sites D-E (Dubai and Tehran) are surrounded by deserts in arid areas; together, these sites were selected to develop and evaluate the performance of the proposed automated method. The texture features of Contrast, Gabor wavelets, and secondary texture extraction (Con_Gabor) derived from Sentinel-2 images at each site were used to construct the three-dimensional texture features (3DTF) of impervious surfaces. The 3DTF combined with a K-means classifier was used to automatically map the impervious surfaces. The results showed that the overall accuracies of impervious surface mapping were 91.15%, 89.75%, and 91.90% at Sites A-C, and 90.95%, 91.45% and 88.23% in the corresponding rural areas. The distributions of impervious surface from the automated method, GHS-BUILT-S and ESA WorldCover were similar in the study areas. The automated method performed as well as an artificial neural network (ANN) and Random Forest (RF), with the advantage of not requiring training samples. When the automated method was tested in Dubai and Tehran, the overall accuracies were >89% at Sites D-E and >88% in rural areas. In addition, the 3DTF proved to be the simplest and most effective feature combination for mapping impervious surfaces. The impervious surface maps produced by the automated method were robust across bands, seasons and sensors. However, further evaluation is necessary to assess the effectiveness of automated methods using high-spatial-resolution images for mapping impervious surfaces in complex areas.
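An illustrative, training-free sketch in the spirit of the method above: per-pixel Gabor texture responses are stacked into a feature cube and clustered with K-means into two classes (impervious vs. pervious). The band choice, filter frequencies and two-cluster assumption are assumptions, not the paper's exact 3DTF construction.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.cluster import KMeans

gray = np.random.rand(128, 128)            # stand-in for a Sentinel-2 band

features = []
for frequency in (0.1, 0.2, 0.4):
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, imag = gabor(gray, frequency=frequency, theta=theta)
        features.append(np.hypot(real, imag))          # Gabor magnitude response
stack = np.stack(features, axis=-1).reshape(-1, len(features))

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(stack)
impervious_map = labels.reshape(gray.shape)            # cluster ids; mapping ids to classes needs a heuristic
print("cluster share:", np.bincount(labels) / labels.size)
```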

19.
Comput Methods Programs Biomed ; 244: 108006, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38215580

ABSTRACT

OBJECTIVE: The aim of this study was to develop an early-warning model for identifying populations at high risk of pneumoconiosis by combining lung 3D images and radiomics lung texture features. METHODS: A retrospective study was conducted, including 600 dust-exposed workers and 300 confirmed pneumoconiosis patients. Chest computed tomography (CT) images were divided into a training set and a test set in a 2:1 ratio. Whole-lung segmentation was performed using deep learning models for radiomics feature extraction. Two feature selection algorithms and five classification models were used. The optimal model was selected using a 10-fold cross-validation strategy, and the calibration curve and decision curve were evaluated. To verify the applicability of the model, the diagnostic efficiency and accuracy of the model and of human interpretation were compared. Additionally, the risk probabilities for the different risk groups defined by the model were compared at different time intervals. RESULTS: Four radiomics features were ultimately used to construct the predictive model. The logistic regression model was the most stable in both the training and testing sets, with areas under the curve (AUC) of 0.964 (95% confidence interval [CI], 0.950-0.976) and 0.947 (95% CI, 0.925-0.964), respectively. In the training and testing sets, the Brier scores were 0.092 and 0.14, respectively, with threshold probability ranges of 2%-99% and 2%-85%. These results indicate that the model exhibits good calibration and clinical benefit. The comparison between the model and human interpretation showed that the model was not inferior in terms of diagnostic efficiency and accuracy. Additionally, the high-risk population identified by the model was diagnosed with pneumoconiosis two years later. CONCLUSION: This study provides a meticulous and quantifiable method for detecting and assessing the risk of pneumoconiosis, building upon accurate diagnosis. Employing risk scoring and probability estimation not only enhances the efficiency of diagnostic physicians but also provides a valuable reference for controlling the occurrence of pneumoconiosis.
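A hedged sketch of the calibration checks reported above: a logistic regression risk model evaluated with the Brier score and a calibration curve. The features and labels are synthetic stand-ins for the four radiomics features and case status.

```python
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(900, 4))                       # stand-in for the 4 selected radiomics features
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=900) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]

print("AUC:", round(roc_auc_score(y_te, prob), 3))
print("Brier score:", round(brier_score_loss(y_te, prob), 3))

# Calibration curve: observed event fraction per bin of predicted probability
frac_pos, mean_pred = calibration_curve(y_te, prob, n_bins=10)
for p, f in zip(mean_pred, frac_pos):
    print(f"predicted {p:.2f} -> observed {f:.2f}")
```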


Subject(s)
Deep Learning , Pneumoconiosis , Humans , Radiomics , Retrospective Studies , Pneumoconiosis/diagnostic imaging , Lung/diagnostic imaging
20.
Dentomaxillofac Radiol ; 53(2): 115-126, 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38166356

ABSTRACT

OBJECTIVES: The objectives of this study are to explore and evaluate the automation of anatomical landmark localization in cephalometric images using machine learning techniques, with a focus on feature extraction and combinations, contextual analysis, and model interpretability through SHapley Additive exPlanations (SHAP) values. METHODS: We conducted extensive experimentation on a private dataset of 300 lateral cephalograms to thoroughly study the annotation results obtained using pixel feature descriptors, including raw pixel, gradient magnitude, gradient direction, and histogram of oriented gradients (HOG) values. The study includes evaluation and comparison of these feature descriptors calculated at different contexts, namely local, pyramid, and global. The feature descriptor obtained from each combination is used to discern between landmark and non-landmark pixels using a classification method. Additionally, this study addresses the opacity of LGBM ensemble tree models across landmarks by introducing SHAP values to enhance interpretability. RESULTS: The performance of the feature combinations was assessed using metrics such as mean radial error, standard deviation, success detection rate (SDR) (2 mm), and test time. Remarkably, among all the combinations explored, both the HOG and gradient direction operations demonstrated strong performance across all context combinations. At the contextual level, the global texture outperformed the others, although it came with the trade-off of increased test time. HOG in the local context emerged as the top performer, with an SDR of 75.84% compared to the others. CONCLUSIONS: The presented analysis not only enhances the understanding of the significance of different features and their combinations in the realm of landmark annotation but also paves the way for further exploration of landmark-specific feature combination methods, facilitated by explainability.
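A hedged sketch of the per-pixel classification idea above: a HOG descriptor of the local patch around a candidate pixel is fed to a LightGBM classifier that separates landmark from non-landmark pixels. The patch size, HOG parameters and labels are illustrative assumptions.

```python
import numpy as np
from skimage.feature import hog
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

def local_hog(patch: np.ndarray) -> np.ndarray:
    """HOG descriptor of a 32x32 patch centred on a candidate pixel (local context)."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Synthetic stand-in: patches around landmark (1) and non-landmark (0) pixels
patches = rng.random(size=(400, 32, 32))
labels = rng.integers(0, 2, size=400)
X = np.array([local_hog(p) for p in patches])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, stratify=labels, random_state=0)
clf = LGBMClassifier(n_estimators=200).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
# SHAP values (e.g. via shap.TreeExplainer(clf)) could then be used to interpret the model.
```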


Subject(s)
Anatomic Landmarks , Cephalometry , Humans , Cephalometry/methods , Machine Learning , Data Curation