1 - 14 of 14
1.
Heliyon ; 10(7): e28264, 2024 Apr 15.
Article En | MEDLINE | ID: mdl-38689962

Maize is a globally important cereal crop; however, maize leaf diseases are among the most common and devastating afflictions it faces. Artificial intelligence methods face challenges in identifying and classifying maize leaf diseases due to variations in image quality, similarity among diseases, differences in disease severity, limited dataset availability, and limited interpretability. To address these challenges, we propose a residual-based multi-scale network (MResNet) for classifying multiple types of maize leaf disease from maize images. MResNet consists of two residual subnets operating at different scales, enabling the model to detect diseases in maize leaf images at different scales. We further utilize a hybrid feature-weight optimization method to optimize and fuse the feature mapping weights of the two subnets. We validate MResNet on a maize leaf disease dataset, where it achieves 97.45% accuracy and surpasses other state-of-the-art methods. Various experiments and two additional datasets confirm the generalization performance of our model. Furthermore, heat map analysis increases the interpretability of the model. This study provides technical support for disease classification in agricultural plants.
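To make the two-subnet idea concrete, the following minimal PyTorch sketch pairs two residual backbones operating at different scales and fuses their outputs with learnable weights; the ResNet-18 backbones, the pooling used to create the second scale, and the softmax-weighted fusion are illustrative assumptions, not the published MResNet architecture.

```python
# Hedged sketch (not the authors' exact MResNet): two residual subnets that
# process the input at different scales, fused with learnable feature weights.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TwoScaleResNet(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.branch_full = resnet18(num_classes=num_classes)   # full-resolution subnet
        self.branch_half = resnet18(num_classes=num_classes)   # downscaled subnet
        self.pool = nn.AvgPool2d(2)                             # builds the second scale
        # Learnable fusion weights for the two subnets (the paper's hybrid weight
        # optimization is more elaborate; a softmax-normalized pair stands in here).
        self.fusion = nn.Parameter(torch.zeros(2))

    def forward(self, x):
        logits_full = self.branch_full(x)
        logits_half = self.branch_half(self.pool(x))
        w = torch.softmax(self.fusion, dim=0)
        return w[0] * logits_full + w[1] * logits_half

model = TwoScaleResNet(num_classes=4)
scores = model(torch.randn(2, 3, 224, 224))   # [batch, num_classes]
```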

2.
Heliyon ; 10(5): e27054, 2024 Mar 15.
Article En | MEDLINE | ID: mdl-38562500

Breast cancer is among the cancer types with the highest numbers of new cases. The study of this disease from a microscopic perspective has been a prominent research topic. Previous studies have shown that microRNAs (miRNAs) are closely linked to chromosomal instability (CIN). Correctly predicting CIN status from miRNAs can help to improve the survival of breast cancer patients. In this study, a joint global and local interpretation method called GL_XGBoost is proposed for predicting CIN status in breast cancer. GL_XGBoost integrates the eXtreme Gradient Boosting (XGBoost) and SHapley Additive exPlanation (SHAP) methods. XGBoost is used to predict CIN status from miRNA data, whereas SHAP is used to select miRNA features that have strong relationships with CIN. Furthermore, SHAP's rich visualization strategies enhance the interpretability of the entire model at the global and local levels. The performance of GL_XGBoost is validated on the TCGA-BRCA dataset, and it is shown to have an accuracy of 78.57% and an area under the curve value of 0.87. Rich visual analysis is used to explain the relationships between miRNAs and CIN status from different perspectives. Our study demonstrates an intuitive way of exploring the relationship between CIN and cancer from a microscopic perspective.
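For readers who want to reproduce the general XGBoost-plus-SHAP pattern (not the study's exact GL_XGBoost pipeline), a minimal sketch might look like the following; the file name, feature columns, and hyperparameters are placeholders.

```python
# Hedged sketch of the XGBoost + SHAP pattern described above; feature names,
# file paths, and hyperparameters are placeholders, not the study's settings.
import pandas as pd
import shap
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

data = pd.read_csv("mirna_expression.csv")          # hypothetical miRNA matrix
X, y = data.drop(columns=["CIN_status"]), data["CIN_status"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))

# Global + local interpretation with SHAP
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)                                      # global miRNA importance
shap.force_plot(explainer.expected_value, shap_values[0], X_te.iloc[0])   # one patient, local view
```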

3.
IEEE J Biomed Health Inform ; 28(1): 110-121, 2024 Jan.
Article En | MEDLINE | ID: mdl-37294651

The incidence of breast cancer is increasing rapidly around the world. Accurate classification of the breast cancer subtype from hematoxylin and eosin images is the key to improving the precision of treatment. However, the high similarity among disease subtypes and the uneven distribution of cancer cells seriously affect the performance of multi-classification methods. Furthermore, it is difficult to apply existing classification methods to multiple datasets. In this article, we propose a collaborative transfer network (CTransNet) for the multi-classification of breast cancer histopathological images. CTransNet consists of a transfer learning backbone branch, a residual collaborative branch, and a feature fusion module. The transfer learning branch adopts a DenseNet backbone pre-trained on ImageNet to extract image features. The residual branch extracts target features from the pathological images in a collaborative manner. A feature fusion strategy that jointly optimizes these two branches is used to train and fine-tune CTransNet. Experiments show that CTransNet achieves 98.29% classification accuracy on the public BreaKHis breast cancer dataset, exceeding the performance of state-of-the-art methods. Visual analysis is carried out under the guidance of oncologists. Using the training parameters obtained on the BreaKHis dataset, CTransNet also achieves superior performance on two other public breast cancer datasets (breast-cancer-grade-ICT and ICIAR2018_BACH_Challenge), indicating that CTransNet has good generalization performance.
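A minimal PyTorch sketch of the two-branch idea, an ImageNet-pretrained DenseNet branch alongside a small residual branch with simple feature fusion, is shown below; it is an illustrative stand-in under assumed backbones and fusion, not the published CTransNet code.

```python
# Hedged sketch of the two-branch idea (ImageNet-pretrained DenseNet + a small
# residual branch with feature fusion); it is not the published CTransNet code.
import torch
import torch.nn as nn
from torchvision.models import densenet121, resnet18

class TwoBranchClassifier(nn.Module):
    def __init__(self, num_classes=8):
        super().__init__()
        backbone = densenet121(weights="DEFAULT")             # transfer-learning branch
        self.transfer = nn.Sequential(backbone.features, nn.AdaptiveAvgPool2d(1))
        res = resnet18(weights=None)                          # collaborative residual branch
        self.residual = nn.Sequential(*list(res.children())[:-1])
        self.head = nn.Linear(1024 + 512, num_classes)        # simple fusion by concatenation

    def forward(self, x):
        f1 = torch.flatten(self.transfer(x), 1)   # 1024-d DenseNet features
        f2 = torch.flatten(self.residual(x), 1)   # 512-d residual features
        return self.head(torch.cat([f1, f2], dim=1))

logits = TwoBranchClassifier()(torch.randn(1, 3, 224, 224))
```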


Breast Neoplasms , Humans , Female , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Neural Networks, Computer , Breast/pathology
4.
Heliyon ; 9(10): e20614, 2023 Oct.
Article En | MEDLINE | ID: mdl-37860562

The immunohistochemical (IHC) technique is widely used for evaluating diagnostic markers, but obtaining IHC-stained sections can be expensive. Translating cheap and readily available hematoxylin and eosin (HE) images into IHC images provides a solution to this challenge. In this paper, we propose a multi-generator generative adversarial network (MGGAN) that can generate high-quality IHC images from HE images of breast cancer. Our MGGAN approach combines the low-frequency and high-frequency components of the HE image to improve the translation of breast cancer image details. We use multiple generators to extract semantic information, and a U-shaped architecture with a patch-based discriminator to collect and optimize the low-frequency and high-frequency components of an image. We also include a cross-entropy loss as a regularization term in the loss function to ensure consistency between the synthesized image and the real image. Our experimental and visualization results demonstrate that our method outperforms other state-of-the-art image synthesis methods in terms of both quantitative and qualitative analysis. Our approach provides a cost-effective and efficient solution for obtaining high-quality IHC images.
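As a rough illustration of this kind of HE-to-IHC translation objective, the sketch below combines an adversarial term from a patch discriminator with a consistency term between synthesized and real IHC images; the generator and discriminator are placeholders and the loss weighting is assumed, so this is not the paper's MGGAN implementation.

```python
# Hedged sketch of a pix2pix-style HE-to-IHC translation step with an extra
# consistency term, standing in for the paper's MGGAN losses (placeholder nets).
import torch
import torch.nn.functional as F

def generator_step(G, D, he, ihc_real, lambda_adv=1.0, lambda_cons=10.0):
    ihc_fake = G(he)                                   # U-shaped generator (assumed)
    pred_fake = D(he, ihc_fake)                        # patch-based discriminator (assumed)
    adv_loss = F.binary_cross_entropy_with_logits(
        pred_fake, torch.ones_like(pred_fake))         # try to fool the discriminator
    # Consistency between synthesized and real IHC; the paper frames this as a
    # cross-entropy regularizer, approximated here on intensities clipped to [0, 1].
    cons_loss = F.binary_cross_entropy(
        ihc_fake.clamp(0, 1), ihc_real.clamp(0, 1))
    return lambda_adv * adv_loss + lambda_cons * cons_loss
```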

5.
Plant Methods ; 19(1): 77, 2023 Aug 01.
Article En | MEDLINE | ID: mdl-37528413

BACKGROUND: Grain count is crucial to wheat yield composition and to estimating yield parameters. However, traditional manual counting methods are time-consuming and labor-intensive. This study developed an advanced deep learning technique for a wheat grain segmentation and counting model. The model has been rigorously tested on three distinct wheat varieties, 'Bainong 307', 'Xinmai 26', and 'Jimai 336', and it has achieved unprecedented predictive counting accuracy. METHOD: Images of wheat ears were taken with a smartphone at the late stage of wheat grain filling. We used image processing technology to preprocess and normalize the images to 480 × 480 pixels. A CBAM-HRNet wheat grain segmentation and counting deep learning model based on the Convolutional Block Attention Module (CBAM) was constructed by combining deep learning, transfer learning, and an attention mechanism. Image processing algorithms and wheat grain texture features were used to build a grain counting and predictive counting model for wheat grains. RESULTS: The CBAM-HRNet model using the CBAM was the best for wheat grain segmentation. Its segmentation accuracy of 92.04%, mean Intersection over Union (mIoU) of 85.21%, category mean pixel accuracy (mPA) of 91.16%, and recall of 91.16% demonstrate superior robustness compared with other models such as HRNet, PSPNet, DeepLabV3+, and U-Net. Method I for spike counting, which doubles the number of grains counted on one side of the spike to estimate the total number of grains, yields a coefficient of determination R2 of 0.85, a mean absolute error (MAE) of 1.53, and a mean relative error (MRE) of 2.91. In contrast, Method II, which sums the number of grains counted on both sides to determine the total number of grains, yields a coefficient of determination R2 of 0.92, an MAE of 1.15, and an MRE of 2.09%. CONCLUSIONS: The CBAM-HRNet wheat spike grain segmentation algorithm is a powerful solution that uses the CBAM to segment wheat spike grains and obtain richer semantic information. This model can effectively address the challenges of small-target image segmentation and under-fitting during training. Additionally, the spike grain counting model can quickly and accurately predict the grain count of wheat, providing algorithmic support for efficient and intelligent wheat yield estimation.
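A minimal PyTorch sketch of a CBAM block of the kind inserted into HRNet is shown below; channel and spatial attention are applied in sequence, and the reduction ratio and kernel size are conventional defaults rather than the paper's settings.

```python
# Hedged sketch of a Convolutional Block Attention Module (CBAM) block:
# channel attention followed by spatial attention, applied to a feature map.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(                      # shared MLP for channel attention
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))             # channel attention from avg pooling
        mx = self.mlp(x.amax(dim=(2, 3)))              # ... and from max pooling
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        avg_map = x.mean(dim=1, keepdim=True)          # spatial attention maps
        max_map = x.amax(dim=1, keepdim=True)
        return x * torch.sigmoid(self.spatial(torch.cat([avg_map, max_map], dim=1)))

out = CBAM(64)(torch.randn(1, 64, 120, 120))           # e.g. a 480x480 image downsampled 4x
```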

6.
Front Plant Sci ; 14: 1200901, 2023.
Article En | MEDLINE | ID: mdl-37645464

Aphis gossypii Glover is a major insect pest in cotton production and can cause yield reduction in severe cases. In this paper, we propose an A. gossypii infestation monitoring method that identifies the infestation level of A. gossypii at the cotton seedling stage, which can improve the efficiency of early warning and forecasting of A. gossypii and enable precise prevention and control according to the predicted infestation level. We used smartphones to collect A. gossypii infestation images and compiled an infestation image dataset. We then constructed, trained, and tested three different A. gossypii infestation recognition models based on the Faster Region-based Convolutional Neural Network (R-CNN), You Only Look Once (YOLO)v5, and single-shot detector (SSD) models. The results showed that the YOLOv5 model had the highest mean average precision (mAP) value (95.7%) and frames per second (FPS) value (61.73) under the same conditions. In studying the influence of different image resolutions on the performance of the YOLOv5 model, we found that YOLOv5s performed better than YOLOv5x in terms of overall performance, with the best performance at an image resolution of 640×640 (mAP of 96.8%, FPS of 71.43). A comparison with the more recent YOLOv8s showed that YOLOv5s still performed better. Finally, the trained model was deployed to an Android mobile device; mobile-side detection was best at an image resolution of 256×256, with an accuracy of 81.0% and an FPS of 6.98. The real-time recognition system established in this study can provide technical support for infestation forecasting and precise prevention of A. gossypii.
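As an illustration of the deployment path described above, the sketch below runs YOLOv5s inference through the official torch.hub entry point; the image file and the custom-weights path are hypothetical placeholders rather than the study's trained aphid model.

```python
# Hedged sketch of YOLOv5s inference as used for aphid detection above; the
# custom weights path and the input image are placeholders, not the study's files.
import torch

# Load YOLOv5s through the official ultralytics/yolov5 hub entry point.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
# For a trained aphid detector one would instead load custom weights, e.g.:
# model = torch.hub.load("ultralytics/yolov5", "custom", path="aphid_best.pt")

results = model("cotton_seedling.jpg", size=640)   # 640x640 gave the best mAP above
results.print()                                     # per-class counts and confidences
detections = results.pandas().xyxy[0]               # bounding boxes as a DataFrame
print(detections.head())
```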

7.
Heliyon ; 9(4): e15461, 2023 Apr.
Article En | MEDLINE | ID: mdl-37123973

Osteoarthritis (OA) is a progressive and chronic disease. Identifying the early stages of OA is important for the treatment and care of patients. However, most state-of-the-art methods use only single-modal data to predict disease status and thus usually ignore the complementary information in multi-modal data. In this study, we develop an integrated multi-modal learning method (MMLM) that uses an interpretable strategy to select and fuse clinical, imaging, and demographic features to classify the grade of early-stage knee OA. MMLM applies XGBoost and ResNet50 to extract two heterogeneous feature sets from the clinical data and the imaging data, respectively. We then integrate these extracted features with the demographic data. To avoid the negative effects of redundant features in a direct integration of multiple features, we propose an L1-norm-based optimization method to regularize the inter-correlations among the multiple features. MMLM was assessed using the Osteoarthritis Initiative (OAI) data set with machine learning classifiers. Extensive experiments demonstrate that MMLM improves the performance of the classifiers. Furthermore, a visual analysis of the important features in the multi-modal data verified the relations among the modalities when classifying the grade of knee OA.
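A minimal sketch of the multi-modal fusion idea, concatenating clinical, imaging, and demographic features and letting an L1 penalty suppress redundant ones, is given below; the saved feature files and the logistic-regression stand-in for the downstream classifier are assumptions, not the published MMLM code.

```python
# Hedged sketch of multi-modal feature fusion with an L1 penalty, in the spirit
# of MMLM; the feature extractors and file names are placeholder stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Assume these were produced beforehand: XGBoost-derived features from the
# clinical table, ResNet50 embeddings from knee radiographs, and demographics.
clinical_feats = np.load("xgb_clinical_features.npy")    # (n, d1)
imaging_feats = np.load("resnet50_image_features.npy")   # (n, d2)
demographics = np.load("demographics.npy")               # (n, d3), numeric covariates
labels = np.load("oa_grades.npy")                        # (n,)

X = StandardScaler().fit_transform(
    np.concatenate([clinical_feats, imaging_feats, demographics], axis=1))

# L1-regularized classifier: redundant fused features receive zero weights.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X, labels)
print("non-zero weights:", np.count_nonzero(clf.coef_))
```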

8.
IEEE J Biomed Health Inform ; 27(7): 3384-3395, 2023 Jul.
Article En | MEDLINE | ID: mdl-37023156

Identifying the subtypes of low-grade glioma (LGG) can help prevent brain tumor progression and patient death. However, the complicated non-linear relationships and high dimensionality of 3D brain MRI limit the performance of machine learning methods. It is therefore important to develop a classification method that can overcome these limitations. This study proposes a self-attention similarity-guided graph convolutional network (SASG-GCN) that uses constructed graphs to perform multi-classification (tumor-free (TF), WG, and TMG). In the SASG-GCN pipeline, we use a convolutional deep belief network and a self-attention similarity-based method to construct the vertices and edges of the graphs at the 3D MRI level, respectively. The multi-classification is performed by a two-layer GCN model. SASG-GCN is trained and evaluated on 402 3D MRI scans derived from the TCGA-LGG dataset. Empirical tests demonstrate that SASG-GCN accurately classifies the subtypes of LGG, achieving an accuracy of 93.62% and outperforming several other state-of-the-art classification methods. In-depth discussion and analysis reveal that the self-attention similarity-guided strategy improves the performance of SASG-GCN. Visualization further revealed differences between the glioma subtypes.
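The two-layer GCN classifier at the end of the pipeline can be sketched as follows in PyTorch, assuming the vertex features and the self-attention similarity adjacency matrix have already been constructed; the dimensions and the random placeholder graph are purely illustrative.

```python
# Hedged sketch of a two-layer GCN classifier over a precomputed graph; the
# graph (vertices from a convolutional deep belief network, edges from
# self-attention similarity) is assumed to be given as an adjacency matrix.
import torch
import torch.nn as nn

class TwoLayerGCN(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes=3):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, x, adj):
        # Symmetric normalization: D^-1/2 (A + I) D^-1/2
        a = adj + torch.eye(adj.size(0))
        d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
        a_hat = d_inv_sqrt @ a @ d_inv_sqrt
        h = torch.relu(self.w1(a_hat @ x))     # first graph convolution
        return self.w2(a_hat @ h)              # second graph convolution -> class scores

x = torch.randn(402, 128)                      # one feature vector per 3D MRI vertex
adj = (torch.rand(402, 402) > 0.9).float()     # placeholder similarity graph
logits = TwoLayerGCN(128, 64)(x, adj)          # (402, 3): TF / WG / TMG scores
```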


Brain Neoplasms , Glioma , Humans , Glioma/diagnostic imaging , Brain Neoplasms/diagnostic imaging , Brain , Head , Machine Learning
9.
Plant Methods ; 17(1): 49, 2021 May 03.
Article En | MEDLINE | ID: mdl-33941211

BACKGROUND: Accurately estimating winter wheat leaf area index (LAI) from unmanned aerial vehicle (UAV) hyperspectral imagery is crucial for crop growth monitoring, fertilization management, and the development of precision agriculture. METHODS: UAV hyperspectral imaging data, Analytical Spectral Devices (ASD) data, and LAI were obtained simultaneously at the main growth stages (jointing, booting, and filling) of various winter wheat varieties under various nitrogen fertilizer treatments. Characteristic bands related to LAI were extracted from the UAV hyperspectral data with different algorithms, including first derivative (FD), successive projections algorithm (SPA), competitive adaptive reweighted sampling (CARS), and competitive adaptive reweighted sampling combined with the successive projections algorithm (CARS_SPA). Furthermore, three machine learning modeling methods, partial least squares regression (PLSR), support vector machine regression (SVR), and extreme gradient boosting (XGBoost), were used to build LAI estimation models. RESULTS: The results show that the correlation coefficient between the UAV and ASD hyperspectral data is greater than 0.99, indicating that the UAV data can be used to estimate wheat growth information. The LAI bands selected by the different algorithms differed slightly among the 15 models built in this study. The XGBoost model using nine consecutive characteristic bands selected by the CARS_SPA algorithm as input proved to have the best performance, yielding a coefficient of determination of 0.89 for both the calibration and validation sets, indicating high accuracy. CONCLUSIONS: The XGBoost modeling method combined with the CARS_SPA algorithm can reduce the number of input variables and improve the efficiency of model operation. The results provide a reference and technical support for the nondestructive and rapid estimation of winter wheat LAI using UAVs.
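A minimal sketch of the band-selection-plus-XGBoost regression pipeline is shown below; the band indices stand in for the CARS_SPA selection and the file names are placeholders, so it illustrates the workflow rather than reproducing the study's models.

```python
# Hedged sketch of the band-selection + XGBoost regression pipeline for LAI;
# band indices and file names are placeholders, not the selected CARS_SPA bands.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

spectra = np.load("uav_hyperspectral_canopy.npy")   # (n_samples, n_bands) reflectance
lai = np.load("measured_lai.npy")                   # (n_samples,) ground-truth LAI
selected_bands = [12, 35, 47, 60, 71, 88, 102, 115, 130]   # stand-in for CARS_SPA output

X = spectra[:, selected_bands]
X_tr, X_te, y_tr, y_te = train_test_split(X, lai, test_size=0.3, random_state=0)

model = XGBRegressor(n_estimators=400, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("validation R2:", r2_score(y_te, model.predict(X_te)))
```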

10.
Plant Methods ; 17(1): 51, 2021 May 17.
Article En | MEDLINE | ID: mdl-34001195

BACKGROUND: Fractional vegetation cover (FVC) is an important parameter for evaluating crop-growth status. Optical remote-sensing techniques combined with the pixel dichotomy model (PDM) are widely used to estimate cropland FVC with medium to high spatial resolution on the ground. However, PDM-based FVC estimation is limited by effects stemming from the variation of crop canopy chlorophyll content (CCC). To overcome this difficulty, we propose herein a "fan-shaped method" (FSM) that uses a CCC spectral index (SI) and a vegetation SI to create a two-dimensional scatter map in which the three vertices represent high-CCC vegetation, low-CCC vegetation, and bare soil. The FVC at each pixel is determined based on the spatial location of the pixel in the two-dimensional scatter map, which mitigates the effects of CCC on the PDM. To evaluate the accuracy of FSM estimates of FVC, we analyze the spectra obtained from (a) the PROSAIL model and (b) a spectrometer mounted on an unmanned aerial vehicle platform. Specifically, we use both the proposed FSM and traditional remote-sensing FVC-estimation methods (linear regression, nonlinear regression, and the PDM) to estimate soybean FVC. RESULTS: Field soybean CCC measurements indicate that (a) soybean CCC increases continuously from the flowering growth stage to the later podding growth stage and then decreases in subsequent growth stages, and (b) the coefficient of variation of soybean CCC is very large in the later growth stages (31.58-35.77%) and over all growth stages (26.14%). FVC samples with low CCC are underestimated by the PDM. Linear and nonlinear regression underestimate (overestimate) FVC samples with low (high) CCC. The proposed FSM depends less on CCC and is thus a robust method that can be used for multi-stage FVC estimation of crops with strongly varying CCC. CONCLUSIONS: Estimates and maps of FVC based on the later growth stages and on multiple growth stages should account for the variation of crop CCC. The FSM mitigates the effect of CCC by applying the PDM at each CCC level and is a robust method for estimating FVC across multiple growth stages in which crop CCC varies greatly.
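For reference, the classical pixel dichotomy model that the FSM builds on can be written as FVC = (VI - VI_soil) / (VI_veg - VI_soil); the short sketch below implements this baseline with illustrative endmember values, while the FSM itself additionally varies the endmembers with a CCC index.

```python
# Hedged sketch of the classical pixel dichotomy model (PDM) that the fan-shaped
# method builds on: FVC is interpolated between bare-soil and full-vegetation
# endmembers of a vegetation index. Endmember values here are illustrative only.
import numpy as np

def pdm_fvc(vi, vi_soil, vi_veg):
    """FVC = (VI - VI_soil) / (VI_veg - VI_soil), clipped to [0, 1]."""
    return np.clip((vi - vi_soil) / (vi_veg - vi_soil), 0.0, 1.0)

ndvi = np.array([0.15, 0.45, 0.80])        # example pixel NDVI values
print(pdm_fvc(ndvi, vi_soil=0.10, vi_veg=0.85))
```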

11.
Plant Methods ; 16: 106, 2020.
Article En | MEDLINE | ID: mdl-32782453

BACKGROUND: Wheat yield is influenced by the number of ears per unit area, and manual counting has traditionally been used to estimate wheat yield. To realize rapid and accurate wheat ear counting, K-means clustering was used for the automatic segmentation of wheat ear images captured by hand-held devices. The segmented data set was constructed by creating four categories of image labels: non-wheat ear, one wheat ear, two wheat ears, and three wheat ears, and was then fed into a convolutional neural network (CNN) model for training and testing to reduce the complexity of the model. RESULTS: The recognition accuracies for the non-wheat ear, one wheat ear, two wheat ears, and three wheat ears categories were 99.8%, 97.5%, 98.07%, and 98.5%, respectively. The model R2 reached 0.96, the root mean square error (RMSE) was 10.84 ears, and the macro F1-score and micro F1-score both reached 98.47%, with the best performance observed during the late grain-filling stage (R2 = 0.99, RMSE = 3.24 ears). The model could also be applied to a UAV platform (R2 = 0.97, RMSE = 9.47 ears). CONCLUSIONS: Classifying segmented images, as opposed to performing target recognition, not only reduces the workload of manual annotation but also significantly improves the efficiency and accuracy of wheat ear counting, thus meeting the requirements of wheat yield estimation in the field environment.
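A minimal sketch of the K-means color-segmentation step that precedes the CNN classifier is shown below; the cluster count and the rule for picking the wheat-ear cluster are assumptions for illustration.

```python
# Hedged sketch of K-means color segmentation used before the CNN classifier;
# cluster count and the choice of the "ear" cluster are illustrative assumptions.
import cv2
import numpy as np
from sklearn.cluster import KMeans

img = cv2.imread("wheat_canopy.jpg")               # placeholder hand-held image
pixels = img.reshape(-1, 3).astype(np.float32)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(img.shape[:2])

# Keep the cluster whose mean color is brightest, assumed to correspond to ears.
ear_cluster = np.argmax(kmeans.cluster_centers_.sum(axis=1))
mask = (labels == ear_cluster).astype(np.uint8) * 255
cv2.imwrite("ear_mask.png", mask)                   # patches cut from this mask feed the CNN
```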

12.
Sensors (Basel) ; 20(8)2020 Apr 16.
Article En | MEDLINE | ID: mdl-32316216

Fusarium head blight (FHB) is a major disease threatening worldwide wheat production. FHB is a short-cycle disease and is highly destructive under conducive environments. To provide technical support for the rapid detection of FHB, we developed a new Fusarium disease index (FDI) based on spectral data covering 374-1050 nm. The study analyzed reflectance spectra of healthy and diseased wheat ears at the flowering and filling stages using hyperspectral imaging technology and the random forest method. The characteristic wavelengths selected by random forest were 570 nm and 678 nm for the late flowering stage, 565 nm and 661 nm for the early filling stage, and 560 nm and 663 nm for the combined stage (combining both flowering and filling stages). The FDI for each stage was derived from the wavebands of that stage. Compared with 16 other existing spectral indices, the FDI demonstrated a stronger ability to determine the severity of FHB: its coefficient of determination (R2) values exceeded 0.90 and its RMSEs were less than 0.08 in the models for each stage. Furthermore, the model for the combined stage performed well when applied to a single growth stage, but its performance was weaker than that of the models built for the two individual growth stages. The FDI therefore provides a new tool to detect FHB at different growth stages in wheat.
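The abstract does not give the functional form of the FDI, so the sketch below shows only a generic two-band normalized-difference index computed at the selected wavelengths, as an assumed illustration of how such a stage-specific index could be evaluated on a reflectance spectrum.

```python
# Hedged sketch: the exact form of the Fusarium disease index (FDI) is not given
# above, so a generic normalized-difference index over the two selected
# wavelengths is shown purely as an illustration, not as the published FDI.
import numpy as np

def normalized_difference(reflectance, wavelengths, band_a, band_b):
    """Generic two-band index (R_a - R_b) / (R_a + R_b) from a reflectance spectrum."""
    ia = int(np.argmin(np.abs(wavelengths - band_a)))
    ib = int(np.argmin(np.abs(wavelengths - band_b)))
    ra, rb = reflectance[..., ia], reflectance[..., ib]
    return (ra - rb) / (ra + rb + 1e-9)

wl = np.arange(374, 1051)                      # 374-1050 nm sampling grid
spectrum = np.random.rand(wl.size)             # placeholder wheat-ear spectrum
print(normalized_difference(spectrum, wl, 570, 678))   # late-flowering band pair
```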


Fusarium/pathogenicity , Hyperspectral Imaging/methods , Image Processing, Computer-Assisted/methods , Plant Diseases , Triticum/microbiology , China , Crops, Agricultural/chemistry , Crops, Agricultural/growth & development , Crops, Agricultural/microbiology , Flowers , Hyperspectral Imaging/instrumentation , Triticum/chemistry , Triticum/growth & development
13.
Onco Targets Ther ; 8: 1157-64, 2015.
Article En | MEDLINE | ID: mdl-26045670

BACKGROUND: Fibroblast growth factor receptor 4 (FGFR4) has been shown to correlate with progression and prognosis in many cancers. However, the significance of FGFR4 in non-small-cell lung cancer (NSCLC) is still not well elucidated. METHODS: We detected FGFR4 expression in 237 NSCLC samples by immunohistochemistry and analyzed the correlation between FGFR4 and clinicopathologic features of NSCLC with the chi-square test. We then evaluated the prognostic value of FGFR4 with Kaplan-Meier survival curves and a Cox regression model. By regulating FGFR4 expression through overexpression or knockdown, we assessed the role of FGFR4 in NSCLC cell proliferation. RESULTS: FGFR4 expression was high in NSCLC (46.8%, 111/237) and was significantly associated with tumor diameter (P=0.039). In univariate (P=0.009) and multivariate (P=0.002) analyses, FGFR4 was identified as an independent prognostic factor in NSCLC. Moreover, FGFR4 promoted the proliferation of NSCLC cell lines. CONCLUSION: FGFR4 is an independent prognostic biomarker in NSCLC. FGFR4 can accelerate the proliferation of NSCLC cell lines, indicating that FGFR4 could be a potential drug target for NSCLC.
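A minimal sketch of the survival workflow described here, Kaplan-Meier curves stratified by FGFR4 expression plus a Cox proportional hazards model, is given below using the lifelines package; the cohort file and column names are hypothetical.

```python
# Hedged sketch of the survival analysis described above (Kaplan-Meier curves by
# FGFR4 expression group and a Cox model); file and column names are placeholders.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("nsclc_cohort.csv")     # hypothetical: time, event, fgfr4_high, numeric covariates

kmf = KaplanMeierFitter()
for group, sub in df.groupby("fgfr4_high"):
    kmf.fit(sub["time"], event_observed=sub["event"], label=f"FGFR4 high={group}")
    kmf.plot_survival_function()

cph = CoxPHFitter()
cph.fit(df[["time", "event", "fgfr4_high", "age", "stage"]],
        duration_col="time", event_col="event")
cph.print_summary()                      # hazard ratios and P values
```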

14.
Article En | MEDLINE | ID: mdl-24084481

The stability and photoelectron spectroscopy of Ga(n)As2 (n = 1-9) clusters have been studied using first-principles calculations based on density functional theory (DFT). Our calculations reveal that the stability of the Ga(n)As2 (n = 1-9) clusters tends to increase as the total number of atoms increases. The calculated second-order differences of the binding energy show an even-odd alternation, with the values for the even-numbered clusters much larger than those for the odd-numbered ones. The energy gap Egap shows the same even-odd alternation, i.e., the Egap of the even-numbered clusters is much larger than that of the odd-numbered ones. The Egap of the clusters lies between 0.2 eV and 0.6 eV, which provides a reference for research on GaAs defect levels. The Ga(n)As2 (n = 1-9) clusters have the potential to detect and emit THz radiation because their ground-state vibrational frequencies lie in the THz range.
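The second-order difference of the binding energy that underlies the even-odd stability analysis, Δ2E(n) = E(n+1) + E(n-1) - 2E(n), can be computed as in the short sketch below; the energy values are illustrative placeholders, not the calculated DFT results.

```python
# Hedged sketch of the second-order energy difference used to judge cluster
# stability: Delta2_E(n) = E(n+1) + E(n-1) - 2*E(n); energies are placeholders.
import numpy as np

total_energy = np.array([-5.1, -9.8, -14.2, -19.6, -23.9,
                         -29.5, -33.6, -39.4, -43.3])   # E(GanAs2), n = 1..9 (illustrative)

def second_order_difference(energies):
    return energies[2:] + energies[:-2] - 2.0 * energies[1:-1]

print(second_order_difference(total_energy))   # even-odd alternation shows up as sign flips
```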


Arsenicals/chemistry , Gallium/chemistry , Models, Molecular , Photoelectron Spectroscopy , Quantum Theory
...