Results 1 - 20 of 875
1.
Sci Rep ; 14(1): 18124, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103484

ABSTRACT

Printed Circuit Boards (PCBs) are key components of modern electronic technologies. During the production of these boards, defects may occur. Several methods have been proposed to detect PCB defects. However, detecting significantly smaller and visually unrecognizable defects has been a long-standing challenge. Existing two-stage and multi-stage object detectors that use only one layer of the backbone, such as ResNet's third layer (C4) or fourth layer (C5), suffer from low accuracy, and those that use multi-layer feature map extractors, such as the Feature Pyramid Network (FPN), incur higher computational cost. Motivated by these challenges, we propose a robust, less computationally intensive, and plug-and-play Attentive Context and Semantic Enhancement Module (ACASEM) for two-stage and multi-stage detectors to enhance PCB defect detection. This module consists of two main parts, namely adaptable feature fusion and attention sub-modules. The proposed module, ACASEM, takes in feature maps from different layers of the backbone and fuses them in a way that enriches the resulting feature maps with more context and semantic information. We test our module with the state-of-the-art two-stage object detectors Faster R-CNN and Double-Head R-CNN, and with the multi-stage Cascade R-CNN detector, on the DeepPCB and Augmented PCB Defect datasets. Empirical results demonstrate improvement in the accuracy of defect detection.
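
A minimal sketch (not the authors' implementation) of the kind of multi-layer backbone fusion with channel attention described above; the module name, channel sizes, and the SE-style gate are illustrative assumptions:

```python
# Illustrative fusion of two backbone levels (e.g., C4 and C5) with a channel-attention gate
# before handing the result to a two-stage detector head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionWithAttention(nn.Module):
    def __init__(self, c4_channels=1024, c5_channels=2048, out_channels=256):
        super().__init__()
        self.reduce_c4 = nn.Conv2d(c4_channels, out_channels, kernel_size=1)
        self.reduce_c5 = nn.Conv2d(c5_channels, out_channels, kernel_size=1)
        # Simple squeeze-and-excitation style channel attention on the fused map.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_channels, out_channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels // 4, out_channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, c4, c5):
        p4 = self.reduce_c4(c4)
        # Upsample the deeper, semantically richer C5 map to C4's spatial size.
        p5 = F.interpolate(self.reduce_c5(c5), size=p4.shape[-2:], mode="nearest")
        fused = p4 + p5                      # context + semantics
        return fused * self.attn(fused)      # re-weight channels

# fused = FusionWithAttention()(torch.randn(1, 1024, 50, 50), torch.randn(1, 2048, 25, 25))
```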

2.
Front Plant Sci ; 15: 1406593, 2024.
Article in English | MEDLINE | ID: mdl-39109070

ABSTRACT

Color-changing melons are a cucurbit plant that combines ornamental and food value. With the aim of increasing the efficiency of harvesting Color-changing melon fruits while reducing the deployment cost of detection models on agricultural equipment, this study presents an improved YOLOv8s network that uses model pruning and knowledge distillation techniques. The method first merges the Dilated Wise Residual (DWR) and Dilated Reparam Block (DRB) to reconstruct the C2f module in the backbone for better feature fusion. Next, we designed a multilevel scale fusion feature pyramid network (HS-PAN) to enrich semantic information and strengthen localization information, enhancing the detection of Color-changing melon fruits at different maturity levels. Finally, we used Layer-Adaptive Sparsity Pruning and Block-Correlation Knowledge Distillation to simplify the model and recover its accuracy. On the Color-changing melon image dataset, the improved model reaches an mAP0.5 of 96.1%, its detection speed is 9.1% faster than YOLOv8s, its parameter count is reduced from 6.47M to 1.14M, and its computation drops from 22.8 GFLOPs to 7.5 GFLOPs. The model's size also decreases from 12.64MB to 2.47MB, and the improved YOLOv8 clearly outperforms other lightweight networks. The experimental results verify the effectiveness of the proposed method in complex scenarios, providing a reference and technical support for the subsequent automatic picking of Color-changing melons.
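
The paper applies layer-adaptive sparsity pruning; as a rough, assumption-laden stand-in, the sketch below shows plain global L1-magnitude pruning of a model's convolution weights with PyTorch's pruning utilities:

```python
# Illustrative only: global L1-magnitude pruning of a YOLO-style model's conv weights.
# The paper's layer-adaptive scheme sets per-layer sparsity; this simpler stand-in just
# shows the mechanics of pruning and folding the masks back into the weights.
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_conv_layers(model: nn.Module, amount: float = 0.5) -> nn.Module:
    params = [(m, "weight") for m in model.modules() if isinstance(m, nn.Conv2d)]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=amount)
    for module, name in params:
        prune.remove(module, name)  # make the pruning permanent
    return model
```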

3.
MethodsX ; 13: 102839, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39105091

ABSTRACT

Melanoma is a type of skin cancer that poses significant health risks and requires early detection for effective treatment. This study proposes a novel approach that integrates a transformer-based model with hand-crafted texture features and Gray Wolf Optimization, aiming to enhance the efficiency of melanoma classification. Preprocessing involves standardizing image dimensions and enhancing image quality through median filtering. Texture features, including GLCM and LBP, are extracted to capture spatial patterns indicative of melanoma. The GWO algorithm is applied to select the most discriminative features. A transformer-based decoder is then employed for classification, leveraging attention mechanisms to capture contextual dependencies. Experimental validation on the HAM10000 and ISIC2019 datasets showcases the effectiveness of the proposed methodology. The transformer-based model, integrated with hand-crafted texture features and guided by Gray Wolf Optimization, achieves outstanding results: the proposed method performed well in melanoma detection, achieving an accuracy of 99.54% and an F1-score of 99.11% on the HAM10000 dataset, and an accuracy of 99.47% and an F1-score of 99.25% on the ISIC2019 dataset. • We use the concepts of LBP and GLCM to extract features from the skin lesion images. • The Gray Wolf Optimization (GWO) algorithm is employed for feature selection. • A decoder based on Transformers is utilized for melanoma classification.
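
A rough sketch of the hand-crafted texture branch (GLCM statistics plus an LBP histogram) using scikit-image; the distances, angles, and LBP settings are illustrative choices, not the paper's exact configuration:

```python
# Texture features from an 8-bit grayscale lesion image: GLCM properties + uniform LBP histogram.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def texture_features(gray_img: np.ndarray) -> np.ndarray:
    # GLCM contrast/homogeneity/energy/correlation at a few offsets (expects uint8 input).
    glcm = graycomatrix(gray_img, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = np.hstack([graycoprops(glcm, p).ravel()
                            for p in ("contrast", "homogeneity", "energy", "correlation")])
    # Uniform LBP histogram (P=8 neighbours, radius 1 -> 10 possible codes).
    lbp = local_binary_pattern(gray_img, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)
    return np.hstack([glcm_feats, hist])
```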

4.
Skin Res Technol ; 30(8): e13783, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39113617

ABSTRACT

BACKGROUND: In recent years, the increasing prevalence of skin cancers, particularly malignant melanoma, has become a major public health concern. The development of accurate automated segmentation techniques for skin lesions holds immense potential for alleviating the burden on medical professionals and is of substantial clinical importance for the early identification and treatment of skin cancer. Nevertheless, the irregular shape, uneven color, and noise interference of skin lesions present significant challenges to precise segmentation. Therefore, it is crucial to develop a high-precision, intelligent skin lesion segmentation framework for clinical treatment. METHODS: A precision-driven segmentation model for skin cancer images is proposed based on the Transformer U-Net, called BiADATU-Net, which integrates a deformable attention Transformer and bidirectional attention blocks into the U-Net. The encoder utilizes a deformable attention Transformer with a dual attention block, allowing adaptive learning of global and local features. The decoder incorporates specifically tailored scSE attention modules within skip-connection layers to capture image-specific context information for strong feature fusion. Additionally, deformable convolution is aggregated into the two different attention blocks to learn irregular lesion features for high-precision prediction. RESULTS: A series of experiments are conducted on four skin cancer image datasets (ISIC2016, ISIC2017, ISIC2018, and PH2). The findings show that our model exhibits satisfactory segmentation performance, achieving an accuracy of over 96% on all datasets. CONCLUSION: Our experimental results validate that the proposed BiADATU-Net achieves competitive performance compared with state-of-the-art methods and holds promise for skin lesion segmentation.


Subject(s)
Melanoma , Skin Neoplasms , Humans , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology , Melanoma/diagnostic imaging , Melanoma/pathology , Algorithms , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods , Dermoscopy/methods , Deep Learning
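
For reference, a generic scSE (concurrent spatial and channel squeeze-and-excitation) block as commonly implemented in the literature; the exact variant inside BiADATU-Net's skip connections may differ:

```python
# scSE block: channel attention (cSE) and spatial attention (sSE) applied in parallel and summed.
import torch
import torch.nn as nn

class SCSEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel squeeze-and-excitation (cSE).
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial squeeze-and-excitation (sSE).
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.cse(x) + x * self.sse(x)
```
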
5.
Front Plant Sci ; 15: 1415884, 2024.
Article in English | MEDLINE | ID: mdl-39119504

ABSTRACT

The pollination process of kiwifruit flowers plays a crucial role in kiwifruit yield. Achieving accurate and rapid identification of the four stages of kiwifruit flowers is essential for enhancing pollination efficiency. In this study, to improve the efficiency of kiwifruit pollination, we propose a novel full-stage kiwifruit flower pollination detection algorithm named KIWI-YOLO, based on the fusion of frequency-domain features. Our algorithm leverages frequency-domain and spatial-domain information to improve the recognition of contour-detailed features and integrates decision-making with contextual information. Additionally, we incorporate the Bi-Level Routing Attention (BRA) mechanism with C3 to enhance the algorithm's focus on critical areas, resulting in accurate, lightweight, and fast detection. The algorithm achieves an mAP0.5 of 91.6% with only 1.8M parameters, and the AP of the Female and Male classes reaches 95% and 93.5%; these represent improvements of 3.8%, 1.2%, and 6.2% over the original algorithm. Furthermore, the Recall and F1-score of the algorithm are enhanced by 5.5% and 3.1%, respectively. Moreover, our model demonstrates a significant advantage in detection speed, taking only 0.016 s to process an image. The experimental results show that the proposed model can better assist kiwifruit pollination in precision agriculture and support the development of the kiwifruit industry.

6.
J Imaging Inform Med ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103564

ABSTRACT

Retinal vessel segmentation is crucial for the diagnosis of ophthalmic and cardiovascular diseases. However, retinal vessels are densely and irregularly distributed, with many capillaries blending into the background, and exhibit low contrast. Moreover, encoder-decoder-based networks for retinal vessel segmentation suffer from irreversible loss of detailed features due to repeated encoding and decoding, leading to incorrect segmentation of the vessels. Meanwhile, single-dimensional attention mechanisms have limitations, neglecting the importance of multidimensional features. To solve these issues, in this paper, we propose a detail-enhanced attention feature fusion network (DEAF-Net) for retinal vessel segmentation. First, the detail-enhanced residual block (DERB) module is proposed to strengthen the capacity for detailed representation, ensuring that intricate features are efficiently maintained during the segmentation of delicate vessels. Second, the multidimensional collaborative attention encoder (MCAE) module is proposed to optimize the extraction of multidimensional information. Then, the dynamic decoder (DYD) module is introduced to preserve spatial information during decoding and to reduce the information loss caused by upsampling operations. Finally, the proposed detail-enhanced feature fusion (DEFF) module, composed of the DERB, MCAE, and DYD modules, fuses feature maps from both encoding and decoding and achieves effective aggregation of multi-scale contextual information. Experiments conducted on the DRIVE, CHASEDB1, and STARE datasets, achieving sensitivities of 0.8305, 0.8784, and 0.8654 and AUCs of 0.9886, 0.9913, and 0.9911, respectively, demonstrate the performance of our proposed network, particularly in the segmentation of fine retinal vessels.

7.
Sci Rep ; 14(1): 18439, 2024 08 08.
Article in English | MEDLINE | ID: mdl-39117714

ABSTRACT

Accurate diagnosis of white blood cells from cytopathological images is a crucial step in evaluating leukaemia. In recent years, image classification methods based on fully convolutional networks have drawn extensive attention and achieved competitive performance in medical image classification. In this paper, we propose a white blood cell classification network called ResNeXt-CC for cytopathological images. First, we transform cytopathological images from the RGB color space to the HSV color space so as to precisely extract the texture features, color changes, and other details of white blood cells. Second, since cell classification primarily relies on distinguishing local characteristics, we design a cross-layer deep-feature fusion module to enhance the ability to extract discriminative information. Third, an efficient attention mechanism based on the ECANet module is used to improve the extraction of cell details. Finally, we combine the modified softmax loss function and the center loss function to train the network, thereby effectively addressing the problem of class imbalance and improving the network performance. The experimental results on the C-NMC 2019 dataset show that our proposed method has obvious advantages over existing classification methods, including ResNet-50, Inception-V3, DenseNet121, VGG16, CrossViT, Token-to-Token ViT, Deep ViT, and simple ViT, by about 5.5-20.43% in accuracy, 3.6-23.56% in F1-score, 3.5-25.71% in AUROC, and 8.1-36.98% in specificity, respectively.


Subject(s)
Leukocytes , Humans , Leukocytes/cytology , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Leukemia/pathology , Leukemia/classification , Algorithms , Deep Learning
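
For reference, a standard ECA-style channel attention block (global average pooling followed by a 1-D convolution over the channel descriptor), shown as a generic building block rather than the authors' code:

```python
# ECA attention: cheap local cross-channel interaction instead of a full SE bottleneck.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> channel descriptor (B, C)
        y = x.mean(dim=(2, 3))                       # global average pooling
        y = self.conv(y.unsqueeze(1)).squeeze(1)     # 1-D conv across channels
        return x * self.sigmoid(y)[:, :, None, None]
```
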
8.
Sci Rep ; 14(1): 18579, 2024 Aug 10.
Article in English | MEDLINE | ID: mdl-39127852

ABSTRACT

The effective detection of safflower in the field is crucial for implementing automated visual navigation and harvesting systems. Due to the small physical size of safflower clusters, their dense spatial distribution, and the complexity of field scenes, current target detection technologies face several challenges in safflower detection, such as insufficient accuracy and high computational demands. Therefore, this paper introduces an improved safflower target detection model based on YOLOv5, termed Safflower-YOLO (SF-YOLO). This model employs Ghost_conv to replace traditional convolution blocks in the backbone network, significantly enhancing computational efficiency. Furthermore, the CBAM attention mechanism is integrated into the backbone network, and a combined L_CIoU + NWD loss function is introduced to allow more precise feature extraction, enhanced adaptive fusion capabilities, and accelerated loss convergence. Anchor boxes, updated through K-means clustering, replace the original anchors, enabling the model to better adapt to the multi-scale information of safflowers in the field. Data augmentation techniques such as Gaussian blur, noise addition, sharpening, and channel shuffling are applied to the dataset to maintain robustness against variations in lighting, noise, and viewing angle. Experimental results demonstrate that SF-YOLO surpasses the original YOLOv5s model, with GFLOPs and Params reduced from 15.8 to 13.2 G and from 7.013 to 5.34 M, respectively, representing decreases of 16.6% and 23.9%. Concurrently, SF-YOLO's mAP0.5 increases by 1.3%, reaching 95.3%. This work enhances the accuracy of safflower detection in complex agricultural environments, providing a reference for subsequent autonomous visual navigation and automated non-destructive harvesting technologies in safflower operations.
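
An illustrative sketch of the anchor update step: cluster the (width, height) pairs of labelled safflower boxes with k-means and take the cluster centres as new anchors; the file name and anchor count are assumptions:

```python
# Re-estimate detector anchors from ground-truth box sizes via k-means.
import numpy as np
from sklearn.cluster import KMeans

def estimate_anchors(box_wh: np.ndarray, n_anchors: int = 9) -> np.ndarray:
    """box_wh: (N, 2) array of box widths and heights, e.g. scaled to the network input size."""
    km = KMeans(n_clusters=n_anchors, n_init=10, random_state=0).fit(box_wh)
    anchors = km.cluster_centers_
    return anchors[np.argsort(anchors.prod(axis=1))]  # sort by area, small to large

# anchors = estimate_anchors(np.loadtxt("safflower_boxes_wh.txt"))  # hypothetical label export
```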

9.
BMC Bioinformatics ; 25(1): 275, 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39179993

ABSTRACT

BACKGROUND: The rise of network pharmacology has led to the widespread use of network-based computational methods in predicting drug-target interactions (DTI). However, existing DTI prediction models typically rely on a limited amount of data to extract drug and target features, potentially affecting the comprehensiveness and robustness of the features. In addition, although multiple networks are used for DTI prediction, the integration of heterogeneous information often involves simplistic aggregation and attention mechanisms, which may impose certain limitations. RESULTS: MSH-DTI, a deep learning model for predicting drug-target interactions, is proposed in this paper. The model uses self-supervised learning methods to obtain drug and target structure features. A Heterogeneous Interaction-enhanced Feature Fusion Module is designed for multi-graph construction, and graph convolutional networks are used to extract node features. With the help of an attention mechanism, the model focuses on the important parts of different features for prediction. Experimental results show that the AUROC and AUPR of MSH-DTI are 0.9620 and 0.9605, respectively, outperforming other models on the DTINet dataset. CONCLUSION: The proposed MSH-DTI is a helpful tool for discovering drug-target interactions, which is also validated through case studies in predicting new DTIs.


Subject(s)
Deep Learning , Supervised Machine Learning , Computational Biology/methods , Network Pharmacology/methods
10.
Comput Biol Med ; 181: 109048, 2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39182368

ABSTRACT

Neuropeptides are the most ubiquitous neurotransmitters in the immune system, regulating various biological processes. Neuropeptides play a significant role in the discovery of new drugs and targets for nervous system disorders. Traditional experimental methods for identifying neuropeptides are time-consuming and costly. Although several computational methods have been developed to predict neuropeptides, the accuracy is still not satisfactory due to the limited representability of the extracted features. In this work, we propose an efficient and interpretable model, NeuroPpred-SHE, for predicting neuropeptides by selecting the optimal feature subset from both hand-crafted features and embeddings of a protein language model. Specifically, we first employed a pre-trained T5 protein language model to extract embedding features and twelve other encoding methods to extract hand-crafted features from peptide sequences. Secondly, we fused the embedding features and hand-crafted features to enhance the feature representability. Thirdly, we utilized random forest (RF), Max-Relevance and Min-Redundancy (mRMR), and eXtreme Gradient Boosting (XGBoost) methods to select the optimal feature subset from the fused features. Finally, we employed five machine learning methods (GBDT, XGBoost, SVM, MLP, and LightGBM) to build the models. Our results show that the model based on GBDT achieves the best performance. Furthermore, our final model was compared with other state-of-the-art methods on an independent test set; the results indicate that our model achieves an AUROC of 97.8%, which is higher than all the other state-of-the-art predictors. Our model is available at: https://github.com/wenjean/NeuroPpred-SHE.
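
A simplified sketch of the fuse-then-select-then-classify pipeline with scikit-learn: embedding and hand-crafted features are concatenated, ranked by random-forest importance, and the top-k columns feed a gradient-boosting classifier (the mRMR and XGBoost selection steps used in the paper are omitted here):

```python
# Feature fusion, RF-importance-based selection, and a GBDT-style classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

def train_fused_model(emb_feats, handcrafted_feats, labels, top_k=256):
    X = np.hstack([emb_feats, handcrafted_feats])               # feature fusion
    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, labels)
    keep = np.argsort(rf.feature_importances_)[::-1][:top_k]    # keep most important columns
    clf = GradientBoostingClassifier(random_state=0).fit(X[:, keep], labels)
    return clf, keep
```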

11.
Front Plant Sci ; 15: 1416940, 2024.
Article in English | MEDLINE | ID: mdl-39184581

ABSTRACT

Introduction: Effective pest management is important during the natural growth phases of cotton in the wild. Cotton fields are infested with "tiny pests" (smaller than 32×32 pixels) and "very tiny pests" (smaller than 16×16 pixels) during growth, which makes it difficult for common object detection models to detect them accurately and, in turn, to support sound agricultural decisions. Methods: In this study, we proposed a framework for detecting "tiny pests" and "very tiny pests" in wild cotton fields, named SRNet-YOLO. SRNet-YOLO includes a YOLOv8 feature extraction module, a feature map super-resolution reconstruction module (FM-SR), and a fusion mechanism based on BiFormer attention (BiFormerAF). Specifically, the FM-SR module works at the feature-map level to recover important details: it reconstructs the P5 feature map to the size of the P3 layer. The BiFormerAF module then fuses this reconstructed map with the P3 layer, which greatly improves detection performance and compensates for features that may be lost during reconstruction. Additionally, to validate the performance of our method for "tiny pests" and "very tiny pests" detection in cotton fields, we developed a large dataset, named Cotton-Yellow-Sticky-2023, which collects pest images from yellow sticky traps. Results: Through comprehensive experimental verification, we demonstrate that our proposed framework achieves exceptional performance. Our method achieved 78.2% mAP on the "tiny pests" test set, surpassing leading detection models such as YOLOv3, YOLOv5, YOLOv7, and YOLOv8 by 6.9%, 7.2%, 5.7%, and 4.1%, respectively. Meanwhile, our results on "very tiny pests" reached 57% mAP, which is 32.2% higher than YOLOv8. To verify the generalizability of the model, our experiments on the low-resolution Yellow Sticky Traps dataset still achieved the highest mAP of 92.8%. Discussion: The above experimental results indicate that our model not only helps solve the problem of tiny pests in cotton fields but also generalizes well and can be used for the detection of tiny pests in other crops.
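
A loose sketch of the FM-SR data flow: the deep P5 map is upsampled to P3's resolution with a small learned refinement and fused with P3. The actual module uses super-resolution reconstruction and BiFormer-attention fusion; channel sizes and layer choices here are assumptions:

```python
# Toy P5 -> P3 reconstruction and fusion, illustrating the data flow only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class P5ToP3Fusion(nn.Module):
    def __init__(self, p5_channels=512, p3_channels=128):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(p5_channels, p3_channels, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(p3_channels, p3_channels, 3, padding=1),
        )
        self.fuse = nn.Conv2d(2 * p3_channels, p3_channels, 1)

    def forward(self, p3, p5):
        up = F.interpolate(p5, size=p3.shape[-2:], mode="bilinear", align_corners=False)
        up = self.refine(up)                       # stand-in for the SR reconstruction
        return self.fuse(torch.cat([p3, up], 1))   # stand-in for the BiFormerAF fusion
```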

12.
J Imaging Inform Med ; 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39147886

ABSTRACT

Accurate segmentation of skin lesions in dermoscopic images is of key importance for quantitative analysis of melanoma. Although existing medical image segmentation methods significantly improve skin lesion segmentation, they still have limitations in extracting local features with global information, do not handle challenging lesions well, and usually have a large number of parameters and high computational complexity. To address these issues, this paper proposes an efficient adaptive attention and convolutional fusion network for skin lesion segmentation (EAAC-Net). We designed two parallel encoders, where the efficient adaptive attention feature extraction module (EAAM) adaptively establishes global spatial dependence and global channel dependence by constructing the adjacency matrix of the directed graph and can adaptively filter out the least relevant tokens at the coarse-grained region level, thus reducing the computational complexity of the self-attention mechanism. The efficient multiscale attention-based convolution module (EMA⋅C) utilizes multiscale attention for cross-space learning of local features extracted from the convolutional layer to enhance the representation of richly detailed local features. In addition, we designed a reverse attention feature fusion module (RAFM) to enhance the effective boundary information gradually. To validate the performance of our proposed network, we compared it with other methods on ISIC 2016, ISIC 2018, and PH2 public datasets, and the experimental results show that EAAC-Net has superior segmentation performance under commonly used evaluation metrics.
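
For illustration, a generic reverse-attention step of the kind RAFM builds on: encoder features are weighted by (1 - sigmoid(coarse prediction)) so the network attends to boundary and missed regions; this is a sketch, not the paper's exact module:

```python
# Reverse-attention fusion against a coarse prediction from a deeper decoder stage.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseAttentionFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feat: torch.Tensor, coarse_pred: torch.Tensor) -> torch.Tensor:
        # coarse_pred: (B, 1, h, w) logits from a deeper stage, resized to feat's resolution.
        pred = F.interpolate(coarse_pred, size=feat.shape[-2:], mode="bilinear",
                             align_corners=False)
        reverse = 1.0 - torch.sigmoid(pred)        # emphasise uncertain / boundary pixels
        return self.conv(feat * reverse) + feat    # residual fusion
```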

13.
Med Phys ; 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39137295

ABSTRACT

BACKGROUND: Precise glioma segmentation from multi-parametric magnetic resonance (MR) images is essential for brain glioma diagnosis. However, due to the indistinct boundaries between tumor sub-regions and the heterogeneous appearances of gliomas in volumetric MR scans, designing a reliable and automated glioma segmentation method is still challenging. Although existing 3D Transformer-based or convolution-based segmentation networks have obtained promising results via multi-modal feature fusion strategies or contextual learning methods, they widely lack the capability of hierarchical interactions between different modalities and cannot effectively learn comprehensive feature representations related to all glioma sub-regions. PURPOSE: To overcome these problems, in this paper, we propose a 3D hierarchical cross-modality interaction network (HCMINet) using Transformers and convolutions for accurate multi-modal glioma segmentation, which leverages an effective hierarchical cross-modality interaction strategy to sufficiently learn modality-specific and modality-shared knowledge correlated to glioma sub-region segmentation from multi-parametric MR images. METHODS: In the HCMINet, we first design a hierarchical cross-modality interaction Transformer (HCMITrans) encoder to hierarchically encode and fuse heterogeneous multi-modal features by Transformer-based intra-modal embeddings and inter-modal interactions in multiple encoding stages, which effectively captures complex cross-modality correlations while modeling global contexts. Then, we pair the HCMITrans encoder with a modality-shared convolutional encoder to construct the dual-encoder architecture in the encoding stage, which can learn the abundant contextual information from global and local perspectives. Finally, in the decoding stage, we present a progressive hybrid context fusion (PHCF) decoder to progressively fuse local and global features extracted by the dual-encoder architecture, which utilizes the local-global context fusion (LGCF) module to efficiently alleviate the contextual discrepancy among the decoding features. RESULTS: Extensive experiments are conducted on two public and competitive glioma benchmark datasets, including the BraTS2020 dataset with 494 patients and the BraTS2021 dataset with 1251 patients. Results show that our proposed method outperforms existing Transformer-based and CNN-based methods using other multi-modal fusion strategies in our experiments. Specifically, the proposed HCMINet achieves state-of-the-art mean DSC values of 85.33% and 91.09% on the BraTS2020 online validation dataset and the BraTS2021 local testing dataset, respectively. CONCLUSIONS: Our proposed method can accurately and automatically segment glioma regions from multi-parametric MR images, which is beneficial for the quantitative analysis of brain gliomas and helpful for reducing the annotation burden of neuroradiologists.

14.
Food Chem ; 460(Pt 3): 140795, 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39137577

ABSTRACT

Beef is an important food product in human nutrition, and evaluating its quality and safety deserves attention. Non-destructive determination of beef quality by image processing methods shows great potential for food safety, as it helps prevent wastage. Traditionally, beef quality determination by image processing has been based on handcrafted color features. It is, however, difficult to determine meat quality based on a color space model alone. This study introduces an effective beef quality classification approach that concatenates learning-based global features and handcrafted color features. According to the experimental results, the convVGG16 + HLS + HSV + RGB + Bi-LSTM model achieved high performance: its accuracy, precision, recall, F1-score, AUC, Jaccard index, and MCC values were 0.989, 0.990, 0.989, 0.990, 0.992, 0.979, and 0.983, respectively.
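
A rough sketch of the hand-crafted colour branch: per-channel histograms from the RGB, HSV, and HLS representations concatenated into one vector (to be fused with the CNN features); bin counts and normalisation are arbitrary choices:

```python
# Hand-crafted colour features across three colour spaces with OpenCV.
import cv2
import numpy as np

def color_features(bgr_img: np.ndarray, bins: int = 32) -> np.ndarray:
    spaces = [cv2.cvtColor(bgr_img, code) for code in
              (cv2.COLOR_BGR2RGB, cv2.COLOR_BGR2HSV, cv2.COLOR_BGR2HLS)]
    feats = []
    for img in spaces:
        for ch in range(3):
            hist = cv2.calcHist([img], [ch], None, [bins], [0, 256]).ravel()
            feats.append(hist / (hist.sum() + 1e-8))   # normalise each histogram
    return np.concatenate(feats)
```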

15.
Front Neuroinform ; 18: 1403732, 2024.
Article in English | MEDLINE | ID: mdl-39139696

ABSTRACT

Introduction: Brain diseases, particularly the classification of gliomas and brain metastases and the prediction of hemorrhagic transformation (HT) in strokes, pose significant challenges in healthcare. Existing methods, relying predominantly on clinical data or imaging-based techniques such as radiomics, often fall short of satisfactory classification accuracy. These methods fail to adequately capture the nuanced features crucial for accurate diagnosis, often hindered by noise and the inability to integrate information across multiple scales. Methods: We propose a novel approach, termed M3, that combines mask attention mechanisms with multi-scale feature fusion for multimodal brain disease classification, aiming to extract features highly relevant to the disease. The extracted features are dimensionally reduced using Principal Component Analysis (PCA) and then classified with a Support Vector Machine (SVM) to obtain the predictive results. Results: Our methodology underwent rigorous testing on multi-parametric MRI datasets for both brain tumors and strokes. The results demonstrate a significant improvement in addressing critical clinical challenges, including the classification of gliomas and brain metastases and the prediction of hemorrhagic stroke transformations. Ablation studies further validate the effectiveness of our attention mechanism and feature fusion modules. Discussion: These findings underscore the potential of our approach to meet and exceed current clinical diagnostic demands, offering promising prospects for enhancing healthcare outcomes in the diagnosis and treatment of brain diseases.
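
A minimal version of the classification stage described above, assuming the attention/fusion network has already produced a feature matrix X: standardise, reduce with PCA, and classify with an SVM:

```python
# PCA + SVM classification of pre-extracted multimodal features.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def evaluate(X, y):
    clf = make_pipeline(StandardScaler(),
                        PCA(n_components=0.95),      # keep 95% of the variance
                        SVC(kernel="rbf", C=1.0))
    return cross_val_score(clf, X, y, cv=5)          # 5-fold accuracy scores
```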

16.
Front Optoelectron ; 17(1): 28, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39141164

ABSTRACT

Restricted by lighting conditions, images captured at night tend to suffer from color aberration, noise, and other unfavorable factors, making them difficult to use for subsequent vision-based applications. To solve this problem, we propose a two-stage, size-controllable low-light enhancement method named Dual Fusion Enhancement Net (DFEN). The whole algorithm is built on a double U-Net structure, implementing brightness adjustment and detail revision respectively. A dual-branch feature fusion module is adopted to enhance its feature extraction and aggregation ability. We also design a learnable regularized attention module to balance the enhancement effect across different regions. Besides, we introduce a cosine training strategy to smooth the transition of the training target from the brightness adjustment stage to the detail revision stage. The proposed DFEN is tested on several low-light datasets, and the experimental results demonstrate that the algorithm achieves superior enhancement results with a similar number of parameters. It is worth noting that the lightest DFEN model reaches 11 FPS for images of size 1224×1024 on an RTX 3090 GPU.
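
One way to realise the described cosine transition is a weight that decays from 1 to 0 over training, blending the brightness-adjustment objective into the detail-revision objective; this formulation is an assumption, not the paper's exact schedule:

```python
# Cosine transition weight between two training objectives.
import math

def cosine_transition(step: int, total_steps: int) -> float:
    w = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))  # 1 -> 0 over training
    return w  # e.g. total_loss = w * brightness_loss + (1 - w) * detail_loss
```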

17.
Technol Health Care ; 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39177617

ABSTRACT

BACKGROUND: Brain tumors are an extremely dangerous disease with a very high mortality rate worldwide. Detecting brain tumors accurately is crucial, and the varying appearance of tumor cells and the dimensional irregularities in their growth pose a significant challenge for detection algorithms. Numerous algorithms are currently used for this purpose, ranging from transform-based methods to those rooted in machine learning, all aiming to improve detection accuracy despite the complexities of identifying brain tumor cells. The major limitation of these algorithms lies in mapping the extracted brain tumor features into the classification algorithms. OBJECTIVE: To employ a combination of transform methods to extract texture features from brain tumor images. METHODS: This paper employs a combination of transform methods based on sub-band decomposition for texture feature extraction from MRI scans, hybrid feature optimization using firefly and glow-worm algorithms for feature selection, an MKSVM algorithm and a stacking ensemble classifier for classification, and fusion of the features from the different feature extraction methods. RESULTS: The algorithm under consideration has been implemented in MATLAB, utilizing datasets from BRATS (Brain Tumor Segmentation) for the years 2013, 2015, and 2018. These datasets serve as the foundation for testing and validating the algorithm's performance across different time periods, providing a comprehensive assessment of its effectiveness in detecting brain tumors. The proposed algorithm achieves detection accuracy, sensitivity, and specificity of up to 98%, 99%, and 99.5%, respectively. The experimental outcomes showcase the efficiency of the algorithm in brain tumor detection. CONCLUSION: The proposed work mainly contributes to brain tumor detection in the following aspects: a) use of a combination of transform methods for texture feature extraction from MRI scans; b) hybrid feature selection using firefly and glow-worm optimization algorithms; c) employment of an MKSVM algorithm and a stacking ensemble classifier for classification, together with fusion of the features from the different feature extraction methods.
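
A sketch of the stacking-ensemble classifier mentioned in METHODS, built from standard scikit-learn estimators; the base learners and meta-learner chosen here are assumptions (the paper pairs the ensemble with an MKSVM, which is not a stock scikit-learn model):

```python
# Stacking ensemble over pre-extracted texture features.
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

def build_stacking_classifier():
    base = [("svm", SVC(kernel="rbf", probability=True)),
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0))]
    return StackingClassifier(estimators=base,
                              final_estimator=LogisticRegression(max_iter=1000), cv=5)

# model = build_stacking_classifier().fit(X_train, y_train)  # X_train: fused texture features
```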

18.
J Alzheimers Dis ; 101(1): 75-89, 2024.
Article in English | MEDLINE | ID: mdl-39177597

ABSTRACT

Background: Alzheimer's disease (AD) is a progressive neurodegenerative disease that is not easily detected in the early stage. Handwriting and walking have been shown to be potential indicators of cognitive decline and are often affected by AD. Objective: This study proposes an assisted screening framework for AD based on multimodal analysis of handwriting and gait and explores whether combining multiple modalities can improve on the accuracy of single-modality classification. Methods: We recruited 90 participants (38 AD patients and 52 healthy controls). Handwriting data were collected during four handwriting tasks using dot-matrix digital pens, and gait data were collected using an electronic trail. The two kinds of features were fused as inputs for several machine learning models (Logistic Regression, SVM, XGBoost, AdaBoost, LightGBM), and model performance was compared. Results: The accuracy of each model ranged from 71.95% to 96.17%. Among them, the model built with LightGBM performed best, with an accuracy of 96.17%, sensitivity of 95.32%, specificity of 96.78%, PPV of 95.94%, NPV of 96.74%, and AUC of 0.991. However, the highest single-modality accuracy was 93.53%, achieved by XGBoost on gait features. Conclusions: The results show that combining handwriting and gait features achieves better classification than a single modality. In addition, the assisted screening model proposed in this study can effectively classify AD and has promising prospects for development and application.


Subject(s)
Alzheimer Disease , Gait Analysis , Handwriting , Machine Learning , Humans , Alzheimer Disease/diagnosis , Alzheimer Disease/physiopathology , Male , Female , Aged , Gait Analysis/methods , Aged, 80 and over , Sensitivity and Specificity
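
A bare-bones version of the early-fusion setup described above: concatenate handwriting and gait feature vectors per participant and train a LightGBM classifier; feature extraction and the train/test split are assumed to exist elsewhere:

```python
# Early fusion of handwriting and gait features with a LightGBM classifier.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.metrics import roc_auc_score

def train_multimodal(handwriting, gait, labels, train_idx, test_idx):
    X = np.hstack([handwriting, gait])          # early fusion of the two modalities
    clf = LGBMClassifier(n_estimators=300, learning_rate=0.05, random_state=0)
    clf.fit(X[train_idx], labels[train_idx])
    probs = clf.predict_proba(X[test_idx])[:, 1]
    return clf, roc_auc_score(labels[test_idx], probs)
```
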
19.
Sci Rep ; 14(1): 19976, 2024 08 28.
Article in English | MEDLINE | ID: mdl-39198553

ABSTRACT

The diagnosis of early prostate cancer depends on the accurate segmentation of prostate regions in magnetic resonance imaging (MRI). However, this segmentation task is challenging due to the particularities of prostate MR images themselves and the limitations of existing methods. To address these issues, we propose a U-shaped encoder-decoder network MM-UNet based on Mamba and CNN for prostate segmentation in MR images. Specifically, we first propose an adaptive feature fusion module based on channel attention guidance to achieve effective fusion between adjacent hierarchical features and suppress the interference of background noise. Secondly, we propose a global context-aware module based on Mamba, which has strong long-range modeling capabilities and linear complexity, to capture global context information in images. Finally, we propose a multi-scale anisotropic convolution module based on the principle of parallel multi-scale anisotropic convolution blocks and 3D convolution decomposition. Experimental results on two public prostate MR image segmentation datasets demonstrate that the proposed method outperforms competing models in terms of prostate segmentation performance and achieves state-of-the-art performance. In future work, we intend to enhance the model's robustness and extend its applicability to additional medical image segmentation tasks.


Subject(s)
Magnetic Resonance Imaging , Prostate , Prostatic Neoplasms , Humans , Male , Magnetic Resonance Imaging/methods , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Prostate/diagnostic imaging , Prostate/pathology , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Algorithms , Image Interpretation, Computer-Assisted/methods
20.
Comput Biol Med ; 180: 109012, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39153394

ABSTRACT

In drug discovery, precisely identifying drug-target interactions is crucial for finding new drugs and understanding drug mechanisms. Evolving heterogeneous drug/target data presents challenges in obtaining multimodal representations for drug-target interaction (DTI) prediction. To deal with this, we propose ERT-GFAN, a multimodal drug-target interaction prediction model inspired by molecular biology. Firstly, it integrates bio-inspired principles to obtain structural features of drugs and targets using Extended Connectivity Fingerprints (ECFP). Simultaneously, the knowledge graph embedding model RotatE is employed to discover the interaction features of drug-target pairs. Subsequently, a Transformer is utilized to refine the contextual neighborhood features from the obtained structural and interaction features, and multi-modal high-dimensional fusion features of the three-modal information are constructed. Finally, the DTI prediction results are output by integrating the multimodal fusion features into a graphical high-dimensional fusion feature attention network (GFAN) using our innovative multimodal high-dimensional fusion feature attention. This multimodal approach offers a comprehensive understanding of drug-target interactions, addressing challenges in complex knowledge graphs. By combining structural features, interaction features, and contextual neighborhood features, ERT-GFAN excels at predicting DTI. Empirical evaluations on three datasets demonstrate our method's superior performance, with AUC of 0.9739, 0.9862, and 0.9667, AUPR of 0.9598, 0.9789, and 0.9750, and Mean Reciprocal Rank (MRR) of 0.7386, 0.7035, and 0.7133. Ablation studies show over a 5% improvement in predictive performance compared to baseline unimodal and bimodal models. These results, along with detailed case studies, highlight the efficacy and robustness of our approach.


Subject(s)
Drug Discovery , Humans , Drug Discovery/methods , Computational Biology/methods
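
A sketch of the drug-structure branch only: 2048-bit ECFP (Morgan, radius 2) fingerprints computed from SMILES with RDKit; the target branch, RotatE embeddings, and the GFAN fusion are not shown:

```python
# ECFP / Morgan fingerprints as a drug-structure feature vector.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

def ecfp(smiles: str, radius: int = 2, n_bits: int = 2048) -> np.ndarray:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array(fp, dtype=np.uint8)

# drug_vec = ecfp("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, as a quick sanity check
```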