Results 1 - 20 of 2,631
1.
PeerJ Comput Sci ; 10: e2103, 2024.
Article in English | MEDLINE | ID: mdl-38983199

ABSTRACT

Images and videos containing fake faces are the most common type of digital manipulation. Such content can lead to negative consequences by spreading false information. The use of machine learning algorithms to produce fake face images has made it challenging to distinguish genuine content from fake content. Face manipulations are categorized into four basic groups: entire face synthesis, face identity manipulation (deepfake), facial attribute manipulation, and facial expression manipulation. This study used lightweight convolutional neural networks to detect fake face images generated through entire face synthesis with generative adversarial networks. The training dataset comprises 70,000 real images from the FFHQ dataset and 70,000 fake images produced with StyleGAN2 using the FFHQ dataset; 80% of the data was used for training and 20% for testing. Initially, the MobileNet, MobileNetV2, EfficientNetB0, and NASNetMobile convolutional neural networks were trained separately, each pre-trained on ImageNet and reused via transfer learning. In these initial trainings, EfficientNetB0 reached the highest accuracy, 93.64%. The EfficientNetB0 model was then revised by adding two dense layers (256 neurons each) with ReLU activation, two dropout layers, a flattening layer, a dense layer (128 neurons) with ReLU activation, and a two-node classification dense layer with softmax activation, which raised its accuracy to 95.48%. Finally, this revised model was combined with the MobileNet and MobileNetV2 models using the stacking ensemble learning method, yielding the highest accuracy of 96.44%.
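To make the architectural revision concrete, here is a minimal Keras sketch of a modified EfficientNetB0 classifier along the lines described above; the exact ordering of the added layers, the dropout rates, the input size, and the training settings are assumptions, not the authors' released configuration.

```python
# Hedged sketch of a revised EfficientNetB0 head for real-vs-fake face classification.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))

model = models.Sequential([
    base,
    layers.Flatten(),                       # flattening layer
    layers.Dense(256, activation="relu"),   # first 256-neuron dense layer
    layers.Dropout(0.3),                    # dropout rate is an assumption
    layers.Dense(256, activation="relu"),   # second 256-neuron dense layer
    layers.Dropout(0.3),
    layers.Dense(128, activation="relu"),   # 128-neuron dense layer
    layers.Dense(2, activation="softmax"),  # two-node classification layer
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```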

2.
PeerJ Comput Sci ; 10: e2107, 2024.
Article in English | MEDLINE | ID: mdl-38983235

ABSTRACT

Fine-tuning is an important transfer learning technique that has achieved significant success in tasks that lack training data. However, it is difficult to extract effective features with single-source-domain fine-tuning when the data distributions of the source and target domains differ greatly. To address this issue, we propose a multi-source-domain transfer learning framework called adaptive multi-source domain collaborative fine-tuning (AMCF). AMCF fine-tunes multiple source domain models collaboratively, thereby improving the feature extraction capability of the model on the target task. Specifically, AMCF employs an adaptive multi-source-domain layer selection strategy to customize an appropriate layer fine-tuning scheme for the target task across the source domain models, aiming to extract more effective features. Furthermore, a novel multi-source-domain collaborative loss function is designed to help each source domain model extract target data features precisely while minimizing the output differences among the source domain models, thereby enhancing their adaptability to the target data. To validate the effectiveness of AMCF, we apply it to seven public visual classification datasets commonly used in transfer learning and compare it with the most widely used single-source-domain fine-tuning methods. Experimental results demonstrate that, compared with existing fine-tuning methods, our method not only improves the accuracy of feature extraction but also provides precise layer fine-tuning schemes for the target task, thereby significantly improving fine-tuning performance.
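A minimal sketch of what a multi-source collaborative objective of this kind can look like: each source-domain model is trained on the target labels while an output-consistency term penalizes disagreement between the models. The exact form of the AMCF loss may differ; the weighting factor is an assumption.

```python
# Sketch of a collaborative fine-tuning loss over several source-domain models.
import torch
import torch.nn.functional as F

def collaborative_loss(logits_per_model, labels, consistency_weight=0.1):
    """logits_per_model: list of [batch, num_classes] tensors, one per source model."""
    # Target-task loss for every source-domain model.
    task_loss = sum(F.cross_entropy(logits, labels) for logits in logits_per_model)
    # Penalize divergence between the predictive distributions of the source models.
    probs = [F.softmax(logits, dim=1) for logits in logits_per_model]
    consistency = 0.0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            consistency = consistency + F.mse_loss(probs[i], probs[j])
    return task_loss + consistency_weight * consistency
```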

3.
Front Plant Sci ; 15: 1409194, 2024.
Article in English | MEDLINE | ID: mdl-38966142

ABSTRACT

Introduction: Cotton yield estimation is crucial in the agricultural process, and the accuracy of boll detection during the flocculation period significantly influences yield estimates in cotton fields. Unmanned aerial vehicles (UAVs) are frequently employed for plant detection and counting owing to their cost-effectiveness and adaptability. Methods: To address the challenges of small cotton bolls and the low resolution of UAV imagery, this paper introduces a transfer learning method based on the YOLOv8 framework, named YOLO small-scale pyramid depth-aware detection (SSPD). The method combines space-to-depth, non-strided convolution (SPD-Conv) with a small-target detection head and integrates a simple, parameter-free attention mechanism (SimAM), which significantly improves boll detection accuracy. Results: YOLO SSPD achieved a boll detection accuracy of 0.874 on UAV-scale imagery. It also recorded a coefficient of determination (R2) of 0.86, with a root mean square error (RMSE) of 12.38 and a relative root mean square error (RRMSE) of 11.19% for boll counts. Discussion: The findings indicate that YOLO SSPD can significantly improve the accuracy of cotton boll detection in UAV imagery, thereby supporting the cotton production process. The method offers a robust solution for high-precision cotton monitoring, enhancing the reliability of cotton yield estimates.
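SimAM is a published parameter-free attention module; the sketch below follows its standard formulation. Where the block is inserted in the YOLOv8 backbone, and the SPD-Conv details, are not reproduced here.

```python
# Parameter-free SimAM attention block (standard formulation).
import torch
import torch.nn as nn

class SimAM(nn.Module):
    def __init__(self, e_lambda=1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x):
        # x: [batch, channels, height, width]
        b, c, h, w = x.size()
        n = h * w - 1
        d = (x - x.mean(dim=[2, 3], keepdim=True)).pow(2)     # squared deviation per pixel
        v = d.sum(dim=[2, 3], keepdim=True) / n               # channel-wise variance
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5           # inverse energy per neuron
        return x * torch.sigmoid(e_inv)                       # reweight features, no new parameters
```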

4.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38990514

ABSTRACT

Protein-peptide interactions (PPepIs) are vital to understanding cellular functions and can facilitate the design of novel drugs. As an essential component in forming a PPepI, protein-peptide binding sites are the basis for understanding the mechanisms involved in PPepIs, so accurately identifying these binding sites is a critical task. Traditional experimental methods for studying these binding sites are labor-intensive and time-consuming, and several computational tools have been developed to supplement them. However, these tools are limited in generality or accuracy because they require ligand information, rely on complex feature construction, or model proteins at the level of amino acid residues. To address these drawbacks, we describe a geometric attention-based network for peptide binding site identification (GAPS). The proposed model uses geometric feature engineering to construct atom representations and incorporates multiple attention mechanisms to update relevant biological features. In addition, a transfer learning strategy leverages protein-protein binding site information to enhance protein-peptide binding site recognition, taking into account the shared structure and biological bias between proteins and peptides. Consequently, GAPS demonstrates state-of-the-art performance and excellent robustness in this task. Moreover, our model performs exceptionally well across several extended experiments, including prediction of apo protein-peptide, protein-cyclic peptide, and AlphaFold-predicted protein-peptide binding sites. These results confirm that GAPS is a powerful, versatile, and stable method suitable for diverse binding site predictions.


Subjects
Peptides, Binding Sites, Peptides/chemistry, Peptides/metabolism, Protein Binding, Computational Biology/methods, Algorithms, Proteins/chemistry, Proteins/metabolism, Machine Learning
5.
Water Res ; 261: 121933, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38972234

ABSTRACT

Data-driven metamodels reproduce the input-output mapping of physics-based models while significantly reducing simulation times. Such techniques are widely used in the design, control, and optimization of water distribution systems. Recent research highlights the potential of metamodels based on Graph Neural Networks (GNNs), which efficiently leverage the graph-structured characteristics of water distribution systems. Furthermore, these metamodels possess inductive biases that facilitate generalization to unseen topologies. Transferable metamodels are particularly advantageous for problems that require efficient evaluation of many alternative layouts or when training data are scarce. However, the transferability of GNN-based metamodels remains limited due to the lack of representation of physical processes that occur at the edge level, i.e., in pipes. To address this limitation, our work introduces Edge-Based Graph Neural Networks, which extend the set of inductive biases and represent link-level processes in more detail than traditional GNNs. Such an architecture is theoretically related to the mass-conservation constraints at the junctions. To verify our approach, we test the suitability of the edge-based network for estimating pipe flowrates and nodal pressures, emulating steady-state EPANET simulations. We first compare the effectiveness of the metamodels against node-based Graph Neural Networks on several benchmark water distribution systems. Then, we explore transferability by evaluating performance on unseen systems. For each configuration, we calculate model performance metrics, such as the coefficient of determination and the speed-up with respect to the original numerical model. Our results show that the proposed method captures pipe-level physical processes more accurately than node-based models. When tested on unseen water networks with a similar distribution of demands, our model retains good generalization performance, with a coefficient of determination of up to 0.98 for flowrates and up to 0.95 for predicted heads. Further developments could include the simultaneous derivation of pressures and flowrates.
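An illustrative edge-centric message-passing layer (not the authors' exact architecture): each pipe's state is updated from its own features and the features of the two junctions it connects, the kind of link-level representation that node-centric GNNs lack. Dimensions and the residual update are assumptions.

```python
# Sketch of an edge (pipe) update step for a water-distribution graph.
import torch
import torch.nn as nn

class EdgeUpdateLayer(nn.Module):
    def __init__(self, node_dim, edge_dim, hidden_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, edge_dim),
        )

    def forward(self, node_feats, edge_feats, edge_index):
        # edge_index: [2, num_edges] with source and target junction ids for each pipe
        src, dst = edge_index
        msg = torch.cat([node_feats[src], node_feats[dst], edge_feats], dim=1)
        return edge_feats + self.mlp(msg)   # residual update of pipe-level states
```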

6.
Sensors (Basel) ; 24(13)2024 Jun 22.
Article in English | MEDLINE | ID: mdl-39000849

ABSTRACT

In response to the issues of low recognition accuracy and weak generalization in mechanical equipment fault diagnosis with scarce data, this paper proposes an innovative solution: a cross-device secondary transfer learning method based on an efficient gated recurrent unit network (EGRUN). The method uses the continuous wavelet transform (CWT) to transform source domain data into images. The EGRUN model is first trained and its shallow-layer weights are frozen. Random overlapping sampling is then applied to the target domain data to augment the data, and secondary transfer learning is performed. The experimental results demonstrate that this method not only significantly improves the model's ability to learn fault features but also enhances its classification accuracy and generalization performance. Compared to current state-of-the-art algorithms, the proposed model shows faster convergence, higher diagnostic accuracy, and superior robustness and generalization, providing an effective approach to the challenges arising from scarce data and varying operating conditions in practical engineering scenarios.
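A small PyTorch sketch of the freeze-and-retrain step: shallow parameters of the source-trained network are frozen and only the remaining layers are updated on the augmented target-domain data. The parameter-name prefixes and optimizer settings are placeholders, since the EGRUN layer names are not given here.

```python
# Sketch of freezing shallow layers before secondary transfer learning.
import torch

def freeze_shallow_layers(model, frozen_prefixes=("conv1", "layer1")):
    """Freeze parameters whose names start with the given (hypothetical) prefixes."""
    for name, param in model.named_parameters():
        if name.startswith(frozen_prefixes):
            param.requires_grad = False
    # Optimize only the parameters that remain trainable on the target-domain data.
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.Adam(trainable, lr=1e-4)
```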

7.
Sensors (Basel) ; 24(13)2024 Jun 26.
Article in English | MEDLINE | ID: mdl-39000931

ABSTRACT

Internet of Things (IoT) applications and resources are highly vulnerable to flood attacks, including Distributed Denial of Service (DDoS) attacks. These attacks overwhelm the targeted device with numerous network packets, making its resources inaccessible to authorized users. Such attacks may be characterized by attack references, attack types, sub-categories, host information, malicious scripts, etc. These details assist security professionals in identifying weaknesses, tailoring defense measures, and responding rapidly to possible threats, thereby improving the overall security posture of IoT devices. Developing an intelligent Intrusion Detection System (IDS) is highly complex because of the large number of network features. This study presents an improved IDS for IoT security that employs multimodal big data representation and transfer learning. First, Packet Capture (PCAP) files are crawled to retrieve the necessary attacks and bytes. Second, Spark-based big data optimization algorithms handle the huge volumes of data. Third, a transfer learning approach such as word2vec retrieves semantically meaningful observed features. Fourth, an algorithm converts network bytes into images, and texture features are extracted by configuring an attention-based residual network (ResNet). Finally, the trained text and texture features are combined and used as multimodal features to classify the various attacks. The proposed method is thoroughly evaluated on three widely used IoT-based datasets: CIC-IoT 2022, CIC-IoT 2023, and Edge-IIoT. It achieves excellent classification performance, with an accuracy of 98.2%. In addition, we present a game theory-based process to formally validate the proposed approach.
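A rough sketch of the multimodal fusion idea: word2vec-style semantic features of the observed network fields are concatenated with texture features extracted by a ResNet from byte-derived images, then classified. The backbone, feature dimensions, and classifier head are assumptions; the attention-based ResNet configuration of the paper is not reproduced.

```python
# Sketch of fusing text (word2vec) features with image (ResNet) texture features.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultimodalAttackClassifier(nn.Module):
    def __init__(self, text_dim=100, num_classes=10):
        super().__init__()
        self.cnn = resnet18(weights=None)
        self.cnn.fc = nn.Identity()                  # expose 512-d texture features
        self.head = nn.Sequential(
            nn.Linear(512 + text_dim, 256), nn.ReLU(), nn.Linear(256, num_classes))

    def forward(self, byte_images, text_embeddings):
        texture = self.cnn(byte_images)              # [batch, 512] from byte-derived images
        fused = torch.cat([texture, text_embeddings], dim=1)
        return self.head(fused)                      # multimodal attack classification
```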

8.
Sensors (Basel) ; 24(13)2024 Jul 08.
Article in English | MEDLINE | ID: mdl-39001200

ABSTRACT

Acute lymphoblastic leukemia, commonly referred to as ALL, is a type of cancer that can affect both the blood and the bone marrow. Diagnosis is difficult because it often requires specialist testing, such as blood tests, bone marrow aspiration, and biopsy, all of which are highly time-consuming and expensive. Early diagnosis of ALL is essential for starting therapy in a timely and suitable manner. In recent medical diagnostics, substantial progress has been achieved through the integration of artificial intelligence (AI) and Internet of Things (IoT) devices. Our proposal introduces a new AI-based Internet of Medical Things (IoMT) framework designed to automatically identify leukemia from peripheral blood smear (PBS) images. In this study, we present a novel deep learning-based fusion model to detect ALL types of leukemia. The system seamlessly delivers the diagnostic reports to a centralized database, inclusive of patient-specific devices. After blood samples are collected at the hospital, the PBS images are transmitted to the cloud server through a WiFi-enabled microscopic device. In the cloud server, a fusion model capable of classifying ALL from PBS images is configured. The fusion model is trained on a dataset of 6512 original and segmented images from 89 individuals. Two input channels are used for feature extraction in the fusion model: the original images and the segmented images. VGG16 extracts features from the original images, whereas DenseNet-121 extracts features from the segmented images. The two sets of output features are merged, and dense layers are used for leukemia classification. The proposed fusion model achieves an accuracy of 99.89%, a precision of 99.80%, and a recall of 99.72%, making it well suited for leukemia classification, and it outperformed several state-of-the-art Convolutional Neural Network (CNN) models. Consequently, the proposed model has the potential to save lives and effort. For a more comprehensive simulation of the entire methodology, a web application (beta version) was developed in this study to determine the presence or absence of leukemia in individuals. The findings of this study hold significant potential for application in biomedical research, particularly in enhancing the accuracy of computer-aided leukemia detection.


Subjects
Deep Learning, Internet of Things, Humans, Precursor Cell Lymphoblastic Leukemia-Lymphoma/diagnosis, Artificial Intelligence, Leukemia/diagnosis, Leukemia/classification, Leukemia/pathology, Algorithms, Image Processing, Computer-Assisted/methods, Neural Networks, Computer
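A hedged Keras sketch of the two-channel fusion idea: VGG16 encodes the original PBS image, DenseNet-121 encodes the segmented image, and the pooled features are concatenated and classified with dense layers. The input size, head width, and two-class output are assumptions.

```python
# Sketch of a two-branch feature-fusion classifier for PBS images.
import tensorflow as tf
from tensorflow.keras import layers, Model

orig_in = layers.Input(shape=(224, 224, 3), name="original_image")
seg_in = layers.Input(shape=(224, 224, 3), name="segmented_image")

vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet", pooling="avg")
dense121 = tf.keras.applications.DenseNet121(include_top=False, weights="imagenet", pooling="avg")

feat_orig = vgg(orig_in)       # features from the original image
feat_seg = dense121(seg_in)    # features from the segmented image

x = layers.Concatenate()([feat_orig, feat_seg])
x = layers.Dense(256, activation="relu")(x)
out = layers.Dense(2, activation="softmax", name="all_vs_normal")(x)

model = Model(inputs=[orig_in, seg_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```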
9.
Diagnostics (Basel) ; 14(13)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39001292

ABSTRACT

Breast cancer diagnosis from histopathology images is often time consuming and prone to human error, impacting treatment and prognosis. Deep learning diagnostic methods offer the potential for improved accuracy and efficiency in breast cancer detection and classification. However, they struggle with limited data and subtle variations within and between cancer types. Attention mechanisms provide feature refinement capabilities that have shown promise in overcoming such challenges. To this end, this paper proposes the Efficient Channel Spatial Attention Network (ECSAnet), an architecture built on EfficientNetV2 and augmented with a convolutional block attention module (CBAM) and additional fully connected layers. ECSAnet was fine-tuned using the BreakHis dataset, employing Reinhard stain normalization and image augmentation techniques to minimize overfitting and enhance generalizability. In testing, ECSAnet outperformed AlexNet, DenseNet121, EfficientNetV2-S, InceptionNetV3, ResNet50, and VGG16 in most settings, achieving accuracies of 94.2% at 40×, 92.96% at 100×, 88.41% at 200×, and 89.42% at 400× magnifications. The results highlight the effectiveness of CBAM in improving classification accuracy and the importance of stain normalization for generalizability.
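For reference, a minimal CBAM block (channel attention followed by spatial attention) in its commonly used form; where it is inserted into EfficientNetV2, the reduction ratio, and the additional fully connected layers are not specified here and remain assumptions.

```python
# Minimal CBAM attention block: channel attention, then spatial attention.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.amax(dim=1, keepdim=True)
        att = torch.sigmoid(self.spatial_conv(torch.cat([avg_map, max_map], dim=1)))
        return x * att
```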

10.
Diagnostics (Basel) ; 14(13)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39001328

ABSTRACT

Identifying patients with left ventricular ejection fraction (EF) that is reduced [EF < 40% (rEF)], mid-range [EF 40-50% (mEF)], or preserved [EF > 50% (pEF)] is considered of primary clinical importance. End-to-end video classification using AutoML in Google Vertex AI was applied to echocardiographic recordings. Datasets balanced by majority undersampling, each corresponding to one of the three possible classifications, were obtained from the Stanford EchoNet-Dynamic repository. A train-test split of 75/25 was applied. A binary video classification of rEF vs. not rEF demonstrated good performance (test dataset: ROC AUC score 0.939, accuracy 0.863, sensitivity 0.894, specificity 0.831, positive predictive value 0.842). A second binary classification of not pEF vs. pEF performed slightly worse (test dataset: ROC AUC score 0.917, accuracy 0.829, sensitivity 0.761, specificity 0.891, positive predictive value 0.888). A ternary classification was also explored, with lower performance observed, mainly for the mEF class. A non-AutoML PyTorch implementation in open access confirmed the feasibility of our approach. With this proof of concept, end-to-end video classification based on transfer learning to categorize EF merits consideration for further evaluation in prospective clinical studies.

11.
J Neural Eng ; 21(4)2024 Jul 16.
Article in English | MEDLINE | ID: mdl-38968936

ABSTRACT

Objective. Domain adaptation has been recognized as a potent solution to the challenge of limited training data for electroencephalography (EEG) classification tasks. Existing studies primarily focus on homogeneous environments; however, the heterogeneous properties of EEG data arising from device diversity cannot be overlooked. This motivates the development of heterogeneous domain adaptation methods that can fully exploit the knowledge from an auxiliary heterogeneous domain for EEG classification. Approach. In this article, we propose a novel model named informative representation fusion (IRF) to tackle the problem of unsupervised heterogeneous domain adaptation in the context of EEG data. In IRF, we consider different perspectives of the data, i.e., independent and identically distributed (iid) and non-iid, to learn different representations. Specifically, from the non-iid perspective, IRF models high-order correlations among data with hypergraphs and develops hypergraph encoders to obtain data representations for each domain. From the iid perspective, multi-layer perceptron networks applied to the source and target domain data yield another type of representation for both domains. Subsequently, an attention mechanism fuses these two types of representations to yield informative features. To learn transferable representations, the maximum mean discrepancy is used to align the distributions of the source and target domains based on the fused features. Main results. Experimental results on several real-world datasets demonstrate the effectiveness of the proposed model. Significance. This article handles an EEG classification situation where the source and target EEG data lie in different spaces and, moreover, under an unsupervised learning setting. This situation is practical in the real world but barely studied in the literature. The proposed model achieves high classification accuracy, and this study is important for the commercial applications of EEG-based BCIs.


Subjects
Electroencephalography, Electroencephalography/methods, Electroencephalography/classification, Humans, Unsupervised Machine Learning, Algorithms, Neural Networks, Computer
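A simple Gaussian-kernel maximum mean discrepancy (MMD) between fused source and target features, the kind of alignment term described above; the kernel bandwidth is an assumption.

```python
# Sketch of an MMD alignment term between source and target feature batches.
import torch

def gaussian_mmd(source, target, bandwidth=1.0):
    """source: [n, d] fused source features; target: [m, d] fused target features."""
    def kernel(a, b):
        dist = torch.cdist(a, b).pow(2)                 # pairwise squared distances
        return torch.exp(-dist / (2 * bandwidth ** 2))  # Gaussian kernel values
    return kernel(source, source).mean() + kernel(target, target).mean() \
        - 2 * kernel(source, target).mean()
```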
12.
Diagn Cytopathol ; 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39007486

ABSTRACT

INTRODUCTION: Cytological analysis of effusion specimens provides critical information regarding the diagnosis and staging of malignancies, guiding their treatment and subsequent monitoring. Given the challenges encountered in morphological interpretation, we explored convolutional neural networks (CNNs) as a tool for the cytological diagnosis of malignant effusions. MATERIALS AND METHODS: A retrospective review of patients at our institute over 3.5 years yielded a dataset of 342 effusion samples and 518 images with known diagnoses. Cytological examination and cell block preparation were performed to establish correlation with the gold standard, histopathology. We developed a deep learning model using PyTorch, fine-tuned it on a labelled dataset, and evaluated its diagnostic performance on test samples. RESULTS: The model showed encouraging results in distinguishing benign from malignant effusions, with an area under the curve (AUC) of 0.8674 and an F1-score (the harmonic mean of precision and recall) of 0.8678, demonstrating the good accuracy of our CNN model. CONCLUSION: The study highlights the promising potential of transfer learning for enhancing clinical pathology laboratory efficiency when dealing with malignant effusions.

13.
Plant Methods ; 20(1): 104, 2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39004764

ABSTRACT

BACKGROUND: Agriculture is one of the most crucial assets of any country, as it brings prosperity by alleviating poverty, food shortages, unemployment, and economic instability. The process of agriculture comprises many sectors, such as crop cultivation, water irrigation, and the supply chain. During cultivation, plants are exposed to many challenges, among which pesticide attacks and plant diseases are the main threats. Diseases reduce yield, which in turn affects the country's economy. Over the past decade, there have been significant advancements in agriculture; nevertheless, a substantial portion of crop yields continues to be compromised by diseases and pests. Early detection and prevention are crucial for successful crop management. METHODS: To address this, we propose a framework that utilizes state-of-the-art computer vision (CV) and artificial intelligence (AI) techniques, specifically deep learning (DL), for detecting healthy and unhealthy cotton plants. Our approach combines DL with feature extraction methods such as the continuous wavelet transform (CWT) and the fast Fourier transform (FFT). Detection employed the pre-trained models AlexNet, GoogLeNet, InceptionV3, and VGG-19. The performance of the implemented models was analysed using metrics such as accuracy, precision, recall, F1-score, and confusion matrices. Moreover, the proposed framework employed an ensemble learning scheme that averages the classification scores of the individual DL models, thereby improving the overall classification accuracy. RESULTS: During training, the framework performed better when features extracted by CWT, rather than FFT, were used as inputs to the DL models. Among the learning models, GoogLeNet obtained a remarkable accuracy of 93.4% and a notable F1-score of 0.953 when trained on CWT-extracted features, closely followed by AlexNet and InceptionV3 with accuracies of 93.4% and 91.8%, respectively. The ensemble learning framework further improved classification accuracy to 98.4% on CWT-extracted features, compared with FFT-extracted features. CONCLUSION: The results show that features extracted as scalograms allow the DL models to detect each plant condition more accurately, facilitating the early detection of diseases in cotton plants. This early detection leads to better yield and profit, which positively affects the economy.
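A short sketch of the score-averaging ensemble used to fuse the class probabilities of the individual DL models; uniform weights are an assumption.

```python
# Sketch of averaging per-model class probabilities into a fused prediction.
import numpy as np

def average_ensemble(prob_list):
    """prob_list: list of [n_samples, n_classes] probability arrays, one per model."""
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)  # element-wise mean of scores
    return np.argmax(avg, axis=1)                       # fused class predictions
```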

14.
Plant Methods ; 20(1): 101, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38970029

ABSTRACT

BACKGROUND: The occurrence, development, and outbreak of tea diseases and pests pose a significant challenge to the quality and yield of tea, necessitating prompt identification and control measures. Given the vast array of tea diseases and pests, coupled with the intricacies of the tea planting environment, accurate and rapid diagnosis remains elusive. To address this issue, the present study investigates the use of transfer learning convolutional neural networks for the identification of tea diseases and pests. Our objective is to enable the accurate and expeditious detection of diseases and pests affecting the Yunnan big-leaf tea variety within its complex ecological niche. RESULTS: We first gathered 1878 images covering 10 prevalent types of tea diseases and pests from complex environments within tea plantations, compiling a comprehensive dataset, and employed data augmentation techniques to enrich sample diversity. Leveraging ImageNet pre-trained models, we conducted a comprehensive evaluation and identified the Xception architecture as the most effective model. Notably, integrating an attention mechanism into the Xception model did not improve recognition performance. Subsequently, through transfer learning and a core-freezing strategy, we achieved a test accuracy of 98.58% and a validation accuracy of 98.2310%. CONCLUSIONS: These outcomes signify a significant stride towards accurate and timely detection, holding promise for enhancing the sustainability and productivity of Yunnan tea. Our findings provide a theoretical foundation and technical guidance for the development of online detection technologies for tea diseases and pests in Yunnan.

15.
Comput Biol Med ; 179: 108874, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39013343

ABSTRACT

Smart healthcare has advanced the medical industry through the integration of data-driven approaches. Artificial intelligence and machine learning have provided remarkable progress, but such applications lack transparency and interpretability. To overcome these limitations, explainable AI (EXAI) has shown promising results. This paper applies EXAI to disease diagnosis in the context of smart healthcare. It combines transfer learning, a vision transformer, and explainable AI, and designs an ensemble approach for predicting disease and its severity. The approach is evaluated on an Alzheimer's disease dataset, comparing the performance of transfer learning models with an ensemble of transfer learning models and a vision transformer. The InceptionV3, VGG19, ResNet50, and DenseNet121 transfer learning models were selected for ensembling with the vision transformer. The results compare two models on the ADNI dataset: a transfer learning (TL) model and an ensemble transfer learning (Ensemble TL) model combined with a vision transformer (ViT). The TL model achieved 58% accuracy, 52% precision, 42% recall, and a 44% F1-score, whereas the Ensemble TL model with ViT showed significantly improved performance: 96% accuracy, 94% precision, 90% recall, and a 92% F1-score. This demonstrates the efficacy of the ensemble model over transfer learning models alone.

16.
Comput Biol Med ; 179: 108734, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38964243

ABSTRACT

Artificial intelligence (AI) has played a vital role in computer-aided drug design (CADD). This development has been further accelerated by the increasing use of machine learning (ML), mainly deep learning (DL), and by advancements in computing hardware and software. As a result, initial doubts about the application of AI in drug discovery have been dispelled, leading to significant benefits in medicinal chemistry. At the same time, it is crucial to recognize that AI is still in its infancy and faces limitations that need to be addressed to harness its full potential in drug discovery. Notable limitations include insufficient, unlabeled, and non-uniform data; the resemblance of some AI-generated molecules to existing molecules; the lack of adequate benchmarks; hurdles to data sharing related to intellectual property rights (IPRs); poor understanding of biology; a focus on proxy data and ligands; and a lack of holistic methods for representing input molecular structures that would avoid pre-processing of input molecules (feature engineering). The major component of AI infrastructure is input data, as most of the success of AI-driven efforts to improve drug discovery depends on the quality and quantity of the data used to train and test AI algorithms, among other factors. Additionally, data-hungry DL approaches may, without sufficient data, fail to live up to their promise. The current literature suggests several methods that, to a certain extent, effectively handle low-data settings and improve the output of AI models in drug discovery: transfer learning (TL), active learning (AL), single- or one-shot learning (OSL), multi-task learning (MTL), data augmentation (DA), and data synthesis (DS). A further method, which enables sharing of proprietary data on a common platform (without compromising data privacy) to train ML models, is federated learning (FL). In this review, we compare and discuss these methods, their recent applications, and their limitations for modeling small-molecule data to improve the output of AI methods in drug discovery. The article also summarizes some other novel methods for handling inadequate data.

17.
Article in English | MEDLINE | ID: mdl-38965165

ABSTRACT

PURPOSE: Cardiac perfusion MRI is vital for disease diagnosis, treatment planning, and risk stratification, with anomalies serving as markers of underlying ischemic pathologies. AI-assisted methods and tools enable accurate and efficient left ventricular (LV) myocardium segmentation on all DCE-MRI timeframes, offering a solution to the challenges posed by the multidimensional nature of the data. This study aims to develop and assess an automated method for LV myocardial segmentation on DCE-MRI data from a local hospital. METHODS: The study uses retrospective DCE-MRI data from 55 subjects acquired at the local hospital with a 1.5 T MRI scanner. The dataset included subjects with and without cardiac abnormalities. The timepoint of the reference frame (post-contrast LV myocardium) was identified using the standard deviation across the temporal sequences. Iterative image registration of the other temporal images to this reference image was performed using Maxwell's demons algorithm. The registered stack was fed to a model built on the U-Net framework to predict the LV myocardium at all timeframes of the DCE-MRI. RESULTS: The mean and standard deviation of the Dice similarity coefficient (DSC) for myocardial segmentation using the pre-trained network Net_cine is 0.78 ± 0.04, and for the fine-tuned network Net_dyn, which predicts the mask on each timeframe individually, it is 0.78 ± 0.03. The DSC for Net_dyn ranged from 0.71 to 0.93. The average DSC achieved for the reference frame is 0.82 ± 0.06. CONCLUSION: The study proposed a fast and fully automated AI-assisted method to segment the LV myocardium on all timeframes of DCE-MRI data. The method is robust, its performance is independent of the intra-temporal sequence registration, and it can easily accommodate timeframes with potential registration errors.
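A small sketch of one way to pick the reference timeframe from the temporal standard deviation mentioned in the methods; treating the frame with maximal overall intensity variation as the reference is an assumption about how that criterion is operationalized.

```python
# Sketch of reference-frame selection for a DCE-MRI temporal stack.
import numpy as np

def select_reference_frame(dce_stack):
    """dce_stack: [n_timeframes, height, width] DCE-MRI series for one slice."""
    frame_std = dce_stack.reshape(dce_stack.shape[0], -1).std(axis=1)  # per-frame variation
    return int(np.argmax(frame_std))  # index of the assumed post-contrast reference frame
```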

18.
J Neural Eng ; 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38986468

ABSTRACT

OBJECTIVE: Electroencephalography (EEG) is widely recognized as an effective method for detecting fatigue. However, practical applications of EEG for fatigue detection in real-world scenarios are often challenging, particularly for subjects not included in the training datasets, owing to bio-individual differences and noisy labels. This study aims to develop an effective framework for cross-subject fatigue detection by addressing these challenges. APPROACH: We propose a novel framework, termed DP-MP, for cross-subject fatigue detection, which utilizes a Domain-Adversarial Neural Network (DANN)-based prototypical representation in conjunction with Mix-up pairwise learning. The proposed DP-MP framework mitigates the impact of bio-individual differences by encoding fatigue-related semantic structures within EEG signals and exploring shared fatigue prototype features across individuals. Notably, to the best of our knowledge, this work is the first to conceptualize fatigue detection as a pairwise learning task, thereby effectively reducing the interference from noisy labels. Furthermore, we propose the Mix-up pairwise learning (MixPa) approach in the field of fatigue detection, which broadens the advantages of pairwise learning by introducing more diverse and informative relationships among samples. RESULTS: Cross-subject experiments were conducted on two benchmark databases, SEED-VIG and FTEF, achieving state-of-the-art performance with average accuracies of 88.14% and 97.41%, respectively. These promising results demonstrate the model's effectiveness and excellent generalization capability. SIGNIFICANCE: This is the first time EEG-based fatigue detection has been conceptualized as a pairwise learning task, offering a novel perspective on this field. Moreover, our proposed DP-MP framework effectively tackles the challenges of bio-individual differences and noisy labels in fatigue detection and demonstrates superior performance. Our work provides valuable insights for future research, promoting the application of brain-computer interfaces for fatigue detection in real-world scenarios.
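A minimal mixup sketch illustrating the sample-mixing ingredient behind MixPa; how mixed samples are paired and labelled for the pairwise task follows the paper and is not reproduced here.

```python
# Sketch of mixup-style interpolation between two batches of EEG features.
import numpy as np
import torch

def mixup(x1, x2, alpha=0.2):
    """x1, x2: two batches of EEG features with identical shapes."""
    lam = np.random.beta(alpha, alpha)         # mixing coefficient from a Beta prior
    mixed = lam * x1 + (1.0 - lam) * x2        # convex combination of the two samples
    return mixed, lam
```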

19.
J Med Imaging (Bellingham) ; 11(4): 044502, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38988991

ABSTRACT

Purpose: Lung cancer is the second most common cancer and the leading cause of cancer death globally. Low-dose computed tomography (LDCT) is the recommended imaging screening tool for the early detection of lung cancer. A fully automated computer-aided detection method for LDCT would greatly improve the existing clinical workflow. Most existing methods for lung nodule detection are designed for high-dose CTs (HDCTs) and cannot be applied directly to LDCTs because of domain shifts and the inferior quality of LDCT images. In this work, we describe a semi-automated transfer learning-based approach for the early detection of lung nodules using LDCTs. Approach: We developed an algorithm based on the object detection model you only look once (YOLO) to detect lung nodules. The YOLO model was first trained on HDCTs, and the pre-trained weights were used as the initial weights when retraining the model on LDCTs in a medical-to-medical transfer learning approach. The dataset for this study came from a screening trial and consisted of LDCTs acquired from 50 biopsy-confirmed lung cancer patients over 3 consecutive years (T1, T2, and T3); HDCTs from about 60 lung cancer patients were obtained from a public dataset. The developed model was evaluated on a hold-out test set comprising 15 patient cases (93 slices with cancerous nodules) using precision, specificity, recall, and F1-score. The evaluation metrics were reported patient-wise on a per-year basis and averaged over the 3 years. For comparative analysis, the proposed detection model was also trained using pre-trained weights from the COCO dataset as the initial weights. A paired t-test and a chi-squared test with an alpha value of 0.05 were used for statistical significance testing. Results: The results compare the proposed model developed with HDCT pre-trained weights against the model with COCO pre-trained weights. The former versus the latter obtained a precision of 0.982 versus 0.93 in detecting cancerous nodules, a specificity of 0.923 versus 0.849 in identifying slices with no cancerous nodules, a recall of 0.87 versus 0.886, and an F1-score of 0.924 versus 0.903. As the nodules progressed, the former approach achieved a precision of 1, a specificity of 0.92, and a sensitivity of 0.930. The statistical analysis in the comparative study yielded a p-value of 0.0054 for precision and a p-value of 0.00034 for specificity. Conclusions: In this study, a semi-automated method was developed to detect lung nodules in LDCTs by using HDCT pre-trained weights as the initial weights and retraining the model, and the results were compared with those obtained using COCO pre-trained weights. The proposed method may identify early lung nodules during the screening program, reduce overdiagnosis and follow-ups due to misdiagnosis in LDCTs, allow treatment to start in the affected patients, and lower the mortality rate.
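A hedged sketch of a two-stage medical-to-medical transfer learning workflow using the ultralytics YOLO package (the study does not name this library or a specific YOLO version); the dataset files, model size, and hyperparameters are placeholders.

```python
# Sketch: pre-train a YOLO detector on HDCT slices, then retrain on LDCT slices.
from ultralytics import YOLO

# Stage 1: train on high-dose CT slices annotated with nodules (placeholder data file).
model = YOLO("yolov8n.pt")
model.train(data="hdct_nodules.yaml", epochs=100, imgsz=640)

# Stage 2: reuse the HDCT-trained weights as initialization and retrain on LDCT slices.
model = YOLO("runs/detect/train/weights/best.pt")   # default ultralytics save path
model.train(data="ldct_nodules.yaml", epochs=100, imgsz=640)
```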

20.
J Cheminform ; 16(1): 79, 2024 Jul 07.
Article in English | MEDLINE | ID: mdl-38972994

ABSTRACT

BACKGROUND: Previous deep learning methods for predicting protein binding pockets mainly employed 3D convolution, yet an abundance of convolution operations may lead the model to prioritize local information excessively and overlook global information. Moreover, it is essential to account for the influence of diverse protein folding structural classes, because proteins in different structural classes exhibit different biological functions, whereas those within the same structural class share similar functional attributes. RESULTS: We propose LVPocket, a novel method that synergistically captures both local and global information of protein structure through the integration of Transformer encoders, which helps the model achieve better performance in binding pocket prediction. We then tailored prediction models for data from four distinct structural classes of proteins using transfer learning; the four fine-tuned models were trained from the baseline LVPocket model, which was trained on the sc-PDB dataset. LVPocket exhibits superior performance on three independent datasets compared with current state-of-the-art methods, and the fine-tuned models outperform the baseline model. SCIENTIFIC CONTRIBUTION: We present a novel model structure for predicting protein binding pockets that offers a solution to the problem of relying on extensive convolutional computation while neglecting global information about protein structures. Furthermore, we address the impact of different protein folding structures on binding pocket prediction through the application of transfer learning.
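An illustrative pairing of 3D convolution with a Transformer encoder so that global context complements local features, the design idea motivating LVPocket; the dimensions, depth, and tokenization here are assumptions, not the published architecture.

```python
# Sketch of combining local 3D convolution with a global Transformer encoder.
import torch
import torch.nn as nn

class ConvTransformerBlock(nn.Module):
    def __init__(self, channels=64, nhead=4, num_layers=2):
        super().__init__()
        self.conv = nn.Conv3d(channels, channels, kernel_size=3, padding=1)   # local features
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

    def forward(self, x):
        # x: [batch, channels, D, H, W] voxelized protein features
        local = self.conv(x)
        b, c, d, h, w = local.shape
        tokens = local.flatten(2).transpose(1, 2)        # [batch, D*H*W, channels]
        global_ctx = self.encoder(tokens)                # global self-attention over voxels
        return global_ctx.transpose(1, 2).view(b, c, d, h, w)
```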
