1.
Comput Biol Med ; 179: 108734, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38964243

ABSTRACT

Artificial intelligence (AI) has played a vital role in computer-aided drug design (CADD). This development has been further accelerated by the increasing use of machine learning (ML), mainly deep learning (DL), and by advances in computing hardware and software. As a result, initial doubts about the application of AI in drug discovery have been dispelled, leading to significant benefits in medicinal chemistry. At the same time, it is crucial to recognize that AI is still in its infancy and faces limitations that must be addressed to harness its full potential in drug discovery. Notable limitations include insufficient, unlabeled, and non-uniform data; the resemblance of some AI-generated molecules to existing molecules; the lack of adequate benchmarks; hurdles in data sharing related to intellectual property rights (IPRs); poor understanding of the underlying biology; a focus on proxy data and ligands; and the lack of holistic methods for representing input molecular structures that would remove the need to pre-process input molecules (feature engineering). The major component of AI infrastructure is input data: most of the success of AI-driven efforts to improve drug discovery depends on the quality and quantity of the data used to train and test AI algorithms, among other factors. Moreover, without sufficient data, data-hungry DL approaches may fail to live up to their promise. The current literature suggests several methods that, to a certain extent, effectively handle low-data settings to obtain better output from AI models in drug discovery: transfer learning (TL), active learning (AL), single- or one-shot learning (OSL), multi-task learning (MTL), data augmentation (DA), data synthesis (DS), etc. A different method, which enables sharing of proprietary data on a common platform (without compromising data privacy) to train ML models, is federated learning (FL). In this review, we compare and discuss these methods, their recent applications, and their limitations for modeling small-molecule data to improve the output of AI methods in drug discovery. The article also sums up some other novel methods for handling inadequate data.
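To make the transfer-learning option concrete, here is a minimal sketch of the freeze-and-fine-tune pattern the review refers to; the encoder architecture, the fingerprint dimensionality, and the weight file name are hypothetical placeholders, not details from any cited work.

```python
import torch
import torch.nn as nn

# Hypothetical pre-trained molecular encoder; in practice its weights would
# come from a large source dataset, e.g. encoder.load_state_dict(torch.load("pretrained_encoder.pt"))
encoder = nn.Sequential(nn.Linear(2048, 512), nn.ReLU(), nn.Linear(512, 128))
for p in encoder.parameters():
    p.requires_grad = False  # freeze source-domain knowledge

head = nn.Linear(128, 1)  # small task head trained on the scarce target data
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(32, 2048)  # dummy batch of molecular fingerprints
y = torch.randn(32, 1)     # dummy scarce target-task labels
optimizer.zero_grad()
loss = loss_fn(head(encoder(x)), y)
loss.backward()
optimizer.step()
```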

2.
Article in English | MEDLINE | ID: mdl-38965165

ABSTRACT

PURPOSE: Cardiac perfusion MRI is vital for disease diagnosis, treatment planning, and risk stratification, with anomalies serving as markers of underlying ischemic pathologies. AI-assisted methods and tools enable accurate and efficient left ventricular (LV) myocardium segmentation on all DCE-MRI timeframes, offering a solution to the challenges posed by the multidimensional nature of the data. This study aims to develop and assess an automated method for LV myocardial segmentation on DCE-MRI data from a local hospital. METHODS: The study uses retrospective DCE-MRI data from 55 subjects acquired at the local hospital using a 1.5 T MRI scanner. The dataset included subjects with and without cardiac abnormalities. The timepoint of the reference frame (post-contrast LV myocardium) was identified using the standard deviation across the temporal sequences. Iterative image registration of the other temporal images with respect to this reference image was performed using Maxwell's demons algorithm. The registered stack was fed to a model built on the U-Net framework to predict the LV myocardium at all timeframes of the DCE-MRI. RESULTS: The mean and standard deviation of the Dice similarity coefficient (DSC) for myocardial segmentation is 0.78 ± 0.04 for the pre-trained network Net_cine and 0.78 ± 0.03 for the fine-tuned network Net_dyn, which predicts the mask on each timeframe individually. The DSC for Net_dyn ranged from 0.71 to 0.93, and the average DSC for the reference frame was 0.82 ± 0.06. CONCLUSION: The study proposed a fast and fully automated AI-assisted method to segment the LV myocardium on all timeframes of DCE-MRI data. The method is robust: its performance is independent of intra-temporal sequence registration, and it can easily accommodate timeframes with potential registration errors.
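A minimal sketch of the registration step described above, using SimpleITK's demons filter; the reference-frame rule (frame with the highest spatial intensity standard deviation) and all parameter values are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
import SimpleITK as sitk

def register_stack(frames):
    """Register a temporal DCE-MRI stack (list of 2D float arrays) to a reference frame."""
    # Assumed rule: pick the frame with the highest spatial intensity std as the
    # post-contrast reference (the paper's exact criterion may differ).
    ref_idx = int(np.argmax([f.std() for f in frames]))
    fixed = sitk.Cast(sitk.GetImageFromArray(frames[ref_idx]), sitk.sitkFloat32)

    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(50)
    demons.SetStandardDeviations(1.0)  # smoothing of the displacement field

    registered = []
    for f in frames:
        moving = sitk.Cast(sitk.GetImageFromArray(f), sitk.sitkFloat32)
        displacement = demons.Execute(fixed, moving)
        transform = sitk.DisplacementFieldTransform(displacement)
        warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
        registered.append(sitk.GetArrayFromImage(warped))
    return ref_idx, np.stack(registered)
```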

3.
Water Res ; 261: 121933, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38972234

ABSTRACT

Data-driven metamodels reproduce the input-output mapping of physics-based models while significantly reducing simulation times. Such techniques are widely used in the design, control, and optimization of water distribution systems. Recent research highlights the potential of metamodels based on Graph Neural Networks (GNNs), as they efficiently leverage the graph-structured characteristics of water distribution systems. Furthermore, these metamodels possess inductive biases that facilitate generalization to unseen topologies. Transferable metamodels are particularly advantageous for problems that require an efficient evaluation of many alternative layouts or when training data is scarce. However, the transferability of GNN-based metamodels remains limited due to the lack of representation of physical processes that occur at the edge level, i.e., in pipes. To address this limitation, our work introduces Edge-Based Graph Neural Networks, which extend the set of inductive biases and represent link-level processes in more detail than traditional GNNs. This architecture is theoretically related to the constraint of mass conservation at the junctions. To verify our approach, we test the suitability of the edge-based network for estimating pipe flowrates and nodal pressures, emulating steady-state EPANET simulations. We first compare the effectiveness of the metamodels against Graph Neural Networks on several benchmark water distribution systems. Then, we explore transferability by evaluating performance on unseen systems. For each configuration, we calculate model performance metrics, such as the coefficient of determination and the speed-up with respect to the original numerical model. Our results show that the proposed method captures pipe-level physical processes more accurately than node-based models. When tested on unseen water networks with a similar distribution of demands, our model retains good generalization performance, with a coefficient of determination of up to 0.98 for flowrates and up to 0.95 for predicted heads. Further developments could include the simultaneous derivation of pressures and flowrates.
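A schematic of what an edge-level update in such an edge-based GNN could look like: each pipe embedding is refined from its own state and the states of its endpoint junctions. This is a simplified sketch; module names, dimensions, and the single-step update are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class EdgeUpdate(nn.Module):
    """Update each pipe (edge) embedding from its own state and its endpoint junctions."""
    def __init__(self, node_dim=16, edge_dim=16, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, edge_dim))

    def forward(self, x, edge_index, e):
        # x: (N, node_dim) junction features; e: (E, edge_dim) pipe features
        # edge_index: (2, E) source/destination junction indices per pipe
        src, dst = edge_index
        return self.mlp(torch.cat([x[src], x[dst], e], dim=-1))

# Tiny example: 3 junctions connected by 2 pipes
x = torch.randn(3, 16)
edge_index = torch.tensor([[0, 1], [1, 2]])
e = torch.randn(2, 16)
new_e = EdgeUpdate()(x, edge_index, e)  # refined pipe-level representations
```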

4.
J Neural Eng ; 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38986468

ABSTRACT

OBJECTIVE: Electroencephalography (EEG) is widely recognized as an effective method for detecting fatigue. However, practical applications of EEG for fatigue detection in real-world scenarios are often challenging, particularly for subjects not included in the training datasets, owing to bio-individual differences and noisy labels. This study aims to develop an effective framework for cross-subject fatigue detection by addressing these challenges. APPROACH: In this study, we propose a novel framework, termed DP-MP, for cross-subject fatigue detection, which utilizes a Domain-Adversarial Neural Network (DANN)-based prototypical representation in conjunction with Mix-up pairwise learning. Our proposed DP-MP framework aims to mitigate the impact of bio-individual differences by encoding fatigue-related semantic structures within EEG signals and exploring shared fatigue prototype features across individuals. Notably, to the best of our knowledge, this work is the first to conceptualize fatigue detection as a pairwise learning task, thereby effectively reducing interference from noisy labels. Furthermore, we propose the Mix-up pairwise learning (MixPa) approach for fatigue detection, which broadens the advantages of pairwise learning by introducing more diverse and informative relationships among samples. RESULTS: Cross-subject experiments were conducted on two benchmark databases, SEED-VIG and FTEF, achieving state-of-the-art performance with average accuracies of 88.14% and 97.41%, respectively. These promising results demonstrate our model's effectiveness and excellent generalization capability. SIGNIFICANCE: This is the first time EEG-based fatigue detection has been conceptualized as a pairwise learning task, offering a novel perspective on this field. Moreover, our proposed DP-MP framework effectively tackles the challenges of bio-individual differences and noisy labels in fatigue detection and demonstrates superior performance. Our work provides valuable insights for future research, promoting the application of brain-computer interfaces for fatigue detection in real-world scenarios.
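Mix-up, the building block that MixPa extends, is compact enough to show inline. This is a generic sketch of the standard mix-up interpolation, not the paper's pairwise formulation, and the alpha value is an illustrative default.

```python
import numpy as np

def mixup(x_i, x_j, y_i, y_j, alpha=0.2):
    """Classic mix-up: convex combination of two samples and their labels."""
    lam = np.random.beta(alpha, alpha)
    x_mix = lam * x_i + (1.0 - lam) * x_j
    y_mix = lam * y_i + (1.0 - lam) * y_j  # soft label, more robust to noisy labels
    return x_mix, y_mix

# e.g., two EEG feature vectors with one-hot fatigue labels
x_mix, y_mix = mixup(np.random.randn(64), np.random.randn(64),
                     np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```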

5.
PeerJ Comput Sci ; 10: e2103, 2024.
Article in English | MEDLINE | ID: mdl-38983199

ABSTRACT

Images and videos containing fake faces are the most common type of digital manipulation. Such content can spread false information and lead to negative consequences. The use of machine learning algorithms to produce fake face images has made it challenging to distinguish between genuine and fake content. Face manipulations are categorized into four basic groups: entire face synthesis, face identity manipulation (deepfake), facial attribute manipulation, and facial expression manipulation. This study utilized lightweight convolutional neural networks to detect fake face images generated by entire face synthesis using generative adversarial networks. The training dataset includes 70,000 real images from the FFHQ dataset and 70,000 fake images produced with StyleGAN2 using the FFHQ dataset; 80% of the dataset was used for training and 20% for testing. Initially, the MobileNet, MobileNetV2, EfficientNetB0, and NASNetMobile convolutional neural networks were trained separately, each pre-trained on ImageNet and reused via transfer learning. After this initial training, the EfficientNetB0 model reached the highest accuracy, 93.64%. The EfficientNetB0 model was then revised to increase its accuracy by adding two dense layers (256 neurons) with ReLU activation, two dropout layers, one flattening layer, one dense layer (128 neurons) with ReLU activation, and a classification dense layer with two nodes and softmax activation. This process raised the accuracy of the EfficientNetB0 model to 95.48%. Finally, the model that achieved 95.48% accuracy was combined with the MobileNet and MobileNetV2 models using the stacking ensemble learning method, yielding the highest accuracy rate of 96.44%.
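A Keras sketch of the revised classifier head described above; the abstract does not fully specify the ordering of the added layers or the dropout rates, so the arrangement below is one plausible reading rather than the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))

x = layers.Flatten()(base.output)              # one flattening layer
x = layers.Dense(256, activation="relu")(x)    # first 256-neuron dense layer
x = layers.Dropout(0.5)(x)                     # dropout rate assumed
x = layers.Dense(256, activation="relu")(x)    # second 256-neuron dense layer
x = layers.Dropout(0.5)(x)
x = layers.Dense(128, activation="relu")(x)    # 128-neuron dense layer
outputs = layers.Dense(2, activation="softmax")(x)  # real vs. fake

model = models.Model(base.input, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```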

6.
PeerJ Comput Sci ; 10: e2107, 2024.
Article in English | MEDLINE | ID: mdl-38983235

ABSTRACT

Fine-tuning is an important technique in transfer learning that has achieved significant success in tasks that lack training data. However, it is difficult to extract effective features with single-source-domain fine-tuning when the data distribution difference between the source and target domains is large. To address this issue, we propose a multi-source-domain transfer learning framework called adaptive multi-source domain collaborative fine-tuning (AMCF). AMCF utilizes multiple source-domain models for collaborative fine-tuning, thereby improving the feature extraction capability of the model in the target task. Specifically, AMCF employs an adaptive multi-source-domain layer selection strategy to customize appropriate layer fine-tuning schemes for the target task among multiple source-domain models, aiming to extract more effective features. Furthermore, a novel multi-source-domain collaborative loss function is designed to facilitate the precise extraction of target data features by each source-domain model. Simultaneously, it works towards minimizing the output difference among the various source-domain models, thereby enhancing their adaptability to the target data. To validate the effectiveness of AMCF, we applied it to seven public visual classification datasets commonly used in transfer learning and compared it with the most widely used single-source-domain fine-tuning methods. Experimental results demonstrate that, in comparison with existing fine-tuning methods, our method not only enhances the accuracy of feature extraction but also provides precise layer fine-tuning schemes for the target task, thereby significantly improving fine-tuning performance.
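The collaborative loss described above can be thought of as a per-model task loss plus a consensus term that penalizes output disagreement; the sketch below encodes that idea, with the weighting lambda and the mean-squared disagreement term being assumptions rather than AMCF's exact formulation.

```python
import torch
import torch.nn.functional as F

def collaborative_loss(logits_per_model, labels, lam=0.1):
    """Task loss for each source-domain model + penalty on output disagreement."""
    task = sum(F.cross_entropy(logits, labels) for logits in logits_per_model)
    probs = [F.softmax(logits, dim=-1) for logits in logits_per_model]
    consensus = torch.stack(probs).mean(dim=0)
    agree = sum(F.mse_loss(p, consensus) for p in probs)  # pull models together
    return task + lam * agree

# e.g., three source-domain models scoring the same target batch
logits = [torch.randn(8, 5, requires_grad=True) for _ in range(3)]
loss = collaborative_loss(logits, torch.randint(0, 5, (8,)))
```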

7.
Plant Methods ; 20(1): 101, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38970029

ABSTRACT

BACKGROUND: The occurrence, development, and outbreak of tea diseases and pests pose a significant challenge to the quality and yield of tea, necessitating prompt identification and control measures. Given the vast array of tea diseases and pests, coupled with the intricacies of the tea-planting environment, accurate and rapid diagnosis remains elusive. To address this issue, the present study investigates the use of transfer learning with convolutional neural networks to identify tea diseases and pests. Our objective is to enable accurate and rapid detection of diseases and pests affecting the Yunnan big-leaf tea variety within its complex ecological niche. RESULTS: We first gathered 1,878 images covering 10 prevalent types of tea diseases and pests from complex environments within tea plantations, compiling a comprehensive dataset, and employed data augmentation techniques to enrich sample diversity. Leveraging ImageNet pre-trained models, we conducted a comprehensive evaluation and identified the Xception architecture as the most effective model. Notably, integrating an attention mechanism into the Xception model did not improve recognition performance. Subsequently, through transfer learning and a core-freezing strategy, we achieved a test accuracy of 98.58% and a validation accuracy of 98.23%. CONCLUSIONS: These outcomes signify a significant stride towards accurate and timely detection, holding promise for enhancing the sustainability and productivity of Yunnan tea. Our findings provide a theoretical foundation and technical guidance for the development of online detection technologies for tea diseases and pests in Yunnan.
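The core-freezing strategy mentioned above typically keeps the pre-trained convolutional base fixed while training a new classification head; a minimal Keras sketch under that assumption follows (the head design is illustrative, while the 10 output classes follow the abstract's ten disease/pest types).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False  # freeze the pre-trained core

x = layers.Dense(256, activation="relu")(base.output)  # head design assumed
outputs = layers.Dense(10, activation="softmax")(x)    # 10 tea disease/pest classes

model = models.Model(base.input, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```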

8.
Article in English | MEDLINE | ID: mdl-39029475

ABSTRACT

BACKGROUND: Glioblastoma Multiforme (GBM) is an aggressive form of malignant brain tumor with a generally poor prognosis. O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation has been shown to be a predictive biomarker for resistance to treatment of GBM, but determining methylation status is invasive and time-consuming. There has been effort to predict MGMT methylation status by analyzing MRI scans using machine learning, which requires only pre-operative scans that are already part of standard of care for GBM patients. PURPOSE: To improve the performance of conventional transfer learning in identifying MGMT promoter methylation status, we developed a 3D SpotTune network with adaptive fine-tuning capability. Using the pre-trained weights of MedicalNet with the SpotTune network, we compared its performance with a randomly initialized network for different combinations of MR modalities. METHODS: Using a ResNet50 as the base network, three categories of networks were created: 1) a 3D SpotTune network to process volumetric MR images, 2) a network with randomly initialized weights, and 3) a network pre-trained on MedicalNet. These three networks were trained and evaluated using a public GBM dataset provided by the University of Pennsylvania. MRI scans from 240 patients were used, with 11 different modalities corresponding to a set of perfusion, diffusion, and structural scans. Performance was evaluated using 5-fold cross-validation with a hold-out testing dataset. RESULTS: The SpotTune network showed better performance than the randomly initialized network. The best-performing SpotTune model achieved an area under the Receiver Operating Characteristic curve (AUC), average precision of the precision-recall curve (AP), sensitivity, and specificity of 0.6604, 0.6179, 0.6667, and 0.6061, respectively. CONCLUSIONS: SpotTune enables transfer learning that is adaptive to individual patients, resulting in improved performance in predicting MGMT promoter methylation status in GBM using equivalent MRI modalities, as compared to a randomly initialized network.

9.
Sci Rep ; 14(1): 16690, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39030206

ABSTRACT

Exoskeleton-based support for patients requires the learning of individual machine-learning models to recognize movement intentions of patients based on the electroencephalogram (EEG). A major issue in EEG-based movement intention recognition is the long calibration time required to train a model. In this paper, we propose a transfer learning approach that eliminates the need for a calibration session. This approach is validated on healthy subjects in this study. We will use the proposed approach in our future rehabilitation application, where the movement intention of the affected arm of a patient can be inferred from the EEG data recorded during bilateral arm movements enabled by the exoskeleton mirroring arm movements from the unaffected to the affected arm. For the initial evaluation, we compared two trained models for predicting unilateral and bilateral movement intentions without applying a classifier transfer. For the main evaluation, we predicted unilateral movement intentions without a calibration session by transferring the classifier trained on data from bilateral movement intentions. Our results showed that the classification performance for the transfer case was comparable to that in the non-transfer case, even with only 4 or 8 EEG channels. Our results contribute to robotic rehabilitation by eliminating the need for a calibration session, since EEG data for training is recorded during the rehabilitation session, and only a small number of EEG channels are required for model training.


Subjects
Electroencephalography; Exoskeleton Device; Intention; Movement; Humans; Electroencephalography/methods; Male; Calibration; Movement/physiology; Adult; Machine Learning; Female; Young Adult
10.
BMC Oral Health ; 24(1): 814, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39020332

ABSTRACT

BACKGROUND: To evaluate the performance of several advanced deep convolutional neural network models (AlexNet, VGG, GoogLeNet, ResNet) based on ensemble learning for recognizing chronic gingivitis from screening oral images. METHODS: A total of 683 intraoral clinical images acquired from 134 volunteers were used to construct the database and evaluate the models. Four deep ConvNet models were developed using ensemble learning, which outperformed any single model. The performances of the different models were evaluated by comparing accuracy and sensitivity in recognizing gingivitis in intraoral images. RESULTS: The ResNet model achieved an area under the curve (AUC) value of 97%, while the AUC values for the GoogLeNet, AlexNet, and VGG models were 94%, 92%, and 89%, respectively. Although the ResNet and GoogLeNet models performed best in classifying gingivitis from images, the sensitivity outcomes were not significantly different among the ResNet, GoogLeNet, and AlexNet models (p > 0.05). However, the sensitivity of the VGG model differed significantly from those of the other models (p < 0.001). CONCLUSION: The ResNet and GoogLeNet models show promise for identifying chronic gingivitis from images. These models can help doctors diagnose periodontal diseases efficiently, including from oral images taken by patients during self-examination.
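A common way to ensemble several trained CNNs, consistent with the setup above, is to average their predicted class probabilities (soft voting); the abstract does not state the exact combination rule, so this is an illustrative assumption.

```python
import numpy as np

def soft_vote(prob_matrices):
    """Average class probabilities from several models: list of (N, C) arrays."""
    return np.mean(np.stack(prob_matrices), axis=0)

# e.g., probabilities from ResNet, GoogLeNet, AlexNet, VGG on the same images
p_models = [np.random.rand(5, 2) for _ in range(4)]
p_models = [p / p.sum(axis=1, keepdims=True) for p in p_models]  # normalize dummies
ensemble_pred = soft_vote(p_models).argmax(axis=1)  # gingivitis vs. healthy
```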


Subjects
Gingivitis; Neural Networks, Computer; Humans; Gingivitis/diagnosis; Gingivitis/pathology; Chronic Disease; Adult; Female; Photography, Dental/methods; Male; Deep Learning; Photography
11.
Sensors (Basel) ; 24(13), 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-39000849

ABSTRACT

In response to the issues of low model recognition accuracy and weak generalization in mechanical equipment fault diagnosis caused by scarce data, this paper proposes an innovative solution: a cross-device secondary transfer-learning method based on an efficient gated recurrent unit network (EGRUN). This method uses the continuous wavelet transform (CWT) to convert source-domain data into images. The EGRUN model is initially trained on these images, and its shallow-layer weights are frozen. Subsequently, random overlapping sampling is applied to the target-domain data to augment it before performing secondary transfer learning. The experimental results demonstrate that this method not only significantly improves the model's ability to learn fault features but also enhances its classification accuracy and generalization performance. Compared to current state-of-the-art algorithms, the proposed model shows faster convergence, higher diagnostic accuracy, and superior robustness and generalization, providing an effective approach to the challenges arising from scarce data and varying operating conditions in practical engineering scenarios.
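The CWT-to-image step can be reproduced with PyWavelets; the Morlet wavelet, the scale range, and the 8-bit normalization below are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np
import pywt

def signal_to_scalogram(signal, scales=np.arange(1, 129), wavelet="morl"):
    """Convert a 1D vibration signal into an 8-bit scalogram image via CWT."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet)
    mag = np.abs(coeffs)
    mag = (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)
    return (255 * mag).astype(np.uint8)  # shape: (len(scales), len(signal))

image = signal_to_scalogram(np.random.randn(1024))  # dummy vibration segment
```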

12.
Sensors (Basel) ; 24(13), 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-39000931

ABSTRACT

Internet of Things (IoT) applications and resources are highly vulnerable to flood attacks, including Distributed Denial of Service (DDoS) attacks. These attacks overwhelm the targeted device with numerous network packets, making its resources inaccessible to authorized users. Records of such attacks may comprise attack references, attack types, sub-categories, host information, malicious scripts, etc. These details assist security professionals in identifying weaknesses, tailoring defense measures, and responding rapidly to possible threats, thereby improving the overall security posture of IoT devices. Developing an intelligent Intrusion Detection System (IDS) is highly complex due to the numerous network features involved. This study presents an improved IDS for IoT security that employs multimodal big-data representation and transfer learning. First, the Packet Capture (PCAP) files are crawled to retrieve the necessary attacks and bytes. Second, Spark-based big-data optimization algorithms handle the huge volumes of data. Third, a transfer learning approach such as word2vec derives semantic features from the observed data. Fourth, an algorithm is developed to convert network bytes into images, and texture features are extracted by configuring an attention-based Residual Network (ResNet). Finally, the trained text and texture features are combined and used as multimodal features to classify various attacks. The proposed method is thoroughly evaluated on three widely used IoT-based datasets: CIC-IoT 2022, CIC-IoT 2023, and Edge-IIoT. It achieves excellent classification performance, with an accuracy of 98.2%. In addition, we present a game-theory-based process to formally validate the proposed approach.
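The bytes-to-image conversion mentioned in the pipeline can be done by packing packet payload bytes into a square grayscale array; the fixed 32x32 size and zero-padding here are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def bytes_to_image(payload: bytes, side: int = 32) -> np.ndarray:
    """Pack raw network bytes into a (side, side) grayscale image for a CNN."""
    buf = np.frombuffer(payload, dtype=np.uint8)[: side * side]
    padded = np.zeros(side * side, dtype=np.uint8)
    padded[: buf.size] = buf  # zero-pad short payloads
    return padded.reshape(side, side)

img = bytes_to_image(b"\x45\x00\x00\x54" * 300)  # dummy packet bytes
```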

13.
Sensors (Basel) ; 24(13), 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-39001200

ABSTRACT

Acute lymphoblastic leukemia, commonly referred to as ALL, is a type of cancer that can affect both the blood and the bone marrow. Diagnosis is difficult, since it often calls for specialist testing such as blood tests, bone marrow aspiration, and biopsy, all of which are highly time-consuming and expensive. An early diagnosis of ALL is essential to start therapy in a timely and suitable manner. In recent medical diagnostics, substantial progress has been achieved through the integration of artificial intelligence (AI) and Internet of Things (IoT) devices. Our proposal introduces a new AI-based Internet of Medical Things (IoMT) framework designed to automatically identify leukemia from peripheral blood smear (PBS) images. In this study, we present a novel deep learning-based fusion model to detect ALL. The system seamlessly delivers diagnostic reports to a centralized database, inclusive of patient-specific devices. After blood samples are collected at the hospital, the PBS images are transmitted to the cloud server through a WiFi-enabled microscopic device. In the cloud server, a fusion model capable of classifying ALL from PBS images is configured. The fusion model is trained using a dataset of 6,512 original and segmented images from 89 individuals. Two input channels are used for feature extraction, one for the original images and one for the segmented images: VGG16 extracts features from the original images, whereas DenseNet-121 extracts features from the segmented images. The two output features are merged, and dense layers are used for the classification of leukemia. The proposed fusion model obtains an accuracy of 99.89%, a precision of 99.80%, and a recall of 99.72%, an excellent result for leukemia classification, and it outperformed several state-of-the-art Convolutional Neural Network (CNN) models. Consequently, the proposed model has the potential to save lives and effort. For a more comprehensive simulation of the entire methodology, a web application (beta version) has been developed in this study to determine the presence or absence of leukemia in individuals. The findings of this study hold significant potential for application in biomedical research, particularly in enhancing the accuracy of computer-aided leukemia detection.
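A Keras sketch of the two-channel fusion described above: VGG16 embeds the original image, DenseNet-121 the segmented one, and the pooled features are concatenated before dense classification layers. The pooling choice and head sizes are assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

inp_orig = layers.Input((224, 224, 3), name="original")
inp_seg = layers.Input((224, 224, 3), name="segmented")

vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                  pooling="avg")
dnet = tf.keras.applications.DenseNet121(include_top=False, weights="imagenet",
                                         pooling="avg")

feats = layers.Concatenate()([vgg(inp_orig), dnet(inp_seg)])  # merged features
x = layers.Dense(256, activation="relu")(feats)  # head size assumed
outputs = layers.Dense(2, activation="softmax")(x)  # ALL vs. healthy

model = models.Model([inp_orig, inp_seg], outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```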


Subjects
Deep Learning; Internet of Things; Humans; Precursor Cell Lymphoblastic Leukemia-Lymphoma/diagnosis; Artificial Intelligence; Leukemia/diagnosis; Leukemia/classification; Leukemia/pathology; Algorithms; Image Processing, Computer-Assisted/methods; Neural Networks, Computer
14.
Diagnostics (Basel) ; 14(13), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39001292

ABSTRACT

Breast cancer diagnosis from histopathology images is often time-consuming and prone to human error, impacting treatment and prognosis. Deep learning diagnostic methods offer the potential for improved accuracy and efficiency in breast cancer detection and classification. However, they struggle with limited data and subtle variations within and between cancer types. Attention mechanisms provide feature refinement capabilities that have shown promise in overcoming such challenges. To this end, this paper proposes the Efficient Channel Spatial Attention Network (ECSAnet), an architecture built on EfficientNetV2 and augmented with a convolutional block attention module (CBAM) and additional fully connected layers. ECSAnet was fine-tuned using the BreakHis dataset, employing Reinhard stain normalization and image augmentation techniques to minimize overfitting and enhance generalizability. In testing, ECSAnet outperformed AlexNet, DenseNet121, EfficientNetV2-S, InceptionNetV3, ResNet50, and VGG16 in most settings, achieving accuracies of 94.2% at 40×, 92.96% at 100×, 88.41% at 200×, and 89.42% at 400× magnifications. The results highlight the effectiveness of CBAM in improving classification accuracy and the importance of stain normalization for generalizability.
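CBAM itself is a compact, well-documented module (channel attention followed by spatial attention); a PyTorch sketch is below, with the reduction ratio and kernel size set to commonly used defaults rather than ECSAnet's specific configuration.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel then spatial attention."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors
        w = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3))))
        x = x * w.view(b, c, 1, 1)
        # Spatial attention from channel-wise mean and max maps
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

y = CBAM(64)(torch.randn(2, 64, 28, 28))  # refined feature map, same shape
```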

15.
Diagnostics (Basel) ; 14(13), 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39001328

ABSTRACT

Identifying patients with left ventricular ejection fraction (EF) that is either reduced [EF < 40% (rEF)], mid-range [EF 40-50% (mEF)], or preserved [EF > 50% (pEF)] is considered of primary clinical importance. An end-to-end video classification using AutoML in Google Vertex AI was applied to echocardiographic recordings. Datasets balanced by majority undersampling, each corresponding to one of the three possible classifications, were obtained from the Stanford EchoNet-Dynamic repository. A train-test split of 75/25 was applied. A binary video classification of rEF vs. not rEF demonstrated good performance (test dataset: ROC AUC score 0.939, accuracy 0.863, sensitivity 0.894, specificity 0.831, positive predictive value 0.842). A second binary classification of not pEF vs. pEF performed slightly worse (test dataset: ROC AUC score 0.917, accuracy 0.829, sensitivity 0.761, specificity 0.891, positive predictive value 0.888). A ternary classification was also explored, with lower performance observed, mainly for the mEF class. An open-access, non-AutoML PyTorch implementation confirmed the feasibility of our approach. With this proof of concept, end-to-end video classification based on transfer learning to categorize EF merits consideration for further evaluation in prospective clinical studies.

16.
J Neural Eng ; 21(4), 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-38968936

ABSTRACT

Objective. Domain adaptation has been recognized as a potent solution to the challenge of limited training data for electroencephalography (EEG) classification tasks. Existing studies primarily focus on homogeneous environments; however, the heterogeneous properties of EEG data arising from device diversity cannot be overlooked. This motivates the development of heterogeneous domain adaptation methods that can fully exploit the knowledge from an auxiliary heterogeneous domain for EEG classification. Approach. In this article, we propose a novel model named informative representation fusion (IRF) to tackle the problem of unsupervised heterogeneous domain adaptation in the context of EEG data. In IRF, we consider different perspectives of the data, i.e., independent and identically distributed (iid) and non-iid, to learn different representations. Specifically, from the non-iid perspective, IRF models high-order correlations among data by hypergraphs and develops hypergraph encoders to obtain data representations of each domain. From the iid perspective, by applying multi-layer perceptron networks to the source and target domain data, we achieve another type of representation for both domains. Subsequently, an attention mechanism is used to fuse these two types of representations to yield informative features. To learn transferable representations, the maximum mean discrepancy (MMD) is utilized to align the distributions of the source and target domains based on the fused features. Main results. Experimental results on several real-world datasets demonstrate the effectiveness of the proposed model. Significance. This article handles an EEG classification situation where the source and target EEG data lie in different spaces and, moreover, under an unsupervised learning setting. This situation is practical in the real world but barely studied in the literature. The proposed model achieves high classification accuracy, and this study is important for the commercial applications of EEG-based BCIs.
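The maximum mean discrepancy used for distribution alignment has a standard kernel form; a PyTorch sketch with an RBF kernel is below (the median bandwidth heuristic is an assumption, not IRF's exact choice).

```python
import torch

def rbf_mmd(x, y, sigma=None):
    """Biased MMD^2 estimate between samples x (n, d) and y (m, d), RBF kernel."""
    z = torch.cat([x, y])
    d2 = torch.cdist(z, z).pow(2)
    if sigma is None:  # median heuristic for the bandwidth (assumption)
        sigma = d2[d2 > 0].median().sqrt()
    k = torch.exp(-d2 / (2 * sigma**2))
    n = x.size(0)
    kxx, kyy, kxy = k[:n, :n], k[n:, n:], k[:n, n:]
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()

# e.g., aligning fused source features against target features
loss = rbf_mmd(torch.randn(32, 64), torch.randn(40, 64))
```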


Subjects
Electroencephalography; Electroencephalography/methods; Electroencephalography/classification; Humans; Unsupervised Machine Learning; Algorithms; Neural Networks, Computer
17.
Materials (Basel) ; 17(13), 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38998131

ABSTRACT

Establishing accurate structure-property linkages and precise phase volume accuracy in 3D microstructure reconstruction of materials remains challenging, particularly with limited samples. This paper presents an optimized method for reconstructing 3D microstructures of various materials, including isotropic and anisotropic types with two and three phases, using convolutional occupancy networks and point clouds from the inner layers of the microstructure. The method emphasizes precise phase representation and compatibility with point cloud data. A stage within the Quality of Connection Function (QCF) repetition loop optimizes the weights of the convolutional occupancy network to minimize the error between the statistical properties of the microstructure and those of the reconstructive model. The model successfully reconstructs 3D representations from initial 2D serial images. Comparisons with screened Poisson surface reconstruction and local implicit grid methods demonstrate the model's efficacy. The developed model proves suitable for high-quality 3D microstructure reconstruction, aiding structure-property linkage studies and finite element analysis.

18.
Front Neurosci ; 18: 1387196, 2024.
Article in English | MEDLINE | ID: mdl-39015378

ABSTRACT

Abnormal β-amyloid (Aβ) accumulation in the brain is an early indicator of Alzheimer's disease (AD) and is typically assessed through invasive procedures such as PET (positron emission tomography) or CSF (cerebrospinal fluid) assays. As new anti-Alzheimer's treatments can now successfully target amyloid pathology, there is a growing interest in predicting Aβ positivity (Aβ+) from less invasive, more widely available types of brain scans, such as T1-weighted (T1w) MRI. Here we compare multiple approaches to infer Aβ+ from standard anatomical MRI: (1) classical machine learning algorithms, including logistic regression, XGBoost, and shallow artificial neural networks; (2) deep learning models based on 2D and 3D convolutional neural networks (CNNs); (3) a hybrid ANN-CNN, combining the strengths of shallow and deep neural networks; (4) transfer learning models based on CNNs; and (5) 3D Vision Transformers. All models were trained on paired MRI/PET data from 1,847 elderly participants (mean age: 75.1 ± 7.6 years; 863 females/984 males; 661 healthy controls, 889 with mild cognitive impairment (MCI), and 297 with dementia), scanned as part of the Alzheimer's Disease Neuroimaging Initiative. We evaluated each model's balanced accuracy and F1 scores. While further tests on more diverse data are warranted, deep learning models trained on standard MRI showed promise for estimating Aβ+ status, at least in people with MCI. This may offer a potential screening option before resorting to more invasive procedures.
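The classical-ML branch of the comparison above amounts to a standard scikit-learn workflow; a minimal sketch with logistic regression and the two reported metrics follows. The features and labels are dummy placeholders standing in for MRI-derived measures and Aβ status.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score, f1_score

# Dummy stand-ins for MRI-derived features and Aβ+ labels
X = np.random.randn(500, 100)
y = np.random.randint(0, 2, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("balanced accuracy:", balanced_accuracy_score(y_te, pred))
print("F1:", f1_score(y_te, pred))
```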

19.
J Med Imaging (Bellingham) ; 11(4): 044502, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38988991

ABSTRACT

Purpose: Lung cancer is the second most common cancer and the leading cause of cancer death globally. Low-dose computed tomography (LDCT) is the recommended imaging screening tool for the early detection of lung cancer. A fully automated computer-aided detection method for LDCT would greatly improve the existing clinical workflow. Most existing methods for lung nodule detection are designed for high-dose CTs (HDCTs) and cannot be directly applied to LDCTs due to domain shifts and the inferior quality of LDCT images. In this work, we describe a semi-automated transfer learning-based approach for the early detection of lung nodules using LDCTs. Approach: We developed an algorithm based on the object detection model you only look once (YOLO) to detect lung nodules. The YOLO model was first trained on HDCTs, and the pre-trained weights were used as initial weights during retraining of the model on LDCTs in a medical-to-medical transfer learning approach. The dataset for this study came from a screening trial consisting of LDCTs acquired from 50 biopsy-confirmed lung cancer patients over 3 consecutive years (T1, T2, and T3); HDCTs from about 60 lung cancer patients were obtained from a public dataset. The developed model was evaluated using a hold-out test set comprising 15 patient cases (93 slices with cancerous nodules) using precision, specificity, recall, and F1-score. The evaluation metrics were reported patient-wise on a per-year basis and averaged over the 3 years. For comparative analysis, the proposed detection model was also trained using pre-trained weights from the COCO dataset as initial weights. A paired t-test and chi-squared test with an alpha value of 0.05 were used for statistical significance testing. Results: Comparing the model developed with HDCT pre-trained weights against the one with COCO pre-trained weights, the former versus the latter obtained a precision of 0.982 versus 0.93 in detecting cancerous nodules, a specificity of 0.923 versus 0.849 in identifying slices with no cancerous nodules, a recall of 0.87 versus 0.886, and an F1-score of 0.924 versus 0.903. As the nodules progressed, the former approach achieved a precision of 1, a specificity of 0.92, and a sensitivity of 0.930. The statistical analysis in the comparative study yielded a p-value of 0.0054 for precision and a p-value of 0.00034 for specificity. Conclusions: In this study, a semi-automated method was developed to detect lung nodules in LDCTs using HDCT pre-trained weights as the initial weights and retraining the model; the results were compared with the same approach using COCO pre-trained weights instead. The proposed method may identify early lung nodules during the screening program, reduce overdiagnosis and follow-ups due to misdiagnosis in LDCTs, enable treatment to start earlier in affected patients, and lower the mortality rate.
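The medical-to-medical transfer step can be expressed with a modern YOLO toolkit; the abstract does not name the YOLO version or framework, so the Ultralytics API, file names, and hyperparameters below are all assumptions for illustration, not the authors' setup.

```python
from ultralytics import YOLO

# Stage 1 (assumed done elsewhere): a YOLO model trained on HDCT slices,
# saved as weights. Stage 2: reuse those weights as initialization and
# retrain on the LDCT screening data.
model = YOLO("hdct_pretrained.pt")        # hypothetical HDCT-trained weights
model.train(data="ldct_nodules.yaml",     # hypothetical LDCT dataset config
            epochs=100, imgsz=640)
metrics = model.val()                     # precision/recall on a hold-out split
```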

20.
J Cheminform ; 16(1): 79, 2024 Jul 07.
Article in English | MEDLINE | ID: mdl-38972994

ABSTRACT

BACKGROUND: Previous deep learning methods for predicting protein binding pockets mainly employed 3D convolution, yet an abundance of convolution operations may lead a model to excessively prioritize local information and overlook global information. Moreover, it is essential to account for the influence of diverse protein folding structural classes, because proteins in different structural classes exhibit varying biological functions, whereas those within the same structural class share similar functional attributes. RESULTS: We proposed LVPocket, a novel method that synergistically captures both local and global information of protein structure through the integration of Transformer encoders, which help the model achieve better performance in binding pocket prediction. We then tailored prediction models for four distinct structural classes of proteins using transfer learning; these four fine-tuned models were trained from the baseline LVPocket model, which was trained on the sc-PDB dataset. LVPocket exhibits superior performance on three independent datasets compared to current state-of-the-art methods, and the fine-tuned models outperform the baseline model. SCIENTIFIC CONTRIBUTION: We present a novel model structure for predicting protein binding pockets that addresses the reliance on extensive convolutional computation at the expense of global information about protein structures. Furthermore, we tackle the impact of different protein folding structures on binding pocket prediction through the application of transfer learning.
