Results 1 - 20 of 963
1.
BMC Pregnancy Childbirth ; 24(1): 628, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39354367

ABSTRACT

OBJECTIVE: This study introduces the complete blood count (CBC), a standard prenatal screening test, as a biomarker for diagnosing preeclampsia with severe features (sPE), employing machine learning models. METHODS: We used a boosting machine learning model fed with synthetic data generated through a new methodology called DAS (Data Augmentation and Smoothing). Using data from a Brazilian study including 132 pregnant women, we generated 3,552 synthetic samples for model training. To improve interpretability, we also provided a ridge regression model. RESULTS: Our boosting model obtained an AUROC of 0.90±0.10, sensitivity of 0.95, and specificity of 0.79 to differentiate sPE and non-PE pregnant women, using the CBC parameters neutrophil count, mean corpuscular hemoglobin (MCH), and the aggregate index of systemic inflammation (AISI). In addition, we provided a ridge regression equation using the same three CBC parameters, which is fully interpretable and achieved an AUROC of 0.79±0.10 to differentiate the two groups. Moreover, we showed that a monocyte count lower than 490/mm³ yielded a sensitivity of 0.71 and a specificity of 0.72. CONCLUSION: Our study showed that an ML-powered CBC could be used as a biomarker to support sPE diagnosis. In addition, we showed that a low monocyte count alone could be an indicator of sPE. SIGNIFICANCE: Although preeclampsia has been extensively studied, no laboratory biomarker with favorable cost-effectiveness has been proposed. Using artificial intelligence, we proposed to use the CBC, a low-cost, fast, and widely available blood test, as a biomarker for sPE.


Subject(s)
Biomarkers, Machine Learning, Pre-Eclampsia, Humans, Pre-Eclampsia/diagnosis, Pre-Eclampsia/blood, Female, Pregnancy, Biomarkers/blood, Blood Cell Count/methods, Adult, Sensitivity and Specificity, Brazil, Severity of Illness Index, ROC Curve, Prenatal Diagnosis/methods
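
The interpretable ridge equation described in this abstract is not reproduced in the listing; the following numpy-only sketch only illustrates the idea of scoring three CBC parameters with a closed-form ridge fit. All feature distributions below are invented for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic cohort (values invented): neutrophil count,
# MCH, and AISI for non-PE (label 0) and sPE (label 1) pregnancies.
n = 200
X0 = rng.normal([5.0, 30.0, 400.0], [1.0, 2.0, 100.0], size=(n, 3))
X1 = rng.normal([7.0, 28.0, 700.0], [1.0, 2.0, 150.0], size=(n, 3))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Closed-form ridge regression on the binary label yields a fully
# interpretable linear risk equation: risk = b1*neut + b2*MCH + b3*AISI + b0.
Xc = np.hstack([X, np.ones((2 * n, 1))])            # add intercept column
alpha = 1.0
beta = np.linalg.solve(Xc.T @ Xc + alpha * np.eye(4), Xc.T @ y)
scores = Xc @ beta

# AUROC computed directly from the rank statistic.
pos, neg = scores[y == 1], scores[y == 0]
auroc = (pos[:, None] > neg[None, :]).mean()
print(f"ridge coefficients: {beta[:3].round(4)}, synthetic AUROC: {auroc:.2f}")
```

On well-separated synthetic data like this the AUROC is near 1; the study's reported 0.79±0.10 reflects real clinical overlap.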
2.
Comput Biol Chem ; 113: 108231, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39362115

ABSTRACT

BACKGROUND: Crohn's disease is a complex genetic disease that involves chronic gastrointestinal inflammation and results from a complex set of genetic, environmental, and immunological factors. By analyzing data from the human microbiome, genetic information can be used to predict Crohn's disease. Recent advances in deep learning have demonstrated its effectiveness in feature extraction and in decoding genetic information for disease prediction. METHODS: In this paper, we present a deep learning-based model that utilizes a sequential convolutional attention network (SCAN) for feature extraction, incorporates adaptive additive interval losses to enhance these features, and employs a support vector machine (SVM) for classification. To address the challenge of imbalanced Crohn's disease samples, we propose a random-noise one-hot encoding data augmentation method. RESULTS: Data augmentation with random noise accelerates training convergence, while SCAN-SVM effectively extracts features, with the adaptive additive interval loss enhancing their differentiation. Our approach outperforms benchmark methods, achieving an average accuracy of 0.80 and a kappa value of 0.76, and we validate the effectiveness of the feature enhancement. CONCLUSIONS: In summary, we use deep feature recognition to effectively analyze the latent information in genes, which has good application potential for gene analysis and the prediction of Crohn's disease.
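
The random-noise one-hot encoding augmentation is only named in the abstract; a minimal sketch of one plausible form is below, assuming genotype-style categorical inputs (the noise scale and renormalization step are assumptions, not the paper's specification):

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_one_hot(genotypes, n_classes=3, noise_scale=0.05):
    """One-hot encode a categorical genotype vector, then add small random
    noise so duplicated minority-class samples are not exact copies."""
    one_hot = np.eye(n_classes)[genotypes]            # (L, n_classes)
    noise = rng.uniform(0.0, noise_scale, one_hot.shape)
    augmented = one_hot + noise
    # Renormalize rows so each position still sums to ~1 and the
    # original class remains the argmax.
    return augmented / augmented.sum(axis=1, keepdims=True)

sample = np.array([0, 1, 2, 1, 0])                    # e.g. SNP genotypes 0/1/2
aug = noisy_one_hot(sample)
print(aug.shape, aug.argmax(axis=1))
```

Because the added noise is much smaller than 1, the perturbed encoding still decodes to the original genotype while giving the classifier slightly varied inputs.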

3.
Med Biol Eng Comput ; 2024 Oct 04.
Article in English | MEDLINE | ID: mdl-39365519

ABSTRACT

Segmentation of organs at risk (OARs) in the thorax plays a critical role in radiation therapy for lung and esophageal cancer. Although automatic segmentation of OARs has been extensively studied, it remains challenging due to the varying sizes and shapes of organs, as well as the low contrast between the target and background. This paper proposes a cascaded FAS-UNet+ framework, which integrates convolutional neural networks and nonlinear multigrid theory to solve a modified Mumford-Shah model for segmenting OARs. This framework is equipped with an enhanced iteration block, a coarse-to-fine multiscale architecture, an iterative optimization strategy, and a model ensemble technique. The enhanced iteration block aims to extract multiscale features, while the cascade module is used to refine coarse segmentation predictions. The iterative optimization strategy improves the network parameters to avoid unfavorable local minima. An efficient data augmentation method is also developed to train the network, which significantly improves its performance. During the prediction stage, a weighted ensemble technique combines predictions from multiple models to refine the final segmentation. The proposed cascaded FAS-UNet+ framework was evaluated on the SegTHOR dataset, and the results demonstrate significant improvements in Dice score and Hausdorff distance (HD). On the official unlabeled dataset, the Dice scores were 95.22% and 95.68%, and the HD values were 0.1024 and 0.1194, for the segmentation of the aorta and the heart, respectively. Our code and trained models are available at https://github.com/zhuhui100/C-FASUNet-plus .

4.
Rev Cardiovasc Med ; 25(9): 335, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39355611

ABSTRACT

Background: Congenital heart diseases (CHDs), particularly atrial and ventricular septal defects, pose significant health risks and are commonly challenging to detect via echocardiography. Doctors often employ cardiac structural information during the diagnostic process. However, prior CHD research has not determined the influence of including cardiac structural information during the labeling process, nor of applying data augmentation techniques. Methods: This study utilizes advanced artificial intelligence (AI)-driven object detection frameworks, specifically You Only Look Once (YOLO)v5, YOLOv7, and YOLOv9, to assess the impact of including cardiac structural information and of data augmentation techniques on the identification of septal defects in echocardiographic images. Results: The experimental results reveal that different labeling strategies substantially affect the performance of the detection models. Notably, adjustments in bounding-box dimensions and the inclusion of cardiac structural details in the annotations are key factors influencing the accuracy of the models. The application of deep learning techniques in echocardiography enhances the precision of detecting septal heart defects. Conclusions: This study confirms that careful annotation of imaging data is crucial for optimizing the performance of object detection algorithms in medical imaging. These findings suggest potential pathways for refining AI applications in diagnostic cardiology.

5.
Front Comput Neurosci ; 18: 1360095, 2024.
Article in English | MEDLINE | ID: mdl-39371524

ABSTRACT

Introduction: Machine learning (ML) has emerged as a promising approach in healthcare, outperforming traditional statistical techniques. However, to establish ML as a reliable tool in clinical practice, adherence to best practices in data handling and in modeling design and assessment is crucial. In this work, we summarize and strictly adhere to such practices to ensure reproducible and reliable ML. Specifically, we focus on Alzheimer's disease (AD) detection, a challenging problem in healthcare. Additionally, we investigate the impact of modeling choices, including different data augmentation techniques and model complexity, on overall performance. Methods: We utilize magnetic resonance imaging (MRI) data from the ADNI corpus to address a binary classification problem using 3D convolutional neural networks (CNNs). Data processing and modeling are specifically tailored to address data scarcity and minimize computational overhead. Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures with varying convolutional layer counts. The augmentation strategies involve affine transformations, such as zoom, shift, and rotation, applied either concurrently or separately. Results: The combined effect of data augmentation and model complexity results in up to 10% variation in prediction accuracy. Notably, when the affine transformations are applied separately, the model achieves higher accuracy, regardless of the chosen architecture. Across all strategies, model accuracy exhibits a concave behavior as the number of convolutional layers increases, peaking at an intermediate value. The best model reaches excellent performance on both the internal and an additional external testing set. Discussion: Our work underscores the critical importance of adhering to rigorous experimental practices in the field of ML applied to healthcare. The results clearly demonstrate how data augmentation and model depth, often overlooked factors, can dramatically impact final performance if not thoroughly investigated. This highlights both the necessity of exploring neglected modeling aspects and the need to comprehensively report all modeling choices to ensure reproducibility and facilitate meaningful comparisons across studies.
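
The "separate" versus "concurrent" affine-augmentation strategies compared in this abstract can be sketched with `scipy.ndimage` on a random stand-in volume. The transform parameters (10° rotation, 2-voxel shift, 1.1× zoom) are illustrative assumptions, not the study's settings:

```python
import numpy as np
from scipy.ndimage import rotate, shift, zoom

rng = np.random.default_rng(0)
volume = rng.random((16, 16, 16))   # stand-in for a preprocessed MRI volume

def center_crop(vol, size):
    """Crop a cube of `size` from the center of `vol` (zoom changes shape)."""
    start = [(s - size) // 2 for s in vol.shape]
    return vol[start[0]:start[0] + size,
               start[1]:start[1] + size,
               start[2]:start[2] + size]

def augment_separately(vol):
    """One augmented copy per affine transform (the 'separate' strategy)."""
    yield rotate(vol, angle=10, axes=(0, 1), reshape=False, order=1)
    yield shift(vol, shift=(2, 0, 0), order=1)
    yield center_crop(zoom(vol, 1.1, order=1), vol.shape[0])

def augment_concurrently(vol):
    """All three transforms composed on one copy (the 'concurrent' strategy)."""
    out = rotate(vol, angle=10, axes=(0, 1), reshape=False, order=1)
    out = shift(out, shift=(2, 0, 0), order=1)
    return center_crop(zoom(out, 1.1, order=1), vol.shape[0])

separate = list(augment_separately(volume))
concurrent = augment_concurrently(volume)
print(len(separate), concurrent.shape)
```

Note the separate strategy triples the number of training volumes per original, while the concurrent one yields a single, more heavily distorted copy; the abstract reports the former performing better.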

6.
Neural Netw ; 181: 106753, 2024 Sep 29.
Article in English | MEDLINE | ID: mdl-39378605

ABSTRACT

While data augmentation (DA) is generally applied to input data, several studies have reported that applying DA to hidden layers in neural networks, i.e., feature augmentation, can improve performance. However, in previous studies, the layers to which DA is applied have not been carefully considered, often being applied randomly and uniformly or only to a specific layer, leaving room for arbitrariness. Thus, in this study, we investigated the trends of suitable layers for applying DA in various experimental configurations, e.g., training from scratch, transfer learning, various dataset settings, and different models. In addition, to adjust the suitable layers for DA automatically, we propose the adaptive layer selection (AdaLASE) method, which updates the ratio to perform DA for each layer based on the gradient descent method during training. The experimental results obtained on several image classification datasets indicate that the proposed AdaLASE method altered the ratio as expected and achieved high overall test accuracy.
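
AdaLASE's exact update rule is not given in the abstract; the toy sketch below only illustrates the general idea of learning a per-layer probability of applying feature augmentation, with an invented per-layer "benefit" signal standing in for the true gradient of the validation loss:

```python
import numpy as np

n_layers = 4
logits = np.zeros(n_layers)            # learnable per-layer DA preference

def da_ratios(logits):
    """Softmax over the logits: the probability of applying DA at each layer."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical benefit signal: pretend augmenting at layer 2 helps most.
# In AdaLASE this role is played by a gradient computed during training.
benefit = np.array([0.1, 0.2, 1.0, 0.1])
lr = 0.5
for _ in range(50):
    logits += lr * benefit             # gradient-ascent-style ratio update

ratios = da_ratios(logits)
print(ratios.round(3))
```

After training, the ratio mass concentrates on the layer whose augmentation most improves the proxy objective, which is the qualitative behavior the paper reports.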

7.
Neural Netw ; 180: 106734, 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39332212

ABSTRACT

It is extremely challenging to classify steady-state visual evoked potentials (SSVEPs) in scenarios characterized by a severe scarcity of calibration data, where only one calibration trial is available for each stimulus target. To address this challenge, we introduce a novel approach named OS-SSVEP, which combines a cross-subject dual-domain fusion network (CSDuDoFN) with task-related and task-discriminant component analysis (TRCA and TDCA) based on data augmentation. The CSDuDoFN framework is designed to comprehensively transfer information from source subjects, while TRCA and TDCA are employed to exploit the information from the single available calibration trial of the target subject. Specifically, CSDuDoFN uses multi-reference least-squares transformation (MLST) to map data from both the source subjects and the target subject into the domain of sine-cosine templates, thereby reducing the cross-subject domain gap and benefiting transfer learning. In addition, CSDuDoFN is fed with both transformed and original data, with an adequate fusion of their features occurring at different network layers. To capitalize on the calibration trial of the target subject, OS-SSVEP utilizes source aliasing matrix estimation (SAME)-based data augmentation to incorporate augmented data into the training of the ensemble TRCA (eTRCA) and TDCA models. Ultimately, the outputs of CSDuDoFN, eTRCA, and TDCA are combined for SSVEP classification. The effectiveness of our proposed approach is comprehensively evaluated on three publicly available SSVEP datasets, achieving the best performance on two datasets and competitive performance on the third. Further, it is worth noting that our method follows a different technical route from the current state-of-the-art (SOTA) method, and the two are complementary; performance improves significantly when they are combined. This study underscores the potential of integrating the SSVEP-based brain-computer interface (BCI) into daily life. The corresponding source code is accessible at https://github.com/Sungden/One-shot-SSVEP-classification.

8.
Sensors (Basel) ; 24(18)2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39338761

ABSTRACT

This paper explores the use of state-of-the-art latent diffusion models, specifically stable diffusion, to generate synthetic images for improving the robustness of visual defect segmentation in manufacturing components. Given the scarcity and imbalance of real-world defect data, synthetic data generation offers a promising solution for training deep learning models. We fine-tuned stable diffusion using the LoRA technique on the NEU-seg dataset and evaluated the impact of different ratios of synthetic to real images on the training set of DeepLabV3+ and FPN segmentation models. Our results demonstrated a significant improvement in mean Intersection over Union (mIoU) when the training dataset was augmented with synthetic images. This study highlights the potential of diffusion models for enhancing the quality and diversity of training data in industrial defect detection, leading to more accurate and reliable segmentation results. The proposed approach achieved improvements of 5.95% and 6.85% in mIoU of defect segmentation on each model over the original dataset.
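
The synthetic-to-real ratio sweep described above reduces, in code, to building training lists with a controlled proportion of generated images. A minimal sketch, where the file names and ratios are invented placeholders for the NEU-seg real images and stable-diffusion/LoRA outputs:

```python
import random

random.seed(0)

real = [f"real_{i:04d}.png" for i in range(400)]
synthetic = [f"sd_lora_{i:04d}.png" for i in range(2000)]  # hypothetical LoRA outputs

def mixed_training_set(real_imgs, synth_imgs, synth_ratio):
    """Build a training list with `synth_ratio` synthetic images
    per real image, sampled without replacement."""
    k = int(len(real_imgs) * synth_ratio)
    return real_imgs + random.sample(synth_imgs, k)

sizes = [len(mixed_training_set(real, synthetic, r)) for r in (0.5, 1.0, 2.0)]
print(sizes)
```

Each ratio yields a different augmented training set for the DeepLabV3+/FPN models; the study's mIoU gains came from comparing such mixes against the purely real baseline.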

9.
Neural Netw ; 180: 106651, 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39217862

ABSTRACT

Graph neural networks (GNNs) have achieved state-of-the-art performance in graph representation learning. Message passing neural networks, which learn representations through recursively aggregating information from each node and its neighbors, are among the most commonly-used GNNs. However, a wealth of structural information of individual nodes and full graphs is often ignored in such process, which restricts the expressive power of GNNs. Various graph data augmentation methods that enable the message passing with richer structure knowledge have been introduced as one main way to tackle this issue, but they are often focused on individual structure features and difficult to scale up with more structure features. In this work we propose a novel approach, namely collective structure knowledge-augmented graph neural network (CoS-GNN), in which a new message passing method is introduced to allow GNNs to harness a diverse set of node- and graph-level structure features, together with original node features/attributes, in augmented graphs. In doing so, our approach largely improves the structural knowledge modeling of GNNs in both node and graph levels, resulting in substantially improved graph representations. This is justified by extensive empirical results where CoS-GNN outperforms state-of-the-art models in various graph-level learning tasks, including graph classification, anomaly detection, and out-of-distribution generalization.
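
The core idea of augmenting message passing with collective structure knowledge can be illustrated by concatenating simple node-level structure features onto the original attributes. This is a minimal dependency-free sketch of the general idea, not CoS-GNN's actual feature set or message-passing scheme:

```python
import numpy as np

# Toy graph as an adjacency list; node features are 2-d attributes.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
x = np.ones((4, 2))

def structure_augmented_features(adj, x):
    """Concatenate node degree and triangle count onto the original
    attributes, so message passing sees structural information that
    plain neighbor aggregation would ignore."""
    n = len(adj)
    deg = np.array([len(adj[v]) for v in range(n)], dtype=float)
    tri = np.zeros(n)
    for v in range(n):
        nbrs = set(adj[v])
        # Each triangle through v is counted once per participating edge pair.
        tri[v] = sum(len(nbrs & set(adj[u])) for u in nbrs) / 2
    return np.hstack([x, deg[:, None], tri[:, None]])

x_aug = structure_augmented_features(adj, x)
print(x_aug.shape)
```

A GNN fed `x_aug` instead of `x` can distinguish nodes that plain attribute aggregation would treat identically, which is the expressiveness gap the paper targets.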

10.
Brief Bioinform ; 25(5)2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39288230

ABSTRACT

Compared with analyzing omics data from a single platform, an integrative analysis of multi-omics data provides a more comprehensive understanding of the regulatory relationships among biological features associated with complex diseases. However, most existing frameworks for integrative analysis overlook two crucial aspects of multi-omics data. First, they neglect the known dependencies among biological features that are recorded in highly credible biological databases. Second, most existing integrative frameworks simply remove the subjects without full omics data to handle block missingness, decreasing statistical power. To overcome these issues, we propose a network-based integrative Bayesian framework for biomarker selection and disease outcome prediction based on multi-omics data. Our framework utilizes a Dirac spike-and-slab variable-selection prior to identify a small subset of biomarkers. The incorporation of gene pathway information improves the interpretability of the feature selection. Furthermore, with the strategy of the FBM ("full Bayesian model with missingness") model, where missing omics data are augmented via a mechanistic model, our framework handles block missingness in multi-omics data via a data augmentation approach. A real application illustrates that our approach, which incorporates existing gene pathway information and includes subjects without DNA methylation data, results in more interpretable feature selection and more accurate predictions.


Subject(s)
Bayes Theorem, Biomarkers, Humans, Biomarkers/metabolism, Computational Biology/methods, Genomics/methods, Gene Regulatory Networks, Algorithms, Multiomics
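
The sparsity device named in this abstract, a Dirac spike-and-slab prior, is easy to illustrate in isolation: each coefficient is exactly zero (the spike) unless its inclusion indicator fires, in which case it is drawn from a Gaussian slab. The hyperparameters below are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_spike_slab(p, inclusion_prob=0.1, slab_sd=1.0):
    """Draw p coefficients from a Dirac spike-and-slab prior: with
    probability `inclusion_prob` a feature gets a Gaussian slab draw,
    otherwise it is set exactly to zero (the Dirac spike)."""
    included = rng.random(p) < inclusion_prob
    beta = np.where(included, rng.normal(0.0, slab_sd, p), 0.0)
    return beta, included

beta, included = sample_spike_slab(p=1000)
print(included.sum(), np.count_nonzero(beta))
```

In a full Bayesian fit the posterior over `included` is what yields the "small subset of biomarkers"; the sketch only shows the prior's sparsity mechanism.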
11.
ArXiv ; 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39279836

ABSTRACT

We propose a lesion-aware graph neural network (LEGNet) to predict language ability from resting-state fMRI (rs-fMRI) connectivity in patients with post-stroke aphasia. Our model integrates three components: an edge-based learning module that encodes functional connectivity between brain regions, a lesion encoding module, and a subgraph learning module that leverages functional similarities for prediction. We use synthetic data derived from the Human Connectome Project (HCP) for hyperparameter tuning and model pretraining. We then evaluate the performance using repeated 10-fold cross-validation on an in-house neuroimaging dataset of post-stroke aphasia. Our results demonstrate that LEGNet outperforms baseline deep learning methods in predicting language ability. LEGNet also exhibits superior generalization ability when tested on a second in-house dataset that was acquired under a slightly different neuroimaging protocol. Taken together, the results of this study highlight the potential of LEGNet in effectively learning the relationships between rs-fMRI connectivity and language ability in a patient cohort with brain lesions for improved post-stroke aphasia evaluation.

12.
Spectrochim Acta A Mol Biomol Spectrosc ; 325: 125086, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39288601

ABSTRACT

The rapid and non-destructive detection of pesticide residues in Hami melons is of substantial importance in protecting consumer health. However, the investment of time and resources needed to procure sample data poses a challenge, often resulting in limited datasets and, consequently, in insufficient accuracy of the established models. In this study, an innovative variant of the generative adversarial network (GAN), named regression GAN (RGAN), was proposed. It was used to synchronously extend the visible near-infrared (VNIR) and short-wave infrared (SWIR) hyperspectral data and the corresponding acetamiprid residue content data of Hami melon. Support vector regression (SVR) and partial least squares regression (PLSR) models were trained using the generated data and subsequently validated with real data to assess the reliability of the generated data. In addition, the generated data were added to the real data to extend the dataset. The SVR model based on SWIR-HSI data achieved the best performance after data augmentation, yielding Rp², RMSEP, and RPD values of 0.8781, 0.6962, and 2.7882, respectively. The RGAN extends the range of GAN applications from classification problems to regression problems, and serves as a valuable reference for quantitative analysis in chemometrics.

13.
J Am Med Inform Assoc ; 31(10): 2284-2293, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39271171

ABSTRACT

OBJECTIVES: The aim of this study was to investigate GPT-3.5 for generating and coding medical documents with International Classification of Diseases (ICD)-10 codes for data augmentation on low-resource labels. MATERIALS AND METHODS: Employing GPT-3.5, we generated and coded 9606 discharge summaries based on lists of ICD-10 code descriptions of patients with infrequent (generation) codes within the MIMIC-IV dataset. Combined with the baseline training set, this formed an augmented training set. Neural coding models were trained on the baseline and augmented data and evaluated on a MIMIC-IV test set. We report micro- and macro-F1 scores on the full codeset, the generation codes, and their families. Weak hierarchical confusion matrices determined within-family and outside-of-family coding errors in the latter codesets. The coding performance of GPT-3.5 was evaluated on prompt-guided self-generated data and on real MIMIC-IV data. Clinicians evaluated the clinical acceptability of the generated documents. RESULTS: Data augmentation results in slightly lower overall model performance but improves performance for the generation candidate codes and their families, including one absent from the baseline training data. Augmented models display lower out-of-family error rates. GPT-3.5 identifies ICD-10 codes by their prompted descriptions but underperforms on real data. Evaluators highlighted the correctness of the generated concepts while noting deficiencies in variety, supporting information, and narrative. DISCUSSION AND CONCLUSION: While GPT-3.5 alone, given our prompt setting, is unsuitable for ICD-10 coding, it supports data augmentation for training neural coding models. Augmentation positively affects the generation code families but mainly benefits codes with existing examples. Augmentation reduces out-of-family errors. Documents generated by GPT-3.5 state the prompted concepts correctly but lack variety and authenticity in their narratives.


Subject(s)
Clinical Coding, International Classification of Diseases, Patient Discharge Summaries, Humans, Electronic Health Records, Patient Discharge
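
The generation step of this kind of pipeline amounts to turning a list of ICD-10 code descriptions into an LLM prompt. The wording below is a hypothetical sketch, not the study's actual prompt, and no model call is made here:

```python
def build_generation_prompt(icd10_descriptions):
    """Assemble a prompt asking an LLM to write a discharge summary that
    instantiates every listed ICD-10 code description. The phrasing is
    an invented example of prompt-guided generation."""
    bullet_list = "\n".join(f"- {d}" for d in icd10_descriptions)
    return (
        "Write a realistic hospital discharge summary for a patient whose "
        "record should support coding ALL of the following ICD-10 diagnoses:\n"
        f"{bullet_list}\n"
        "Return only the summary text."
    )

prompt = build_generation_prompt([
    "A41.9 Sepsis, unspecified organism",
    "N17.9 Acute kidney failure, unspecified",
])
print(prompt)
```

The generated summaries, paired with their prompted code lists, then become synthetic training examples for the low-resource labels.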
14.
Diagn Interv Imaging ; 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39261225

ABSTRACT

While artificial intelligence (AI) is already well established in diagnostic radiology, it is beginning to make its mark in interventional radiology. AI has the potential to dramatically change the daily practice of interventional radiology at several levels. In the preoperative setting, recent advances in deep learning models, particularly foundation models, enable effective management of multimodality and increased autonomy through their ability to function minimally without supervision. Multimodality is at the heart of patient-tailored management and, in interventional radiology, this translates into the development of innovative models for patient selection and outcome prediction. In the perioperative setting, AI is manifesting itself in applications that assist radiologists in image analysis and real-time decision making, thereby improving the efficiency, accuracy, and safety of interventions. In synergy with advances in robotic technologies, AI is laying the groundwork for increased autonomy. From a research perspective, the development of artificial health data, such as AI-based data augmentation, offers an innovative solution to the central issue of data scarcity and promises to stimulate research in this area. This review aims to provide the medical community with the most important current and future applications of AI in interventional radiology.

15.
Comput Biol Med ; 182: 109129, 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39265478

ABSTRACT

Modeling and manufacturing of personalized cranial implants are important research areas that may decrease the waiting time for patients suffering from cranial damage. The modeling of personalized implants may be partially automated by the use of deep learning-based methods. However, this task suffers from difficulties with generalizability into data from previously unseen distributions that make it difficult to use the research outcomes in real clinical settings. Due to difficulties with acquiring ground-truth annotations, different techniques to improve the heterogeneity of datasets used for training the deep networks have to be considered and introduced. In this work, we present a large-scale study of several augmentation techniques, varying from classical geometric transformations, image registration, variational autoencoders, and generative adversarial networks, to the most recent advances in latent diffusion models. We show that the use of heavy data augmentation significantly increases both the quantitative and qualitative outcomes, resulting in an average Dice Score above 0.94 for the SkullBreak and above 0.96 for the SkullFix datasets. The results show that latent diffusion models combined with vector quantized variational autoencoder outperform other generative augmentation strategies. Moreover, we show that the synthetically augmented network successfully reconstructs real clinical defects, without the need to acquire costly and time-consuming annotations. The findings of the work will lead to easier, faster, and less expensive modeling of personalized cranial implants. This is beneficial to numerous people suffering from cranial injuries. The work constitutes a considerable contribution to the field of artificial intelligence in the automatic modeling of personalized cranial implants.

16.
Front Microbiol ; 15: 1453870, 2024.
Article in English | MEDLINE | ID: mdl-39224212

ABSTRACT

The synthesis of pseudo-healthy images, involving the generation of healthy counterparts for pathological images, is crucial for data augmentation, clinical disease diagnosis, and understanding pathology-induced changes. Recently, Generative Adversarial Networks (GANs) have shown substantial promise in this domain. However, the heterogeneity of intracranial infection symptoms caused by various infections complicates the model's ability to accurately differentiate between pathological and healthy regions, leading to the loss of critical information in healthy areas and impairing the precise preservation of the subject's identity. Moreover, for images with extensive lesion areas, the pseudo-healthy images generated by these methods often lack distinct organ and tissue structures. To address these challenges, we propose a three-stage method (localization, inpainting, synthesis) that achieves nearly perfect preservation of the subject's identity through precise pseudo-healthy synthesis of the lesion region and its surroundings. The process begins with a Segmentor, which identifies the lesion areas and differentiates them from healthy regions. Subsequently, a Vague-Filler fills the lesion areas to construct a healthy outline, thereby preventing structural loss in cases of extensive lesions. Finally, leveraging this healthy outline, a Generative Adversarial Network integrated with a contextual residual attention module generates a more realistic and clearer image. Our method was validated through extensive experiments across different modalities within the BraTS2021 dataset, achieving a healthiness score of 0.957. The visual quality of the generated images markedly exceeded those produced by competing methods, with enhanced capabilities in repairing large lesion areas. Further testing on the COVID-19-20 dataset showed that our model could effectively partially reconstruct images of other organs.

17.
Heliyon ; 10(16): e35965, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39224347

ABSTRACT

With the development of automated malware toolkits, cybersecurity faces evolving threats. Although visualization-based malware analysis has proven to be an effective method, existing approaches struggle with challenging malware samples due to alterations in the texture features of binary images during the visualization preprocessing stage, resulting in poor performance. Furthermore, to enhance classification accuracy, existing methods sacrifice prediction time by designing deeper neural network architectures. This paper proposes PAFE, a lightweight and visualization-based rapid malware classification method. It addresses the issue of texture feature variations in preprocessing through pixel-filling techniques and applies data augmentation to overcome the challenges of class imbalance in small sample datasets. PAFE combines multi-scale feature fusion and a channel attention mechanism, enhancing feature expression through modular design. Extensive experimental results demonstrate that PAFE outperforms the current state-of-the-art methods in both efficiency and effectiveness for malware variant classification, achieving an accuracy rate of 99.25% with a prediction time of 10.04 ms.
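
The visualization step common to this family of methods maps a binary payload to a fixed-width grayscale image; PAFE's specific pixel-filling technique is not detailed in the abstract, so the sketch below shows one plausible form, padding the final row with a constant fill byte so texture rows stay aligned:

```python
import numpy as np

def bytes_to_image(payload: bytes, width: int = 16, fill: int = 0):
    """Turn a binary payload into a fixed-width grayscale image,
    padding the last row with `fill` so every row has `width` pixels.
    The fill value and width are illustrative assumptions."""
    arr = np.frombuffer(payload, dtype=np.uint8)
    pad = (-len(arr)) % width
    arr = np.concatenate([arr, np.full(pad, fill, dtype=np.uint8)])
    return arr.reshape(-1, width)

img = bytes_to_image(bytes(range(40)), width=16)
print(img.shape)
```

In practice the width is chosen per file-size bucket and the resulting image is fed to the CNN classifier; the padding policy is exactly what pixel-filling techniques aim to make consistent.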

18.
Neural Netw ; 180: 106665, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39241437

ABSTRACT

In brain-computer interfaces (BCIs), building accurate electroencephalogram (EEG) classifiers for specific mental tasks is critical for BCI performance. The classifiers are developed using machine learning (ML) and deep learning (DL) techniques, which require a large training dataset to build reliable and accurate models. However, collecting large enough EEG datasets is difficult due to intra-/inter-subject variabilities and experimental costs. This leads to the data scarcity problem, which causes overfitting to the training samples and reduces generalization performance. To solve the EEG data scarcity problem and improve the performance of EEG classifiers, we propose a novel EEG data augmentation (DA) framework using conditional generative adversarial networks (cGANs). An experimental study was conducted with two public EEG datasets of motor imagery (MI) tasks (BCI Competition IV IIa and III IVa) to validate the effectiveness of the proposed EEG DA method for EEG classifiers. To evaluate the proposed cGAN-based DA method, we tested eight EEG classifiers, including traditional ML and state-of-the-art DL models, together with three existing EEG DA methods. Experimental results showed that most DA methods, given a proper DA proportion in the training dataset, yielded higher classification performance than training without DA. Moreover, the proposed DA method showed superior classification performance improvement over the other DA methods. This shows that the proposed method is a promising EEG DA method for enhancing the performance of EEG classifiers in MI-based BCIs.

19.
Methods ; 231: 8-14, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39241919

ABSTRACT

Biomedical event causal relation extraction (BECRE), a subtask of biomedical information extraction, aims to extract event causal relation facts from unstructured biomedical texts and plays an essential role in many downstream tasks. Existing works have two main problems: i) shallow features are of limited help in establishing potential relationships between biomedical events; and ii) using traditional oversampling to address the data imbalance of BECRE tasks ignores the need for data diversity. This paper proposes a novel biomedical event causal relation extraction method that solves the above problems using deep knowledge fusion and Roberta-based data augmentation. To address the first problem, we fuse deep knowledge, including structural event representations and entity relation paths, to establish potential semantic connections between biomedical events. We use a graph convolutional network (GCN) and the predicated tensor model to acquire structural event representations, and entity relation paths are encoded based on external knowledge bases (GTD, CDR, CHR, GDA, and UMLS). We introduce a triplet attention mechanism to fuse the structural event representations with the entity relation path information. To address the second problem, this paper proposes a Roberta-based data augmentation method: some words of the biomedical text, excluding biomedical events, are masked proportionally and randomly, and the pre-trained Roberta model then generates data instances for the imbalanced BECRE dataset. Extensive experimental results on Hahn-Powell's and the BioCause datasets confirm that the proposed method achieves state-of-the-art performance compared to current advances.
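
The masking step of the Roberta-based augmentation can be sketched without a model: tokens are masked proportionally and at random while event triggers are protected, and a masked-language model would then fill the `<mask>` slots to produce new instances. The token/span values below are invented for illustration:

```python
import random

random.seed(7)
MASK = "<mask>"

def mask_non_event_tokens(tokens, event_spans, ratio=0.15):
    """Randomly mask a proportion of tokens while protecting the
    biomedical event triggers given as (start, end) index spans."""
    protected = set()
    for start, end in event_spans:
        protected.update(range(start, end))
    candidates = [i for i in range(len(tokens)) if i not in protected]
    k = max(1, int(len(candidates) * ratio))
    masked = set(random.sample(candidates, k))
    return [MASK if i in masked else t for i, t in enumerate(tokens)]

tokens = "IL-6 overexpression causes activation of STAT3 signaling in tumor cells".split()
augmented = mask_non_event_tokens(tokens, event_spans=[(1, 2), (3, 4)])
print(augmented)
```

Feeding such masked sequences to a pre-trained Roberta fill-mask head yields diverse paraphrases whose event structure, and hence causal label, is preserved.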

20.
BMC Med Imaging ; 24(1): 230, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39223507

ABSTRACT

Breast cancer is a leading cause of mortality among women globally, necessitating precise classification of breast ultrasound images for early diagnosis and treatment. Traditional methods using CNN architectures such as VGG, ResNet, and DenseNet, though somewhat effective, often struggle with class imbalances and subtle texture variations, leading to reduced accuracy for minority classes such as malignant tumors. To address these issues, we propose a methodology that leverages EfficientNet-B7, a scalable CNN architecture, combined with advanced data augmentation techniques to enhance minority class representation and improve model robustness. Our approach involves fine-tuning EfficientNet-B7 on the BUSI dataset, implementing RandomHorizontalFlip, RandomRotation, and ColorJitter to balance the dataset and improve model robustness. The training process includes early stopping to prevent overfitting and optimize performance metrics. Additionally, we integrate Explainable AI (XAI) techniques, such as Grad-CAM, to enhance the interpretability and transparency of the model's predictions, providing visual and quantitative insights into the features and regions of ultrasound images influencing classification outcomes. Our model achieves a classification accuracy of 99.14%, significantly outperforming existing CNN-based approaches in breast ultrasound image classification. The incorporation of XAI techniques enhances our understanding of the model's decision-making process, thereby increasing its reliability and facilitating clinical adoption. This comprehensive framework offers a robust and interpretable tool for the early detection and diagnosis of breast cancer, advancing the capabilities of automated diagnostic systems and supporting clinical decision-making processes.


Subject(s)
Breast Neoplasms, Ultrasonography, Mammary, Humans, Breast Neoplasms/diagnostic imaging, Female, Ultrasonography, Mammary/methods, Image Interpretation, Computer-Assisted/methods, Artificial Intelligence
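
The minority-class oversampling described in this abstract uses torchvision's RandomHorizontalFlip, RandomRotation, and ColorJitter; the numpy stand-in below mimics the same three operations (flip, rotation, brightness jitter) so the idea runs without a deep-learning stack. Parameter ranges are illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(3)

def augment_minority(image, n_copies=4):
    """Oversample one minority-class ultrasound image with random flips,
    90-degree rotations, and brightness jitter: a numpy stand-in for a
    RandomHorizontalFlip / RandomRotation / ColorJitter pipeline."""
    copies = []
    for _ in range(n_copies):
        out = image
        if rng.random() < 0.5:
            out = np.fliplr(out)                       # horizontal flip
        out = np.rot90(out, k=int(rng.integers(0, 4)))  # random rotation
        out = np.clip(out * rng.uniform(0.8, 1.2), 0.0, 1.0)  # brightness jitter
        copies.append(out)
    return copies

img = rng.random((64, 64))      # stand-in for a grayscale BUSI image
aug = augment_minority(img)
print(len(aug), aug[0].shape)
```

Generating several such copies per malignant image rebalances the class distribution before fine-tuning the classifier, which is the role augmentation plays in the pipeline above.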