Results 1 - 20 of 814
1.
Comput Biol Med ; 180: 108958, 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39094325

ABSTRACT

Hematoxylin and eosin (H&E) staining is a crucial technique for diagnosing glioma, allowing direct observation of tissue structures. However, the H&E staining workflow requires intricate processing, specialized laboratory infrastructure, and expert pathologists, rendering it expensive, labor-intensive, and time-consuming. In view of these considerations, we combine deep learning with hyperspectral imaging, aiming to accurately and rapidly convert hyperspectral images into virtual H&E-stained images. The method overcomes the limitations of H&E staining by capturing tissue information at different wavelengths, providing tissue composition information as comprehensive and detailed as realistic H&E staining. In comparison with various generator structures, the U-Net exhibits substantial overall advantages, as evidenced by a mean structural similarity index measure (SSIM) of 0.7731 and a peak signal-to-noise ratio (PSNR) of 23.3120, as well as the shortest training and inference times. A comprehensive software system for virtual H&E staining, which integrates CCD control, microscope control, and virtual H&E staining technology, is developed to facilitate fast intraoperative imaging, promote disease diagnosis, and accelerate the development of medical automation. The platform reconstructs large-scale virtual H&E-stained images of gliomas at a high speed of 3.81 mm²/s. This innovative approach paves the way for a novel, expedited route in histological staining.
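The generator comparison above is scored by SSIM and PSNR. As an illustrative aside (not the authors' code), PSNR is simply a log-scaled mean squared error between the real and virtual stain; a minimal numpy sketch with made-up pixel values:

```python
import numpy as np

def psnr(reference, generated, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference image and a
    generated (virtual-stain) image, computed from the mean squared error."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(generated, float)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return float(10.0 * np.log10(max_val ** 2 / mse))

# Hypothetical 8x8 grey patches: every pixel of `shifted` is off by 4 levels.
clean = np.full((8, 8), 128.0)
shifted = clean + 4.0
```

SSIM is more involved (local luminance, contrast, and structure statistics); in practice library implementations such as scikit-image's `structural_similarity` are typically used rather than a hand-rolled version.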

2.
Data Brief ; 55: 110671, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39049972

ABSTRACT

Batik holds profound cultural significance within Indonesia, serving as a tangible expression of the nation's rich heritage and intricate philosophical narratives. This paper introduces the Batik Nitik Sarimbit 120 dataset, originating from Yogyakarta, Indonesia, as a pivotal resource for researchers and enthusiasts alike. Comprising images of 60 Nitik patterns meticulously sourced from fabric samples, the dataset represents a curated selection of batik motifs emblematic of the region's artistic tradition, offering a collection of 120 motif pairs distributed across 60 distinct categories. By providing a comprehensive repository of batik motifs, the Batik Nitik Sarimbit 120 dataset facilitates the training and validation of machine learning algorithms, particularly generative methods. This enables researchers to explore and innovate in the realm of batik pattern generation, fostering new avenues for creativity and expression within this venerable art form. In essence, the dataset stands as a testament to the collaborative efforts of cultural institutions and academia in preserving and promoting Indonesia's rich batik heritage. Its accessibility and richness make it a valuable resource for scholars, artists, and enthusiasts seeking to delve deeper into the intricate world of Indonesian batik.

3.
Med Image Anal ; 97: 103267, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39053167

ABSTRACT

Pelvic fracture is a severe trauma with life-threatening implications. Surgical reduction is essential for restoring the anatomical structure and functional integrity of the pelvis, requiring accurate preoperative planning. However, the complexity of pelvic fractures and limited data availability necessitate labor-intensive manual corrections in a clinical setting. We describe in this paper a novel bidirectional framework for automatic pelvic fracture surgical planning based on fracture simulation and structure restoration. Our fracture simulation method accounts for patient-specific pelvic structures, bone density information, and the randomness of fractures, enabling the generation of various types of fracture cases from healthy pelvises. Based on these features and on adversarial learning, we develop a novel structure restoration network to predict the deformation mapping in CT images before and after a fracture for the precise structural reconstruction of any fracture. Furthermore, a self-supervised strategy based on pelvic anatomical symmetry priors is developed to optimize the details of the restored pelvic structure. Finally, the restored pelvis is used as a template to generate a surgical reduction plan in which the fragments are repositioned in an efficient jigsaw puzzle registration manner. Extensive experiments on simulated and clinical datasets, including scans with metal artifacts, show that our method achieves good accuracy and robustness: a mean SSIM of 90.7% for restorations, with translational errors of 2.88 mm and rotational errors of 3.18° for reductions in real datasets. Our method takes 52.9 s to complete the surgical planning in the phantom study, representing a significant acceleration compared to standard clinical workflows. Our method may facilitate effective surgical planning for pelvic fractures tailored to individual patients in clinical settings.

4.
Comput Biol Med ; 179: 108913, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39047508

ABSTRACT

Machine learning has been employed to recognize protein localization at the subcellular level, which greatly facilitates protein function studies, especially for multi-label proteins that localize to more than one organelle. However, existing works mostly study the qualitative classification of protein subcellular locations, ignoring the fraction of a multi-label protein residing in each location. In fact, about 50% of proteins are multi-label proteins, and ignoring this quantitative information severely restricts the understanding of their spatial distribution and functional mechanisms. One reason for the lack of quantitative study is the insufficiency of quantitative annotations. To address this data shortage, here we proposed a generative model, PLocGAN, which could generate cell images with conditional quantitative annotation of the fluorescence distribution. The model was a conditional generative adversarial network in which the condition learning utilized partial label learning to overcome the lack of training labels, allowing training with only qualitative labels. Meanwhile, it used contrastive learning to enhance the diversity of the generated images. We assessed PLocGAN on four pixel-fused synthetic datasets and one real dataset, and demonstrated that the model could generate images with good fidelity and diversity, outperforming existing state-of-the-art generative methods. To verify the utility of PLocGAN in the quantitative prediction of protein subcellular locations, we replaced the training images with generated quantitative images and built prediction models, and found that this had a boosting effect on quantitative estimation. This work demonstrates the effectiveness of deep generative models in bioimage analysis and provides a new solution for quantitative subcellular proteomics.

5.
Diagnostics (Basel) ; 14(13)2024 Jun 21.
Article in English | MEDLINE | ID: mdl-39001209

ABSTRACT

During neurosurgical procedures, the neuro-navigation system's accuracy is affected by the brain shift phenomenon. One popular strategy is to compensate for brain shift by registering intraoperative ultrasound (iUS) to pre-operative magnetic resonance (MR) scans. This requires a satisfactory multimodal image registration method, which is challenging due to the low image quality of ultrasound and the unpredictable nature of brain deformation during surgery. In this paper, we propose an automatic unsupervised end-to-end MR-iUS registration approach named the Dual Discriminator Bayesian Generative Adversarial Network (D2BGAN). The proposed network consists of two discriminators and a generator optimized by a Bayesian loss function to improve the generator's functionality, and we add a mutual information loss function to the discriminator for similarity measurement. Extensive validation was performed on the RESECT and BITE datasets, where the mean target registration error (mTRE) of MR-iUS registration using D2BGAN was 0.75 ± 0.3 mm. D2BGAN illustrated a clear advantage by achieving an 85% improvement in mTRE over the initial error. Moreover, the results confirmed that the proposed Bayesian loss function, rather than the typical loss function, improved the accuracy of MR-iUS registration by 23%. The improvement in registration accuracy was further supported by the preservation of the intensity and anatomical information of the input images.
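The mTRE reported above is the average Euclidean distance between corresponding landmarks after registration; a minimal numpy sketch with hypothetical landmark coordinates (illustrative, not the authors' evaluation code):

```python
import numpy as np

def mean_tre(target_landmarks, registered_landmarks):
    """Mean target registration error (mTRE): the average Euclidean distance
    (e.g. in mm) between corresponding anatomical landmarks after the MR
    volume has been registered to the iUS volume."""
    diff = np.asarray(target_landmarks, float) - np.asarray(registered_landmarks, float)
    return float(np.mean(np.linalg.norm(diff, axis=1)))

# Three hypothetical landmarks, each left with a (3, 4, 0) mm residual
# displacement after registration -> mTRE = 5 mm.
gt = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
reg = gt + np.array([3.0, 4.0, 0.0])
```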

6.
Front Hum Neurosci ; 18: 1430086, 2024.
Article in English | MEDLINE | ID: mdl-39010893

ABSTRACT

Background: Emerging brain-computer interface (BCI) technology holds promising potential to enhance the quality of life for individuals with disabilities. Nevertheless, the constrained accuracy of electroencephalography (EEG) signal classification poses numerous hurdles in real-world applications. Methods: In response to this predicament, we introduce a novel EEG signal classification model termed EEGGAN-Net, leveraging a data augmentation framework. By incorporating Conditional Generative Adversarial Network (CGAN) data augmentation, a cropped training strategy and a Squeeze-and-Excitation (SE) attention mechanism, EEGGAN-Net adeptly assimilates crucial features from the data, consequently enhancing classification efficacy across diverse BCI tasks. Results: The EEGGAN-Net model exhibits notable performance metrics on the BCI Competition IV-2a and IV-2b datasets. Specifically, it achieves a classification accuracy of 81.3% with a kappa value of 0.751 on the IV-2a dataset, and a classification accuracy of 90.3% with a kappa value of 0.79 on the IV-2b dataset. Remarkably, these results surpass those of four other CNN-based decoding models. Conclusions: In conclusion, the amalgamation of data augmentation and attention mechanisms proves instrumental in acquiring generalized features from EEG signals, ultimately elevating the overall proficiency of EEG signal classification.
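The kappa values quoted alongside raw accuracy above correct classification agreement for chance. As an illustrative sketch (not the authors' code), Cohen's kappa can be computed from true and predicted labels as follows:

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement between predicted and true labels,
    corrected for the agreement expected by chance."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    p_obs = float(np.mean(y_true == y_pred))                  # observed agreement
    p_exp = float(sum(np.mean(y_true == c) * np.mean(y_pred == c)
                      for c in classes))                      # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)
```

A kappa of 0 means no better than chance, and 1 means perfect agreement, which is why it is a more honest summary than accuracy on imbalanced motor-imagery classes.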

7.
Front Immunol ; 15: 1404640, 2024.
Article in English | MEDLINE | ID: mdl-39007128

ABSTRACT

Introduction: Deep learning (DL) models predicting biomarker expression in images of hematoxylin and eosin (H&E)-stained tissues can improve access to multi-marker immunophenotyping, crucial for therapeutic monitoring, biomarker discovery, and personalized treatment development. Conventionally, these models are trained on ground truth cell labels derived from IHC-stained tissue sections adjacent to H&E-stained ones, which might be less accurate than labels from the same section. Although many such DL models have been developed, the impact of ground truth cell label derivation methods on their performance has not been studied. Methodology: In this study, we assess the impact of cell label derivation on H&E model performance, with CD3+ T-cells in lung cancer tissues as a proof-of-concept. We compare two Pix2Pix generative adversarial network (P2P-GAN)-based virtual staining models: one trained with cell labels obtained from the same tissue section as the H&E-stained section (the 'same-section' model) and one trained on cell labels from an adjacent tissue section (the 'serial-section' model). Results: We show that the same-section model exhibited significantly improved prediction performance compared to the 'serial-section' model. Furthermore, the same-section model outperformed the serial-section model in stratifying lung cancer patients within a public lung cancer cohort based on survival outcomes, demonstrating its potential clinical utility. Discussion: Collectively, our findings suggest that employing ground truth cell labels obtained through the same-section approach boosts immunophenotyping DL solutions.


Subject(s)
Deep Learning , Immunophenotyping , Lung Neoplasms , Staining and Labeling , Humans , Lung Neoplasms/immunology , Lung Neoplasms/pathology , Staining and Labeling/methods , Biomarkers, Tumor/metabolism , Male , T-Lymphocytes/immunology , Female
8.
Med Phys ; 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39008794

ABSTRACT

BACKGROUND: Vessel-wall volume and localized three-dimensional ultrasound (3DUS) metrics are sensitive to the change of carotid atherosclerosis in response to medical/dietary interventions. Manual segmentation of the media-adventitia boundary (MAB) and lumen-intima boundary (LIB) required to obtain these metrics is time-consuming and prone to observer variability. Although supervised deep-learning segmentation models have been proposed, training of these models requires a sizeable manually segmented training set, making larger clinical studies prohibitive. PURPOSE: We aim to develop a method to optimize pre-trained segmentation models without requiring manual segmentation to supervise the fine-tuning process. METHODS: We developed an adversarial framework called the unsupervised shape-and-texture generative adversarial network (USTGAN) to fine-tune a convolutional neural network (CNN) pre-trained on a source dataset for accurate segmentation of a target dataset. The network integrates a novel texture-based discriminator with a shape-based discriminator, which together provide feedback for the CNN to segment the target images in a similar way as the source images. The texture-based discriminator increases the accuracy of the CNN in locating the artery, thereby lowering the number of failed segmentations. Failed segmentation was further reduced by a self-checking mechanism to flag longitudinal discontinuity of the artery and by self-correction strategies involving surface interpolation followed by a case-specific tuning of the CNN. The U-Net was pre-trained by the source dataset involving 224 3DUS volumes with 136, 44, and 44 volumes in the training, validation and testing sets. The training of USTGAN involved the same training group of 136 volumes in the source dataset and 533 volumes in the target dataset. No segmented boundaries for the target cohort were available for training USTGAN. 
The validation and testing of USTGAN involved 118 and 104 volumes from the target cohort, respectively. Segmentation accuracy was quantified by the Dice similarity coefficient (DSC) and the incorrect localization rate (ILR). Tukey's Honestly Significant Difference multiple comparison test was employed to quantify the difference in DSCs between models and settings, where p ≤ 0.05 was considered statistically significant. RESULTS: USTGAN attained a DSC of 85.7 ± 13.0% for the LIB and 86.2 ± 10.6% for the MAB, improving on the baseline performance of 74.6 ± 30.7% for the LIB (p < 10⁻¹²) and 75.7 ± 28.9% for the MAB (p < 10⁻¹²). Our approach outperformed six state-of-the-art domain-adaptation models (MAB: p ≤ 3.63 × 10⁻⁷, LIB: p ≤ 9.34 × 10⁻⁸). The proposed USTGAN also had the lowest ILR among the methods compared (LIB: 2.5%, MAB: 1.7%). CONCLUSION: Our framework improves segmentation generalizability, thereby facilitating efficient carotid disease monitoring in multicenter trials and in clinics with less expertise in 3DUS imaging.
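For readers unfamiliar with the DSC used above: it measures the overlap between two binary masks, here a predicted boundary region versus a manual one. A minimal numpy sketch (illustrative only, not the paper's implementation):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks
    (e.g. predicted vs. manual MAB or LIB regions), in [0, 1]."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0                   # both masks empty: perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / total)
```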

9.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38980369

ABSTRACT

Recent studies have extensively used deep learning algorithms to analyze gene expression to predict disease diagnosis, treatment effectiveness, and survival outcomes. Survival analysis studies of diseases with high mortality rates, such as cancer, are indispensable. However, deep learning models are plagued by overfitting owing to the limited sample size relative to the large number of genes. Consequently, the latest style-transfer deep generative models have been implemented to generate gene expression data. However, these models are limited in their applicability for clinical purposes because they generate only transcriptomic data. Therefore, this study proposes ctGAN, which enables the combined transformation of gene expression and survival data using a generative adversarial network (GAN). ctGAN improves survival analysis by augmenting data through style transformations between breast cancer and 11 other cancer types. We evaluated concordance index (C-index) enhancements compared with previous models to demonstrate its superiority. Performance improvements were observed in nine of the 11 cancer types. Moreover, ctGAN outperformed previous models in seven of the 11 cancer types, with colon adenocarcinoma (COAD) exhibiting the most significant improvement (median C-index increase of ~15.70%). Furthermore, integrating the generated COAD data improved the log-rank p-value (0.041) compared with using only the real COAD data (p-value = 0.797). Based on the data distribution, we demonstrated that the model generates highly plausible data. In clustering evaluation, ctGAN exhibited the highest performance in most cases (89.62%). These findings suggest that ctGAN can be meaningfully utilized to predict disease progression and select personalized treatments in the medical field.
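The C-index above scores how well predicted risks order observed survival times: a pair is concordant when the subject who died earlier was assigned the higher risk. A simplified sketch assuming no tied event times (illustrative; production implementations such as lifelines also handle time ties):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index over comparable pairs. A pair (i, j) is comparable
    when subject i had an observed event (events[i] truthy) before time j;
    censored subjects can only serve as the later member of a pair."""
    concordant = tied = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1          # higher risk died earlier: correct
                elif risk_scores[i] == risk_scores[j]:
                    tied += 1                # tied risks count half
    return (concordant + 0.5 * tied) / comparable
```

A value of 0.5 corresponds to random ordering and 1.0 to a perfect risk ranking, which is why gains of ~15.70% in median C-index are substantial.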


Subject(s)
Deep Learning , Humans , Survival Analysis , Algorithms , Neoplasms/genetics , Neoplasms/mortality , Gene Expression Profiling/methods , Neural Networks, Computer , Computational Biology/methods , Breast Neoplasms/genetics , Breast Neoplasms/mortality , Female , Gene Expression Regulation, Neoplastic
10.
Sensors (Basel) ; 24(14)2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39065962

ABSTRACT

Communication signal reconstruction technology represents a critical area of research within communication countermeasures and signal processing. Given the intricacy and suboptimal reconstruction performance of traditional OFDM signal reconstruction methods, a dual-discriminator CGAN model incorporating LSTM and Transformer is proposed. When reconstructing OFDM signals with a traditional CNN, it is challenging to extract intricate temporal information. Therefore, a BiLSTM network is incorporated into the first discriminator to capture timing details of the IQ (in-phase and quadrature) sequence and constellation information of the AP (amplitude and phase) sequence. After fixed positional coding is added, these data are fed into the core network, built on the Transformer encoder, for further learning. Simultaneously, to capture the correlation between the two IQ signals, the ViT (Vision Transformer) concept is incorporated into the second discriminator. The IQ sequence is treated as a single-channel two-dimensional image and segmented via Conv2d into pixel blocks containing the IQ sequence. Fixed positional coding is added, and the result is sent to the Transformer core network for learning. The generator network transforms input noise into a dimensional space aligned with the IQ signal and embedding-vector dimensions, and appends identical positional encoding to the IQ sequence before sending it to the Transformer network. The experimental results demonstrate that, under commonly used OFDM modulation formats such as BPSK, QPSK, and 16QAM, the time-series waveform, constellation diagram, and spectral diagram are all reconstructed with high quality. Our algorithm achieves improved signal quality while managing complexity compared to other reconstruction methods.
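The second discriminator's ViT-style treatment of the IQ sequence as a single-channel image can be pictured with plain numpy. This is an illustrative sketch of non-overlapping patch extraction, not the paper's Conv2d implementation, and the 2 × N layout is an assumption about how the I and Q rows are stacked:

```python
import numpy as np

def iq_to_patches(iq, patch_len):
    """Treat a 2 x N IQ sequence (row 0 = I samples, row 1 = Q samples) as a
    single-channel 'image' and cut it into non-overlapping 2 x patch_len
    blocks, each flattened into one token vector, as ViT-style patch
    embedding would before positional coding is added."""
    iq = np.asarray(iq, float)
    assert iq.shape[0] == 2 and iq.shape[1] % patch_len == 0
    n_patches = iq.shape[1] // patch_len
    # Keep each patch's I and Q samples together in a single token.
    return (iq.reshape(2, n_patches, patch_len)
              .transpose(1, 0, 2)
              .reshape(n_patches, 2 * patch_len))
```

Each row of the result is one "pixel block" token; a learned linear projection and the fixed positional code would be applied on top in the actual discriminator.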

11.
Sensors (Basel) ; 24(14)2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39066083

ABSTRACT

Infrared images hold significant value in applications such as remote sensing and fire safety. However, infrared detectors often face the problem of high hardware costs, which limits their widespread use. Advancements in deep learning have spurred innovative approaches to image super-resolution (SR), but comparatively few efforts have been dedicated to the exploration of infrared images. To address this, we design the Residual Swin Transformer and Average Pooling Block (RSTAB) and propose the SwinAIR, which can effectively extract and fuse the diverse frequency features in infrared images and achieve superior SR reconstruction performance. By further integrating SwinAIR with U-Net, we propose the SwinAIR-GAN for real infrared image SR reconstruction. SwinAIR-GAN extends the degradation space to better simulate the degradation process of real infrared images. Additionally, it incorporates spectral normalization, dropout, and artifact discrimination loss to reduce the potential image artifacts. Qualitative and quantitative evaluations on various datasets confirm the effectiveness of our proposed method in reconstructing realistic textures and details of infrared images.

12.
Phys Med Biol ; 69(14)2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38979700

ABSTRACT

Objective. In helical tomotherapy, image-guided radiotherapy employs megavoltage computed tomography (MVCT) for precise targeting. However, the high voltage of megavoltage radiation introduces substantial noise, significantly compromising MVCT image clarity. This study aims to enhance MVCT image quality using a deep learning-based denoising method. Approach. We propose an unpaired MVCT denoising network using a coupled generative adversarial network framework (DeCoGAN). Our approach assumes that a universal latent code within a shared latent space can reconstruct any given pair of images. By employing an encoder, we enforce this shared-latent-space constraint, facilitating the conversion of low-quality (noisy) MVCT images into high-quality (denoised) counterparts. The network learns the joint distribution of images from both domains by leveraging samples from their respective marginal distributions, enhanced by adversarial training for effective denoising. Main Results. Compared to an analytical algorithm (BM3D) and three deep learning-based methods (RED-CNN, WGAN-VGG and CycleGAN), the proposed method excels in preserving image details and enhancing human visual perception by removing most noise and retaining structural features. Quantitative analysis demonstrates that our method achieves the highest peak signal-to-noise ratio and structural similarity index measure values, indicating superior denoising performance. Significance. The proposed DeCoGAN method shows remarkable MVCT denoising performance, making it a promising tool in the field of radiation therapy.


Subject(s)
Image Processing, Computer-Assisted , Signal-To-Noise Ratio , Image Processing, Computer-Assisted/methods , Humans , Tomography, X-Ray Computed , Deep Learning , Radiotherapy, Image-Guided/methods , Neural Networks, Computer
13.
Network ; : 1-25, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38953316

ABSTRACT

Groundnut is a noteworthy oilseed crop. Leaf diseases are among the most important causes of low yield and impaired groundnut plant growth, directly diminishing both yield and quality. Therefore, an Optimized Wasserstein Deep Convolutional Generative Adversarial Network-fostered Groundnut Leaf Disease Identification System (GLDI-WDCGAN-AOA) is proposed in this paper. The pre-processed output is fed to Hesitant Fuzzy Linguistic Bi-objective Clustering (HFL-BOC) for segmentation. Using the Wasserstein Deep Convolutional Generative Adversarial Network (WDCGAN), the input leaf images are classified into healthy leaf, early leaf spot, late leaf spot, nutrition deficiency, and rust. Finally, the weight parameters of WDCGAN are optimized by the Aquila Optimization Algorithm (AOA) to achieve high accuracy. The proposed GLDI-WDCGAN-AOA approach provides 23.51%, 22.01%, and 18.65% higher accuracy and 24.78%, 23.24%, and 28.98% lower error rates compared with existing methods: real-time automated identification and categorization of groundnut leaf disease utilizing hybrid machine learning (GLDI-DNN), online identification of peanut leaf diseases utilizing data balancing with deep transfer learning (GLDI-LWCNN), and a deep-learning-driven method based on progressive scaling for the precise categorization of groundnut leaf infections (GLDI-CNN), respectively.

14.
Technol Health Care ; 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38968065

ABSTRACT

BACKGROUND: Medical imaging techniques have improved to the point where security has become a basic requirement for all applications, to ensure data security and safe transmission over the internet. Clinical images hold personal and sensitive patient data, and their disclosure infringes on patients' right to privacy and carries legal ramifications for hospitals. OBJECTIVE: In this research, a novel deep learning-based key generation network (Deep-KEDI) is designed to produce the secure key used for encrypting and decrypting medical images. METHODS: Initially, medical images are pre-processed by adding speckle noise using the discrete ripplet transform before encryption; the noise is removed after decryption for additional security. In the Deep-KEDI model, the zigzag generative adversarial network (ZZ-GAN) is used as the learning network to generate the secret key. RESULTS: The proposed ZZ-GAN is used for secure encryption by generating three different zigzag patterns (vertical, horizontal, diagonal) of encrypted images with its key. The zigzag cipher uses an XOR operation in both encryption and decryption. Encrypting the original image requires a secret key generated during encryption. After identification, the encrypted image is decrypted using the generated key to reverse the encryption process. Finally, speckle noise is removed from the decrypted image to reconstruct the original image. CONCLUSION: According to the experiments, the Deep-KEDI model generates secret keys with an information entropy of 7.45, making it particularly suitable for securing medical images.
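The XOR step and the entropy figure quoted above can be illustrated in a few lines. This is a hedged sketch of the generic primitives only; the actual ZZ-GAN key generation and zigzag traversal patterns are not reproduced here:

```python
import numpy as np

def xor_cipher(data, key):
    """XOR each byte with the (repeated) key byte. Because XOR is its own
    inverse, applying the same function twice restores the original, so one
    routine serves for both encryption and decryption."""
    buf = np.frombuffer(bytes(data), dtype=np.uint8)
    k = np.resize(np.frombuffer(bytes(key), dtype=np.uint8), buf.shape)
    return np.bitwise_xor(buf, k).tobytes()

def shannon_entropy(data):
    """Shannon entropy in bits per byte; 8.0 is the ideal for ciphertext,
    which is the sense in which the reported 7.45 is 'high'."""
    counts = np.bincount(np.frombuffer(bytes(data), dtype=np.uint8), minlength=256)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())
```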

15.
Comput Biol Med ; 180: 108952, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39084049

ABSTRACT

Despite the growing adoption of wearable photoplethysmography (PPG) devices in personal health management, their measurement accuracy remains limited due to susceptibility to noise. This paper proposes a novel signal completion technique using generative adversarial networks that ensures both global and local consistency. Our approach innovatively addresses both short- and long-term PPG variations to restore waveforms while maintaining waveform consistency within and between pulses. We evaluated our model by removing up to 50 % of segments from segmented PPG waveforms and comparing the original and reconstructed waveforms, including systolic peak information. The results demonstrate that our method accurately reconstructs waveforms with high fidelity, producing natural and seamless transitions without discontinuities at reconstructed boundaries. Additionally, the reconstructed waveforms preserve typical PPG shapes with minimal distortion, underscoring the effectiveness and novelty of our technique.

16.
Genome Biol ; 25(1): 198, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39075536

ABSTRACT

Single-cell multi-omics data reveal complex cellular states, providing significant insights into cellular dynamics and disease. Yet, integration of multi-omics data presents challenges. Some modalities have not reached the robustness or clarity of established transcriptomics. Coupled with data scarcity for less established modalities and integration intricacies, these challenges limit our ability to maximize single-cell omics benefits. We introduce scCross, a tool leveraging variational autoencoders, generative adversarial networks, and the mutual nearest neighbors (MNN) technique for modality alignment. By enabling single-cell cross-modal data generation, multi-omics data simulation, and in silico cellular perturbations, scCross enhances the utility of single-cell multi-omics studies.


Subject(s)
Single-Cell Analysis , Single-Cell Analysis/methods , Humans , Computer Simulation , Genomics/methods , Software , Computational Biology/methods , Multiomics
17.
Photochem Photobiol ; 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39080818

ABSTRACT

In oncology, melanoma is a serious concern, often arising from DNA changes caused mainly by ultraviolet radiation. This cancer is known for its aggressive growth, highlighting the necessity of early detection. Our research introduces a novel deep learning framework for melanoma classification, trained and validated using the extensive SIIM-ISIC Melanoma Classification Challenge (ISIC-2020) dataset. The framework features three dilated convolution layers that extract critical feature vectors for classification. A key aspect of our model is the incorporation of the Off-policy Proximal Policy Optimization (Off-policy PPO) algorithm, which effectively handles data imbalance in the training set by rewarding the accurate classification of underrepresented samples. In this framework, the model is visualized as an agent making a series of decisions, where each sample represents a distinct state. Additionally, a Generative Adversarial Network (GAN) augments the training data to improve generalizability, paired with a new regularization technique that stabilizes GAN training and prevents mode collapse. The model achieved an F-measure of 91.836% and a geometric mean of 91.920%, surpassing existing models and indicating practical utility in clinical environments. These results demonstrate its potential for enhancing early melanoma detection and informing more accurate treatment approaches, marking a significant advance in combating this aggressive cancer.
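The two headline metrics above are both derived from the binary confusion matrix and are preferred over plain accuracy on imbalanced data such as melanoma screening. An illustrative helper with made-up counts (not the authors' evaluation code):

```python
def f_measure_and_gmean(tp, fp, tn, fn):
    """F-measure (harmonic mean of precision and recall) and geometric mean
    of sensitivity and specificity, from binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    gmean = (recall * specificity) ** 0.5
    return f1, gmean

# Hypothetical imbalanced split: 10 melanomas among 102 lesions.
f1, gmean = f_measure_and_gmean(tp=8, fp=2, tn=90, fn=2)
```

Unlike accuracy, both scores collapse toward zero if the minority (melanoma) class is systematically missed, which is what the Off-policy PPO reward is designed to prevent.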

18.
Electromagn Biol Med ; : 1-15, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39081005

ABSTRACT

Efficient and accurate classification of brain tumor categories remains a critical challenge in medical imaging. While existing techniques have made strides, their reliance on generic features often leads to suboptimal results. To overcome these issues, a Multimodal Contrastive Domain Sharing Generative Adversarial Network for Improved Brain Tumor Classification Based on Efficient Invariant Feature Centric Growth Analysis (MCDS-GNN-IBTC-CGA) is proposed in this manuscript. Here, the input images are amassed from a brain tumor dataset. The input images are then preprocessed using a Range-Doppler Matched Filter (RDMF) to improve image quality. Ternary Pattern and Discrete Wavelet Transforms (TPDWT) are then employed for feature extraction, focusing on white and gray mass, edge correlation, and depth features. The proposed method leverages the Multimodal Contrastive Domain Sharing Generative Adversarial Network (MCDS-GNN) to categorize brain tumor images into glioma, meningioma, and pituitary tumors. Finally, the Coati Optimization Algorithm (COA) optimizes MCDS-GNN's weight parameters. The proposed MCDS-GNN-IBTC-CGA is empirically evaluated using accuracy, specificity, sensitivity, precision, F1-score, and mean square error (MSE). MCDS-GNN-IBTC-CGA attains 12.75%, 11.39%, 13.35%, 11.42%, and 12.98% greater accuracy compared with existing state-of-the-art techniques: MRI brain tumor categorization utilizing parallel deep convolutional neural networks (PDCNN-BTC), attention-guided convolutional neural network for the categorization of brain tumors (AGCNN-BTC), intelligent driven deep residual learning for the categorization of brain tumors (DCRN-BTC), fully convolutional neural networks for the classification of brain tumors (FCNN-BTC), and Convolutional Neural Network and Multi-Layer Perceptron based brain tumor classification (CNN-MLP-BTC), respectively.


The proposed MCDS-GNN-IBTC-CGA method starts by cleaning brain tumor images with RDMF and extracting features using TPDWT, focusing on color and texture. Subsequently, the MCDS-GNN artificial intelligence system categorizes tumors into types like Glioma and Meningioma. To enhance accuracy, COA fine-tunes the MCDS-GNN parameters. Ultimately, this approach aids in more effective diagnosis and treatment of brain tumors.

19.
Sci Total Environ ; 947: 174469, 2024 Oct 15.
Article in English | MEDLINE | ID: mdl-38972419

ABSTRACT

Understanding the transformation process of dissolved organic matter (DOM) in the sewer is imperative for comprehending material circulation and energy flow within the sewer. Machine learning (ML) models provide a feasible way to comprehend and simulate the DOM transformation process, but their accuracy is limited by data availability. In this study, a novel framework integrating a generative adversarial network algorithm with machine learning models (GAN-ML) was established to overcome the drawbacks caused by data restriction in simulating the DOM transformation process, with the humification index (HIX) selected as the output variable to evaluate model performance. Results indicate that the virtual dataset generated by the GAN algorithm could generally enhance the simulation performance of regression, deep learning, and ensemble models for the DOM transformation process. The highest prediction accuracy on HIX (R² of 0.5389 and RMSE of 0.0273) was achieved by the adaptive boosting model, an ensemble model trained on a virtual dataset of 1000 samples. Interpretability analysis revealed that dissolved oxygen (DO) and pH emerge as critical factors warranting attention in the future development of management strategies to regulate the DOM transformation process in sewers. The integrated framework offers a potential approach for the comprehensive understanding and high-precision simulation of the DOM transformation process, paving the way for advancing sewer management strategies under data restriction.
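The R² and RMSE scores used to rank the models above are standard regression metrics; a minimal numpy sketch with hypothetical values, not the study's data:

```python
import numpy as np

def rmse_and_r2(y_true, y_pred):
    """Root-mean-square error and coefficient of determination (R^2),
    the two metrics used to score HIX predictions."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    resid = y_true - y_pred
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    ss_res = float(np.sum(resid ** 2))                       # unexplained variance
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))    # total variance
    return rmse, 1.0 - ss_res / ss_tot
```

R² compares the model against always predicting the mean, so an R² of 0.5389 means the GAN-augmented model explains roughly half of the HIX variance, while RMSE reports the typical error in HIX units.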

20.
Magn Reson Imaging ; 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39033886

ABSTRACT

OBJECTIVES: This study aims to generate post-contrast MR images, reducing exposure to gadolinium-based contrast agents (GBCAs) for brainstem glioma (BSG) detection, while simultaneously delineating the BSG lesion and providing high-resolution contrast information. METHODS: A retrospective cohort of 30 patients diagnosed with brainstem glioma was included. Multi-contrast images, including pre-contrast T1 weighted (pre-T1w), T2 weighted (T2w), arterial spin labeling (ASL) and post-contrast T1w images, were collected. A multi-task generative model was developed to synthesize post-contrast T1w images and simultaneously segment BSG masks from the multi-contrast inputs. Performance evaluation was conducted using peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mean absolute error (MAE) metrics. A perceptual study was also undertaken to assess diagnostic quality. RESULTS: The proposed model achieved SSIM of 0.86 ± 0.04, PSNR of 26.33 ± 0.05 and MAE of 57.20 ± 20.50 for post-contrast T1w image synthesis. Automated delineation of the BSG lesions achieved a Dice similarity coefficient (DSC) score of 0.88 ± 0.27. CONCLUSIONS: The proposed model can synthesize high-quality post-contrast T1w images and accurately segment the BSG region, yielding satisfactory DSC scores. CLINICAL RELEVANCE STATEMENT: The synthesized post-contrast MR image presented in this study has the potential to reduce the usage of gadolinium-based contrast agents, which may pose risks to patients. Moreover, the automated segmentation method proposed in this paper aids radiologists in accurately identifying the brainstem glioma lesion, facilitating the diagnostic process.
