Results 1 - 20 of 694
1.
Proc Natl Acad Sci U S A ; 120(45): e2308698120, 2023 Nov 07.
Article in English | MEDLINE | ID: mdl-37922326

ABSTRACT

Block polymers are an attractive platform for uncovering the factors that give rise to self-assembly in soft matter owing to their relatively simple thermodynamic description, as captured in self-consistent field theory (SCFT). SCFT historically has found great success explaining experimental data, allowing one to construct phase diagrams from a set of candidate phases, and there is now strong interest in deploying SCFT as a screening tool to guide experimental design. However, using SCFT for phase discovery leads to a conundrum: How does one discover a new morphology if the set of candidate phases needs to be specified in advance? This long-standing challenge was surmounted by training a deep convolutional generative adversarial network (GAN) with trajectories from converged SCFT solutions, and then deploying the GAN to generate input fields for subsequent SCFT calculations. The power of this approach is demonstrated for network phase formation in neat diblock copolymer melts via SCFT. A training set of only five networks produced 349 candidate phases spanning known and previously unexplored morphologies, including a chiral network. This computational pipeline, constructed here entirely from open-source codes, should find widespread application in block polymer phase discovery and other forms of soft matter.

2.
Brief Bioinform ; 24(6)2023 09 22.
Article in English | MEDLINE | ID: mdl-37756592

ABSTRACT

The prediction of prognostic outcome is critical for the development of efficient cancer therapeutics and potential personalized medicine. However, due to the heterogeneity and diversity of multimodal cancer data, data integration and feature selection remain a challenge for prognostic outcome prediction. We propose CSAM-GAN, a deep learning method built on a generative adversarial network with sequential channel-spatial attention modules, as a multimodal data integration and feature selection approach for prognostic stratification tasks in cancer. Sequential channel-spatial attention modules equipped with an encoder-decoder are applied to the input features of the multimodal data to accurately refine the selected features. A discriminator network is introduced so that the generator and discriminator learn in an adversarial manner and accurately describe the complex heterogeneous information of the multimodal data. We conducted extensive experiments with various feature selection and classification methods and confirmed that CSAM-GAN with a multilayer deep neural network (DNN) classifier outperformed these baseline methods on two multimodal datasets comprising miRNA expression, mRNA expression and histopathological image data: lower-grade glioma and kidney renal clear cell carcinoma. CSAM-GAN with the multilayer DNN classifier bridges the gap between heterogeneous multimodal data and prognostic outcome prediction.
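The abstract does not detail the attention mechanism itself; as a rough illustration only, a squeeze-and-excitation-style channel gate, a common building block for channel-spatial attention modules, can be sketched as follows (all names and weight shapes are hypothetical, not the authors' implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation-style channel attention.

    x : (C, H, W) feature map; w1, w2 : bottleneck MLP weights.
    Returns the reweighted feature map and the per-channel gates.
    """
    squeeze = x.mean(axis=(1, 2))            # (C,) global average pool
    hidden = np.maximum(0.0, w1 @ squeeze)   # ReLU bottleneck
    gates = sigmoid(w2 @ hidden)             # (C,) gates in (0, 1)
    return x * gates[:, None, None], gates

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
x = rng.normal(size=(C, H, W))
w1 = rng.normal(size=(C // 2, C))            # squeeze to C/2 channels
w2 = rng.normal(size=(C, C // 2))            # expand back to C channels
refined, gates = channel_attention(x, w1, w2)
```

A spatial attention stage would follow the same pattern with pooling over the channel axis instead.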


Subject(s)
Carcinoma, Renal Cell , Glioma , Kidney Neoplasms , MicroRNAs , Humans , Prognosis
3.
Brief Bioinform ; 24(6)2023 09 22.
Article in English | MEDLINE | ID: mdl-37903416

ABSTRACT

The emergence of single-cell RNA sequencing (scRNA-seq) technology has revolutionized the identification of cell types and the study of cellular states at the single-cell level. Despite its significant potential, scRNA-seq data analysis is plagued by the issue of missing values. Many existing imputation methods rely on simplistic data distribution assumptions while ignoring the intrinsic gene expression distribution specific to cells. This work presents a novel deep-learning model, named scMultiGAN, for scRNA-seq imputation, which utilizes multiple collaborative generative adversarial networks (GANs). Unlike traditional GAN-based imputation methods that generate missing values from random noise, scMultiGAN employs a two-stage training process and utilizes multiple GANs to achieve cell-specific imputation. Experimental results show the efficacy of scMultiGAN in imputation accuracy, cell clustering, differential gene expression analysis and trajectory analysis, significantly outperforming existing state-of-the-art techniques. Additionally, scMultiGAN is scalable to large scRNA-seq datasets and performs consistently well across sequencing platforms. The scMultiGAN code is freely available at https://github.com/Galaxy8172/scMultiGAN.
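The cell-specific generation procedure is described in the paper itself; the final combination step shared by most GAN-based imputers, keeping observed entries verbatim and filling only the dropouts with generated values, can be sketched as (variable names are illustrative):

```python
import numpy as np

def impute(expr, mask, generated):
    """Combine observed counts with generator output.

    expr      : (cells, genes) matrix with zeros at dropouts
    mask      : 1 where a value was observed, 0 at dropouts
    generated : generator output of the same shape
    Observed entries are kept verbatim; only dropouts are filled.
    """
    return mask * expr + (1 - mask) * generated

expr = np.array([[5.0, 0.0], [0.0, 2.0]])
mask = np.array([[1, 0], [0, 1]])
generated = np.array([[4.8, 3.1], [1.9, 2.2]])
filled = impute(expr, mask, generated)
```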


Subject(s)
Single-Cell Analysis , Transcriptome , Single-Cell Analysis/methods , Cluster Analysis , Exome Sequencing , Data Analysis , Sequence Analysis, RNA , Gene Expression Profiling
4.
Brief Bioinform ; 24(2)2023 03 19.
Article in English | MEDLINE | ID: mdl-36733262

ABSTRACT

Single-cell RNA sequencing (scRNA-seq) data typically contain a large number of missing values, which often results in the loss of critical gene signaling information and seriously limits downstream analysis. Deep learning-based imputation methods can often handle scRNA-seq data better than shallow ones, but most of them do not consider the inherent relations between genes, even though the expression of a gene is often regulated by other genes. Therefore, it is essential to impute scRNA-seq data by considering the regional gene-to-gene relations. We propose a novel model (named scGGAN) to impute scRNA-seq data that learns the gene-to-gene relations by Graph Convolutional Networks (GCN) and the global scRNA-seq data distribution by Generative Adversarial Networks (GAN). scGGAN first leverages single-cell and bulk genomics data to explore the inherent relations between genes and builds a more compact gene relation network to jointly capture the homogeneous and heterogeneous information. Then, it constructs a GCN-based GAN model to integrate the scRNA-seq data, gene sequencing data and gene relation network for generating scRNA-seq data, and trains the model through adversarial learning. Finally, it utilizes data generated by the trained GCN-based GAN model to impute the scRNA-seq data. Experiments on simulated and real scRNA-seq datasets show that scGGAN can effectively identify dropout events, recover biologically meaningful expression values, determine cell states and types, and improve differential expression and temporal dynamics analyses. Ablation experiments confirm that both the gene relation network and the gene sequence data aid the imputation of scRNA-seq data.
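The abstract does not give scGGAN's exact propagation rule; assuming the standard GCN layer of Kipf and Welling, one graph-convolution step over a gene relation network looks like this (toy graph and weights for illustration):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: relu(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# toy gene-relation graph: 3 genes, genes 0 and 1 related, gene 2 isolated
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
H = np.eye(3)                                  # one-hot gene features
W = np.ones((3, 2))                            # stand-in for learned weights
out = gcn_layer(A, H, W)
```

The isolated gene only aggregates its own features through the self-loop, which is why neighborhood structure matters for imputation.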


Subject(s)
Single-Cell Gene Expression Analysis , Software , Sequence Analysis, RNA/methods , Single-Cell Analysis/methods , Genomics , Gene Expression Profiling
5.
BMC Genomics ; 25(1): 411, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724911

ABSTRACT

BACKGROUND: In recent years, there has been a growing interest in utilizing computational approaches to predict drug-target binding affinity, aiming to expedite the early drug discovery process. To address the limitations of experimental methods, such as cost and time, several machine learning-based techniques have been developed. However, these methods encounter certain challenges, including the limited availability of training data, reliance on human intervention for feature selection and engineering, and a lack of validation approaches for robust evaluation in real-life applications. RESULTS: To mitigate these limitations, in this study, we propose a method for drug-target binding affinity prediction based on deep convolutional generative adversarial networks. Additionally, we conducted a series of validation experiments and implemented adversarial control experiments using straw models. These experiments serve to demonstrate the robustness and efficacy of our predictive models. We conducted a comprehensive evaluation of our method by comparing it to baselines and state-of-the-art methods, using two recently updated datasets, BindingDB and PDBbind. Our findings indicate that our method outperforms the alternative methods on three performance measures under warm-start data-splitting settings. Moreover, under physicochemical-based cold-start data-splitting settings, our method demonstrates superior predictive performance, particularly in terms of the concordance index. CONCLUSION: The results of our study affirm the practical value of our method and its superiority over alternative approaches in predicting drug-target binding affinity across multiple validation sets. This highlights the potential of our approach in accelerating drug repurposing efforts, facilitating novel drug discovery, and ultimately enhancing disease treatment.
The data and source code for this study were deposited in the GitHub repository https://github.com/mojtabaze7/DCGAN-DTA. Furthermore, the web server for our method is accessible at https://dcgan.shinyapps.io/bindingaffinity/.
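Among the reported performance measures, the concordance index has a simple pairwise definition that can be computed directly. A minimal sketch (not the authors' implementation): the fraction of comparable pairs that the predictions order correctly, with tied predictions counting half.

```python
def concordance_index(y_true, y_pred):
    """Fraction of comparable pairs (different true affinity) that the
    predictions order correctly; ties in prediction count as 0.5."""
    num, den = 0.0, 0
    n = len(y_true)
    for i in range(n):
        for j in range(i + 1, n):
            if y_true[i] == y_true[j]:
                continue                      # not a comparable pair
            den += 1
            diff = (y_pred[i] - y_pred[j]) * (y_true[i] - y_true[j])
            if diff > 0:
                num += 1.0                    # correctly ordered pair
            elif diff == 0:
                num += 0.5                    # tied prediction
    return num / den

# perfectly ordered predictions give CI = 1.0
ci = concordance_index([5.1, 6.4, 7.2], [0.2, 0.5, 0.9])
```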


Subject(s)
Drug Discovery , Drug Discovery/methods , Computational Biology/methods , Humans , Neural Networks, Computer , Protein Binding , Machine Learning
6.
Am J Transplant ; 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38901561

ABSTRACT

Generative artificial intelligence (AI), a subset of machine learning that creates new content based on training data, has witnessed tremendous advances in recent years. Practical applications have been identified in health care in general, and there is significant opportunity in transplant medicine for generative AI to simplify tasks in research, medical education, and clinical practice. In addition, patients stand to benefit from patient education that is more readily provided by generative AI applications. This review aims to catalyze the development and adoption of generative AI in transplantation by introducing basic AI and generative AI concepts to the transplant clinician and summarizing its current and potential applications within the field. We provide an overview of applications for the clinician, researcher, educator, and patient. We also highlight the challenges involved in bringing these applications to the bedside and the need for ongoing refinement of generative AI applications to sustainably augment the transplantation field.

7.
Hum Brain Mapp ; 45(9): e26721, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38899549

ABSTRACT

With the rise of open data, identifiability of individuals based on 3D renderings obtained from routine structural magnetic resonance imaging (MRI) scans of the head has become a growing privacy concern. To protect subject privacy, several algorithms have been developed to de-identify imaging data using blurring, defacing or refacing. Completely removing facial structures provides the best re-identification protection but can significantly impact post-processing steps, like brain morphometry. As an alternative, refacing methods that replace individual facial structures with generic templates have a smaller effect on the geometry and intensity distribution of the original scans and provide more consistent post-processing results, at the price of higher re-identification risk and computational complexity. In the current study, we propose a novel method for anonymized face generation for defaced 3D T1-weighted scans based on a 3D conditional generative adversarial network. To evaluate the performance of the proposed de-identification tool, a comparative study was conducted between several existing defacing and refacing tools, using two different segmentation algorithms (FAST and Morphobox). The aim was to evaluate (i) the impact on brain morphometry reproducibility, (ii) the re-identification risk, (iii) the balance between (i) and (ii), and (iv) the processing time. The proposed method takes 9 s for face generation and is suitable for recovering consistent post-processing results after defacing.


Subject(s)
Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Adult , Brain/diagnostic imaging , Brain/anatomy & histology , Male , Female , Neural Networks, Computer , Imaging, Three-Dimensional/methods , Neuroimaging/methods , Neuroimaging/standards , Data Anonymization , Young Adult , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Algorithms
8.
Mod Pathol ; 37(1): 100369, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37890670

ABSTRACT

Generative adversarial networks (GANs) have gained significant attention in the field of image synthesis, particularly in computer vision. GANs consist of a generative model and a discriminative model trained in an adversarial setting to generate realistic and novel data. In the context of image synthesis, the generator produces synthetic images, whereas the discriminator determines their authenticity by comparing them with real examples. Through iterative training, the generator learns to create images that are indistinguishable from real ones, leading to high-quality image generation. Considering their success in computer vision, GANs hold great potential for medical diagnostic applications. In the medical field, GANs can generate images of rare diseases, aid in learning, and be used as visualization tools. GANs can leverage unlabeled medical images, which are large in size, numerous in quantity, and challenging to annotate manually. GANs have demonstrated remarkable capabilities in image synthesis and have the potential to significantly impact digital histopathology. This review article focuses on the emerging use of GANs in digital histopathology, examining their applications and potential challenges. Histopathology plays a crucial role in disease diagnosis, and GANs can contribute by generating realistic microscopic images. However, ethical considerations arise because of the reliance on synthetic or pseudogenerated images. Therefore, the manuscript also explores the current limitations and highlights the ethical considerations associated with the use of this technology. In conclusion, digital histopathology has seen an emerging use of GANs for image enhancement, such as color (stain) normalization, virtual staining, and ink/marker removal. GANs offer significant potential to transform digital pathology when applied to specific and narrow tasks (preprocessing enhancements).
Evaluating data quality, addressing biases, protecting privacy, ensuring accountability and transparency, and developing regulation are imperative to ensure the ethical application of GANs.
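The adversarial setting described above has a well-known closed form: for a fixed generator, the optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)), and when the generated distribution matches the data the objective value is -log 4. A numerical sketch of this property with 1-D Gaussians and rectangle-rule integration (illustrative, not part of the review):

```python
import numpy as np

def gan_value(p_data, p_gen, xs):
    """Evaluate V(D*, G) = E_pdata[log D*] + E_pg[log(1 - D*)] numerically,
    using the optimal discriminator D*(x) = p_data / (p_data + p_gen)."""
    d_star = p_data / (p_data + p_gen)
    dx = xs[1] - xs[0]
    return (np.sum(p_data * np.log(d_star) * dx)
            + np.sum(p_gen * np.log(1.0 - d_star) * dx))

xs = np.linspace(-8, 8, 4001)
gauss = lambda mu: np.exp(-(xs - mu) ** 2 / 2) / np.sqrt(2 * np.pi)

# when the generator matches the data, D* = 1/2 everywhere and V = -log 4
v_matched = gan_value(gauss(0.0), gauss(0.0), xs)
# a mismatched generator yields a larger (less negative) value
v_mismatched = gan_value(gauss(0.0), gauss(2.0), xs)
```

The gap between the two values equals twice the Jensen-Shannon divergence between the data and generated distributions, which is exactly what GAN training minimizes.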


Subject(s)
Coloring Agents , Data Accuracy , Humans , Staining and Labeling , Image Processing, Computer-Assisted
9.
Brief Bioinform ; 23(4)2022 07 18.
Article in English | MEDLINE | ID: mdl-35830870

ABSTRACT

We construct a protein-protein interaction (PPI) targeted drug-likeness dataset and propose a deep molecular generative framework to generate novel drug-like molecules from the features of seed compounds. This framework draws inspiration from published molecular generative models, uses the key features associated with PPI inhibitors as input, and develops deep molecular generative models for de novo molecular design of PPI inhibitors. For the first time, a quantitative estimation index for compounds targeting PPIs was applied to the evaluation of a molecular generation model for de novo design of PPI-targeted compounds. Our results show that the generated molecules had better PPI-targeted drug-likeness and drug-likeness. Our model also exhibits performance comparable to several other state-of-the-art molecule generation models. The generated molecules share chemical space with iPPI-DB inhibitors, as demonstrated by chemical space analysis. Both the peptide characterization-oriented design and the ligand-based design of PPI inhibitors are explored. Finally, we anticipate that this framework will be an important step forward for the de novo design of PPI-targeted therapeutics.


Subject(s)
Drug Design , Neural Networks, Computer , Ligands , Models, Molecular
10.
Hum Reprod ; 39(6): 1197-1207, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38600621

ABSTRACT

STUDY QUESTION: Can generative artificial intelligence (AI) models produce high-fidelity images of human blastocysts? SUMMARY ANSWER: Generative AI models exhibit the capability to generate high-fidelity human blastocyst images, thereby providing the substantial training datasets crucial for the development of robust AI models. WHAT IS KNOWN ALREADY: The integration of AI into IVF procedures holds the potential to enhance objectivity and automate embryo selection for transfer. However, the effectiveness of AI is limited by data scarcity and ethical concerns related to patient data privacy. Generative adversarial networks (GAN) have emerged as a promising approach to alleviate data limitations by generating synthetic data that closely approximate real images. STUDY DESIGN, SIZE, DURATION: Blastocyst images were included as training data from a public dataset of time-lapse microscopy (TLM) videos (n = 136). A style-based GAN was fine-tuned as the generative model. PARTICIPANTS/MATERIALS, SETTING, METHODS: We curated a total of 972 blastocyst images as training data, where frames were captured within the time window of 110-120 h post-insemination at 1-h intervals from TLM videos. We configured the style-based GAN model with data augmentation (AUG) and pretrained weights (Pretrained-T: with translation equivariance; Pretrained-R: with translation and rotation equivariance) to compare their effects on image synthesis. We then applied quantitative metrics, including Fréchet Inception Distance (FID) and Kernel Inception Distance (KID), to assess the quality and fidelity of the generated images. Subsequently, we evaluated qualitative performance through a visual Turing test, in which 60 individuals with diverse backgrounds and expertise in clinical embryology and IVF evaluated the quality of the synthetic embryo images.
MAIN RESULTS AND THE ROLE OF CHANCE: During the training process, we observed consistent improvement in image quality as measured by FID and KID scores. The Pretrained and AUG + Pretrained models started with remarkably lower FID and KID values than the Baseline and AUG + Baseline models. After 5000 training iterations, the AUG + Pretrained-R model showed the highest performance of the five evaluated configurations, with FID and KID scores of 15.2 and 0.004, respectively. Subsequently, we carried out the visual Turing test, in which IVF embryologists, IVF laboratory technicians, and non-experts evaluated the synthetic blastocyst-stage embryo images; the groups obtained similar specificity, with marginal differences in accuracy and sensitivity. LIMITATIONS, REASONS FOR CAUTION: In this study, we focused the training data on blastocyst images, as IVF embryos are primarily assessed at the blastocyst stage. However, generating images across the different preimplantation stages would offer further insight into the development of preimplantation embryos and IVF success. In addition, we resized training images to a resolution of 256 × 256 pixels to moderate the computational cost of training the style-based GAN models. Further research is needed involving a more extensive and diverse dataset spanning from zygote formation to the blastocyst stage, e.g. video generation, and improved image resolution, to facilitate the development of comprehensive AI algorithms and produce higher-quality images. WIDER IMPLICATIONS OF THE FINDINGS: Generative AI models hold promising potential for generating high-fidelity human blastocyst images, which enables the development of robust AI models by providing sufficient training data while safeguarding patient privacy. Additionally, this may help produce sufficient embryo imaging training data with different (rare) abnormal features, such as embryonic arrest or tripolar cell division, avoiding class imbalance and yielding balanced datasets. Thus, generative models may offer a compelling opportunity to transform embryo selection procedures and substantially enhance IVF outcomes. STUDY FUNDING/COMPETING INTEREST(S): This study was supported by a Horizon 2020 innovation grant (ERIN, grant no. EU952516) and a Horizon Europe grant (NESTOR, grant no. 101120075) of the European Commission to A.S. and M.Z.E., the Estonian Research Council (grant no. PRG1076) to A.S., and the EVA (Erfelijkheid Voortplanting & Aanleg) specialty program (grant no. KP111513) of Maastricht University Medical Centre (MUMC+) to M.Z.E. TRIAL REGISTRATION NUMBER: Not applicable.
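FID, used above, is the Fréchet distance between Gaussian fits to feature embeddings of real and generated images. Restricted to diagonal covariances (a simplification; the full metric requires a matrix square root of the covariance product), it reduces to a closed form that can be sketched directly:

```python
import numpy as np

def fid_diagonal(mu1, var1, mu2, var2):
    """Fréchet Inception Distance for diagonal covariances:
    ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1*var2))."""
    mu1, var1 = np.asarray(mu1, float), np.asarray(var1, float)
    mu2, var2 = np.asarray(mu2, float), np.asarray(var2, float)
    mean_term = np.sum((mu1 - mu2) ** 2)
    cov_term = np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2))
    return mean_term + cov_term

fid_same = fid_diagonal([0, 0], [1, 1], [0, 0], [1, 1])   # identical stats
fid_diff = fid_diagonal([0, 0], [1, 1], [3, 4], [1, 1])   # shifted means
```

Lower values mean the generated-feature distribution is closer to the real one, which is why FID falls as training progresses.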


Subject(s)
Artificial Intelligence , Blastocyst , Humans , Time-Lapse Imaging/methods , Image Processing, Computer-Assisted/methods , Fertilization in Vitro/methods , Female
11.
Eur J Nucl Med Mol Imaging ; 51(9): 2532-2546, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38696130

ABSTRACT

PURPOSE: To improve the reproducibility and predictive performance of PET radiomic features in multicentric studies via cycle-consistent generative adversarial network (GAN) harmonization approaches. METHODS: GAN-harmonization was developed to harmonize whole-body PET scans by performing image style and texture translation between different centers and scanners. GAN-harmonization was evaluated by application to two retrospectively collected open datasets and different tasks. First, GAN-harmonization was performed on a dual-center lung cancer cohort (127 female, 138 male), where the reproducibility of radiomic features in healthy liver tissue was evaluated. Second, GAN-harmonization was applied to a head and neck cancer cohort (43 female, 154 male) acquired from three centers. Here, the clinical impact of GAN-harmonization was analyzed by predicting the development of distant metastases using a logistic regression model incorporating first-order statistics and texture features from baseline 18F-FDG PET before and after harmonization. RESULTS: Image quality remained high (structural similarity: left kidney ≥ 0.800, right kidney ≥ 0.806, liver ≥ 0.780, lung ≥ 0.838, spleen ≥ 0.793, whole-body ≥ 0.832) after image harmonization across all utilized datasets. Using GAN-harmonization, inter-site reproducibility of radiomic features in healthy liver tissue increased by at least 5 ± 14% (first-order), 16 ± 7% (GLCM), 19 ± 5% (GLRLM), 16 ± 8% (GLSZM), 17 ± 6% (GLDM), and 23 ± 14% (NGTDM). In the head and neck cancer cohort, outcome prediction improved from an AUC of 0.68 (95% CI 0.66-0.71) to 0.73 (0.71-0.75) with GAN-harmonization. CONCLUSIONS: GANs are capable of performing image harmonization and of increasing the reproducibility and predictive performance of radiomic features derived from different centers and scanners.
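The cycle-consistency constraint underlying this harmonization approach penalizes round trips between the two domains: translating a scan to the other center's style and back should recover the original. A toy sketch with invertible 1-D maps standing in for the two generators (illustrative only, not the paper's model):

```python
import numpy as np

def cycle_loss(G, F, x, y):
    """L1 cycle-consistency: ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1,
    with G: X->Y and F: Y->X given here as plain functions."""
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

# toy domains related by an invertible affine map: y = 2x + 1
G = lambda x: 2.0 * x + 1.0        # forward "harmonizer"
F_inv = lambda y: (y - 1.0) / 2.0  # exact inverse -> zero cycle loss
F_bad = lambda y: y                # identity -> nonzero cycle loss

x = np.linspace(0, 1, 5)
y = G(x)
loss_good = cycle_loss(G, F_inv, x, y)
loss_bad = cycle_loss(G, F_bad, x, y)
```

In the full method this term is combined with adversarial losses so that the translated scans also match the target center's image statistics.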


Subject(s)
Image Processing, Computer-Assisted , Positron-Emission Tomography , Humans , Female , Male , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/standards , Positron-Emission Tomography/methods , Lung Neoplasms/diagnostic imaging , Middle Aged , Reproducibility of Results , Head and Neck Neoplasms/diagnostic imaging , Retrospective Studies , Fluorodeoxyglucose F18 , Aged
12.
J Magn Reson Imaging ; 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38563660

ABSTRACT

BACKGROUND: The modified Look-Locker inversion recovery (MOLLI) sequence is commonly used for myocardial T1 mapping. However, it acquires images at different inversion times, which makes motion correction of respiratory-induced misregistration to a given target image difficult. HYPOTHESIS: Using a generative adversarial network (GAN) to produce virtual MOLLI images with consistent heart positions can reduce respiratory-induced misregistration of MOLLI datasets. STUDY TYPE: Retrospective. POPULATION: 1071 MOLLI datasets from 392 human participants. FIELD STRENGTH/SEQUENCE: Modified Look-Locker inversion recovery sequence at 3 T. ASSESSMENT: A GAN model with a single inversion-time image as input was trained to generate virtual MOLLI target (VMT) images at different inversion times, which were subsequently used in an image registration algorithm. Four VMT models were investigated, and the best-performing model was compared with the standard vendor-provided motion correction (MOCO) technique. STATISTICAL TESTS: The effectiveness of the motion correction technique was assessed using the fitting quality index (FQI), mutual information (MI), and Dice coefficients of motion-corrected images, plus subjective quality evaluation of T1 maps by three independent readers using a Likert score. The Wilcoxon signed-rank test with Bonferroni correction for multiple comparisons was used. Significance levels were defined as P < 0.01 for highly significant differences and P < 0.05 for significant differences. RESULTS: The best-performing VMT model with iterative registration demonstrated significantly better performance (FQI 0.88 ± 0.03, MI 1.78 ± 0.20, Dice 0.84 ± 0.23, quality score 2.26 ± 0.95) compared to other approaches, including the vendor-provided MOCO method (FQI 0.86 ± 0.04, MI 1.69 ± 0.25, Dice 0.80 ± 0.27, quality score 2.16 ± 1.01).
DATA CONCLUSION: Our GAN model generating VMT images improved motion correction, which may assist reliable T1 mapping in the presence of respiratory motion. Its robust performance, even with considerable respiratory-induced heart displacements, may be beneficial for patients with difficulties in breath-holding. LEVEL OF EVIDENCE: 3 TECHNICAL EFFICACY: Stage 1.
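The Dice coefficient reported above measures overlap between two segmentation masks before and after registration; a minimal implementation:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

ref = np.array([[1, 1, 0, 0]])
moved = np.array([[0, 1, 1, 0]])   # half-overlapping mask after registration
score = dice(ref, moved)
```

A score of 1.0 indicates perfect overlap, so higher post-registration Dice means better motion correction.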

13.
J Theor Biol ; 577: 111636, 2024 01 21.
Article in English | MEDLINE | ID: mdl-37944593

ABSTRACT

Gene expression analysis is valuable for cancer type classification and identifying diverse cancer phenotypes. The latest high-throughput RNA sequencing devices have enabled access to large volumes of gene expression data. However, we face several challenges, such as data security and privacy, when we develop machine learning-based classifiers for categorizing cancer types with these datasets. To address these issues, we propose IP3G (Intelligent Phenotype-detection and Gene expression profile Generation with Generative adversarial network), a model based on Generative Adversarial Networks. IP3G tackles two major problems: augmenting gene expression data and unsupervised phenotype discovery. By converting gene expression profiles into two-dimensional images and leveraging IP3G, we generate new profiles for specific phenotypes. IP3G learns disentangled representations of gene expression patterns and identifies phenotypes without labeled data. We improve the objective function of the GAN used in IP3G by employing the earth mover's distance and a novel mutual information function. IP3G outperforms clustering methods like k-Means, DBSCAN, and GMM in unsupervised phenotype discovery, while also surpassing SVM and CNN classification accuracy by up to 6% through gene expression profile augmentation. The source code for IP3G is publicly available on GitHub.
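The earth mover's distance used in IP3G's objective has a closed form in one dimension: for two equal-size samples it is the mean absolute difference of their sorted values. A sketch of that 1-D case (the model applies the distance through a GAN critic in high dimensions; this is only illustrative):

```python
import numpy as np

def emd_1d(a, b):
    """Earth mover's (1-Wasserstein) distance between two equal-size
    1-D samples: mean absolute difference of the sorted values."""
    a = np.sort(np.asarray(a, float))
    b = np.sort(np.asarray(b, float))
    assert a.shape == b.shape
    return np.abs(a - b).mean()

d = emd_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])  # everything shifts by 1
```

Unlike Jensen-Shannon-based GAN losses, this distance stays informative even when the two distributions have little overlap, which is the usual motivation for using it.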


Subject(s)
Neoplasms , Transcriptome , Humans , Gene Expression Profiling , Cluster Analysis , Phenotype , Neoplasms/genetics
14.
BMC Med Imaging ; 24(1): 151, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38890572

ABSTRACT

BACKGROUND: Abdominal CT scans are vital for diagnosing abdominal diseases but have limitations in tissue analysis and soft tissue detection. Dual-energy CT (DECT) can mitigate these issues by offering low-keV virtual monoenergetic images (VMI), enhancing lesion detection and tissue characterization. However, its cost limits widespread use. PURPOSE: To develop a model that converts conventional images (CI) into generative virtual monoenergetic images at 40 keV (Gen-VMI40keV) for upper abdominal CT scans. METHODS: In total, 444 patients who underwent upper abdominal spectral contrast-enhanced CT were enrolled and assigned to the training and validation datasets (7:3). Then, 40-keV portal-vein virtual monoenergetic images (VMI40keV) and CI, generated from spectral CT scans, served as target and source images. These images were employed to build and train a CI-VMI40keV model. Indices such as Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity (SSIM) were utilized to determine the best generator mode. An additional 198 cases were divided into three test groups: Group 1 (58 cases with visible abnormalities), Group 2 (40 cases with hepatocellular carcinoma [HCC]) and Group 3 (100 cases from a publicly available HCC dataset). Both subjective and objective evaluations were performed. Comparisons, correlation analyses and Bland-Altman plot analyses were carried out. RESULTS: The 192nd iteration produced the best generator mode (lowest MAE and highest PSNR and SSIM). In Test Groups 1 and 2, both VMI40keV and Gen-VMI40keV significantly improved CT values, as well as SNR and CNR, for all organs compared to CI. Significant positive correlations for objective indices were found between Gen-VMI40keV and VMI40keV in various organs and lesions. Bland-Altman analysis showed that the differences between the two imaging types mostly fell within the 95% confidence interval.
Pearson's and Spearman's correlation coefficients for objective scores between Gen-VMI40keV and VMI40keV in Groups 1 and 2 ranged from 0.645 to 0.980. In Group 3, Gen-VMI40keV yielded significantly higher CT values for HCC (220.5 HU vs. 109.1 HU) and liver (220.0 HU vs. 112.8 HU) compared to CI (p < 0.01). The CNR for HCC/liver was also significantly higher in Gen-VMI40keV (2.0 vs. 1.2) than in CI (p < 0.01). Additionally, Gen-VMI40keV was subjectively evaluated to have higher image quality than CI. CONCLUSION: The CI-VMI40keV model can generate Gen-VMI40keV from conventional CT scans, closely resembling VMI40keV.
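MAE and PSNR, two of the generator-selection indices above, are straightforward to compute; a minimal sketch (the peak value of 255 here is only an illustrative assumption):

```python
import numpy as np

def mae(ref, img):
    """Mean absolute error between a reference and a generated image."""
    return np.abs(ref.astype(float) - img.astype(float)).mean()

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = ((ref.astype(float) - img.astype(float)) ** 2).mean()
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 100.0)
gen = np.full((4, 4), 110.0)   # constant error of 10 -> MSE = 100
err = mae(ref, gen)
q = psnr(ref, gen)
```

Lower MAE and higher PSNR (and SSIM, which additionally models structure) indicate generated VMI closer to the spectral-CT target.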


Subject(s)
Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Female , Male , Middle Aged , Radiography, Abdominal/methods , Aged , Adult , Radiographic Image Interpretation, Computer-Assisted/methods , Liver Neoplasms/diagnostic imaging , Signal-To-Noise Ratio , Radiography, Dual-Energy Scanned Projection/methods , Carcinoma, Hepatocellular/diagnostic imaging , Aged, 80 and over , Contrast Media
15.
BMC Med Imaging ; 24(1): 186, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39054419

ABSTRACT

Autism Spectrum Disorder (ASD) is a neurodevelopmental condition that affects an individual's behavior, speech, and social interaction. Early and accurate diagnosis of ASD is pivotal for successful intervention. The limited availability of large datasets for neuroimaging investigations, however, poses a significant challenge to the timely and precise identification of ASD. To address this problem, we propose GARL, a novel approach for ASD diagnosis using neuroimaging data. GARL innovatively integrates the power of GANs and deep Q-learning to augment limited datasets and enhance diagnostic precision. We utilized the Autism Brain Imaging Data Exchange (ABIDE) I and II datasets and employed a GAN to expand them, creating a more robust and diversified dataset for analysis. This approach not only captures the underlying sample distribution within ABIDE I and II but also employs deep reinforcement learning for continuous self-improvement, significantly enhancing the model's ability to generalize and adapt. Our experimental results confirmed that GAN-based data augmentation effectively improved the performance of all prediction models on both datasets, with GARL's combination of InfoGAN and DQN yielding the most notable improvement.
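GARL's reinforcement-learning component builds on Q-learning; the underlying tabular update, which DQN approximates with a neural network over image features, can be sketched in a few lines (toy states and reward, not the paper's setup):

```python
def q_update(q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = r + gamma * max(q[s_next])
    q[s][a] += alpha * (target - q[s][a])
    return q

# two states, two actions, all Q-values start at zero
q = [[0.0, 0.0], [0.0, 0.0]]
q = q_update(q, s=0, a=1, r=1.0, s_next=1)   # reward of 1 for action 1
```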


Subject(s)
Autism Spectrum Disorder , Deep Learning , Neuroimaging , Humans , Autism Spectrum Disorder/diagnostic imaging , Neuroimaging/methods , Child , Neural Networks, Computer , Male , Brain/diagnostic imaging
16.
Proc Natl Acad Sci U S A ; 118(31)2021 08 03.
Article in English | MEDLINE | ID: mdl-34330823

ABSTRACT

We present APAC-Net, an alternating population and agent control neural network for solving stochastic mean-field games (MFGs). Our algorithm is geared toward high-dimensional instances of MFGs that are not approachable with existing solution methods. We achieve this in two steps. First, we take advantage of the underlying variational primal-dual structure that MFGs exhibit and phrase it as a convex-concave saddle-point problem. Second, we parameterize the value and density functions by two neural networks, respectively. By phrasing the problem in this manner, solving the MFG can be interpreted as a special case of training a generative adversarial network (GAN). We show the potential of our method on up to 100-dimensional MFG problems.
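The convex-concave saddle-point structure can be seen on the toy bilinear game min_x max_y f(x, y) = x·y, where plain simultaneous gradient descent-ascent orbits the saddle at the origin but the extragradient method (a look-ahead step, then a corrected step) converges. This is a sketch of the primal-dual training idea, not APAC-Net itself:

```python
import numpy as np

def extragradient(x, y, lr=0.1, steps=2000):
    """Extragradient iterations for min_x max_y x*y (saddle at (0, 0))."""
    for _ in range(steps):
        # look-ahead step using current gradients: grad_x f = y, grad_y f = x
        xh = x - lr * y
        yh = y + lr * x
        # corrected step using gradients evaluated at the look-ahead point
        x, y = x - lr * yh, y + lr * xh
    return x, y

x, y = extragradient(1.0, 1.0)
print(abs(x) < 1e-3 and abs(y) < 1e-3)
```

In APAC-Net the scalars x and y are replaced by the parameters of the value and density networks, updated in alternation just as a GAN alternates generator and discriminator updates.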

17.
Microsc Microanal ; 30(2): 278-293, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38684097

ABSTRACT

Recent advances in machine learning (ML) have highlighted a novel challenge concerning the quality and quantity of data required to effectively train algorithms in supervised ML procedures. This article introduces a data augmentation (DA) strategy for electron energy loss spectroscopy (EELS) data employing generative adversarial networks (GANs). We present an approach, called the data augmentation generative adversarial network (DAG), which facilitates data generation from a very limited number of spectra, around 100. Throughout this study, we explore the optimal GAN configuration for producing realistic spectra. Notably, the spectra produced by the DAG generator are successfully used in real-world applications to train classifiers, based on artificial neural networks (ANNs) and support vector machines (SVMs), that accurately classify experimental EEL spectra.
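The core augmentation idea — learn the distribution of ~100 spectra, then sample new ones — can be sketched with a much simpler generative stand-in: PCA plus a latent Gaussian, applied here to synthetic peak "spectra". This deliberately is not a GAN; it only illustrates the expand-a-small-dataset workflow:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "experimental" spectra: ~100 noisy Gaussian peaks on 128 channels.
energy = np.linspace(0.0, 10.0, 128)
real = np.stack([np.exp(-(energy - rng.normal(5.0, 0.3)) ** 2 / 0.5)
                 + rng.normal(0.0, 0.02, energy.size) for _ in range(100)])

# Fit a simple latent model: PCA of the real spectra, Gaussian in latent space.
mean = real.mean(axis=0)
U, S, Vt = np.linalg.svd(real - mean, full_matrices=False)
k = 5                                  # latent dimension kept
coeffs = U[:, :k] * S[:k]              # latent coordinates of the real spectra

def generate(n):
    """Sample n synthetic spectra from the fitted latent Gaussian."""
    z = rng.normal(coeffs.mean(axis=0), coeffs.std(axis=0), size=(n, k))
    return mean + z @ Vt[:k]

synthetic = generate(400)              # expand 100 real spectra to 500 total
augmented = np.vstack([real, synthetic])
print(augmented.shape)
```

A GAN replaces the fixed linear decoder `Vt[:k]` with a learned nonlinear generator whose samples a discriminator cannot distinguish from the real spectra.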

18.
J Appl Clin Med Phys ; 25(1): e14212, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37985163

ABSTRACT

PURPOSE: Lung tumor tracking during stereotactic radiotherapy with the CyberKnife can misrecognize the tumor location when similar patterns exist in the search area. This study aimed to develop a technique for bone signal suppression in kV x-ray imaging. METHODS: Paired CT images were created with and without bony structures using a 4D extended cardiac-torso phantom (XCAT phantom) in 56 cases. Subsequently, 3,020 2D x-ray images were generated. Images with bone were input into a cycle-consistent generative adversarial network (CycleGAN), and bone-suppressed images of the XCAT phantom (BSIphantom) were created. These were then compared to the images without bone using the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR). Next, 1,000 non-simulated treatment images from real cases were input into the trained model, and bone-suppressed images of the patients (BSIpatient) were created. Zero-mean normalized cross-correlation (ZNCC) by template matching between each actual treatment image and its BSIpatient was calculated. RESULTS: BSIphantom values were compared to their paired bone-free images in the XCAT phantom test data; SSIM and PSNR were 0.90 ± 0.06 and 24.54 ± 4.48, respectively. It was visually confirmed that only bone was selectively suppressed without significantly affecting tumor visualization. The ZNCC values of the actual treatment images and BSIpatient were 0.763 ± 0.136 and 0.773 ± 0.143, respectively; BSIpatient thus showed improved recognition accuracy over the actual treatment images. CONCLUSIONS: The proposed CycleGAN-based bone suppression imaging technique improves image recognition, making highly accurate motion-tracking irradiation possible.
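ZNCC template matching of the kind used for the tracking evaluation can be sketched directly. This is an exhaustive plain-NumPy search on synthetic data (production trackers typically use FFT-accelerated correlation):

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom

def match_template(image, template):
    """Exhaustive template matching; returns (best ZNCC score, (row, col))."""
    th, tw = template.shape
    best, pos = -1.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            score = zncc(image[r:r + th, c:c + tw], template)
            if score > best:
                best, pos = score, (r, c)
    return best, pos

# Cut a template out of a random image at a known offset and recover it.
rng = np.random.default_rng(1)
image = rng.random((40, 40))
template = image[12:20, 25:33].copy()
score, pos = match_template(image, template)
print(score, pos)
```

An exact match scores 1.0; the study's values around 0.77 reflect residual anatomy and noise differences between the template and the live image.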


Subject(s)
Lung Neoplasms , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/radiotherapy , Lung Neoplasms/surgery , Motion , Phantoms, Imaging , Image Processing, Computer-Assisted/methods
19.
J Arthroplasty ; 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38944061

ABSTRACT

BACKGROUND: The purpose of this study was to reconstruct three-dimensional (3D) computed tomography (CT) images from single anteroposterior (AP) postoperative total hip arthroplasty (THA) X-ray images using deep learning models known as generative adversarial networks (GANs) and to validate the accuracy of cup angle measurement on the GAN-generated CT. METHODS: We used two GAN-based models, CycleGAN and X2CT-GAN, to generate 3D CT images from X-ray images of 386 patients who underwent primary THA using a cementless cup. The training dataset consisted of 522 CT images and 2,282 X-ray images. Image quality was validated using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). The cup anteversion and inclination measurements on the GAN-generated CT images were compared with the actual CT measurements. Statistical analyses of absolute measurement errors were performed using Mann-Whitney U tests and nonlinear regression analyses. RESULTS: The study successfully achieved 3D reconstruction from single AP postoperative THA X-ray images using GANs, exhibiting excellent PSNR (37.40) and SSIM (0.74). The median absolute differences in radiographic anteversion (RA) and radiographic inclination (RI) were 3.45° and 3.25°, respectively. Absolute measurement errors tended to be larger in cases with cup malposition than in those with optimal cup orientation. CONCLUSION: This study demonstrates the potential of GANs for 3D reconstruction from single AP postoperative THA X-ray images to evaluate cup orientation. Further investigation and refinement of the model are required to improve its performance.
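PSNR and SSIM, the two quality metrics reported above, can be computed as follows. Note the SSIM here is a simplified single-window (global) variant; published SSIM values, including this study's, normally come from the standard locally windowed formulation, which this sketch only approximates:

```python
import numpy as np

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(a, b, max_val=255.0):
    """Single-window SSIM with the usual stabilizing constants c1, c2."""
    a, b = a.astype(float), b.astype(float)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

# A uniform offset of 10 gray levels gives MSE = 100.
a = np.tile(np.arange(256, dtype=float), (8, 1))
b = a + 10.0
print(round(psnr(a, b), 2))
```

Identical images give an SSIM of 1.0 and an infinite PSNR, which is why both metrics are reported together: PSNR tracks pixel-wise error while SSIM tracks perceived structural fidelity.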

20.
Sensors (Basel) ; 24(2)2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38276333

ABSTRACT

Wireless physical layer authentication has emerged as a promising approach to wireless security. Wireless node classification and recognition have advanced significantly with the rapid development of deep learning, and deep learning's considerable capabilities make it well suited to addressing wireless security issues. Nevertheless, its use for classifying wireless nodes is impeded by the lack of available datasets. In this study, we provide two models based on a data-driven approach. First, we used generative adversarial networks to design an automated model for data augmentation. Second, we applied a convolutional neural network to classify wireless nodes for a wireless physical layer authentication model. To verify the effectiveness of the proposed model, we assessed our results using the original dataset as a baseline and a generated synthetic dataset. The findings indicate an improvement of approximately 19% in classification accuracy.
