Results 1 - 20 of 142
1.
Quant Imaging Med Surg ; 14(8): 5571-5590, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39144020

ABSTRACT

Background: Low-dose computed tomography (LDCT) is a diagnostic imaging technique designed to minimize radiation exposure to the patient. However, this reduction in radiation may compromise computed tomography (CT) image quality, adversely impacting clinical diagnoses. Various advanced LDCT methods have emerged to mitigate this challenge, relying on well-matched LDCT and normal-dose CT (NDCT) image pairs for training. Nevertheless, these methods often struggle to distinguish image details from nonuniformly distributed noise, limiting their denoising efficacy. Additionally, acquiring suitably paired datasets in the medical domain is difficult, further constraining their applicability. Hence, the objective of this study was to develop an innovative denoising framework for LDCT images that uses unpaired data. Methods: In this paper, we propose an LDCT denoising network (DNCNN) that alleviates the need for aligned LDCT and NDCT images. Our approach employs generative adversarial networks (GANs) to learn and model the noise present in LDCT images, establishing a mapping from the pseudo-LDCT to the actual NDCT domain without the need for paired CT images. Results: Within the domain of weakly supervised methods, our proposed model exhibited superior objective metrics on the simulated dataset compared to CycleGAN and the selective kernel-based cycle-consistent GAN (SKFCycleGAN): the peak signal-to-noise ratio (PSNR) was 43.9441, the structural similarity index measure (SSIM) was 0.9660, and the visual information fidelity (VIF) was 0.7707. On the clinical dataset, we conducted a visual-effect analysis by observing various tissues through different observation windows. Our proposed method achieved a no-reference structural sharpness (NRSS) value of 0.6171, closest to that of the NDCT images (NRSS = 0.6049), demonstrating its superiority over other denoising techniques in preserving details, maintaining structural integrity, and enhancing edge contrast. Conclusions: Through extensive experiments on both simulated and clinical datasets, we demonstrated the efficacy of our proposed method both quantitatively and qualitatively. It outperforms supervised techniques, including block-matching and 3D filtering (BM3D), the residual encoder-decoder convolutional neural network (RED-CNN), and the Wasserstein generative adversarial network-VGG (WGAN-VGG), as well as weakly supervised approaches, including CycleGAN and SKFCycleGAN.
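
The abstract does not give implementation details, but the core adversarial objective, pushing denoised outputs into the NDCT domain without paired supervision, can be sketched in a few lines of PyTorch. The tiny generator and discriminator below are stand-ins, not the paper's DNCNN architecture:

    import torch
    import torch.nn as nn

    # Stand-in networks; the paper's actual generator/discriminator are not specified.
    G = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(64, 1, 3, padding=1))             # denoiser: LDCT -> pseudo-NDCT
    D = nn.Sequential(nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                      nn.Flatten(), nn.LazyLinear(1))             # critic: real NDCT vs denoised

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def train_step(ldct, ndct):                                   # unpaired (B, 1, H, W) batches
        fake = G(ldct)
        # Discriminator: separate real NDCT images from denoised outputs.
        real_logits, fake_logits = D(ndct), D(fake.detach())
        d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
                 bce(fake_logits, torch.zeros_like(fake_logits))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator: make denoised outputs indistinguishable from NDCT.
        g_logits = D(fake)
        g_loss = bce(g_logits, torch.ones_like(g_logits))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()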

2.
Mod Pathol ; : 100591, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39147031

ABSTRACT

Despite recent advances, the adoption of computer vision methods in clinical and commercial applications has been hampered by the limited availability of the accurate ground truth tissue annotations required to train robust supervised models. Generating such ground truth can be accelerated by annotating tissue molecularly using immunofluorescence (IF) staining and mapping these annotations to a post-IF H&E image (terminal H&E). Mapping the annotations between the IF and the terminal H&E increases both the scale and the accuracy with which ground truth can be generated. However, discrepancies between terminal H&E and conventional H&E caused by IF tissue processing have limited this implementation. We sought to overcome this challenge and achieve compatibility between these parallel modalities using synthetic image generation, in which a cycle-consistent generative adversarial network (CycleGAN) was applied to transfer the appearance of conventional H&E such that it emulates the terminal H&E. These synthetic emulations allowed us to train a deep learning (DL) model for the segmentation of epithelium in the terminal H&E that could be validated against the IF staining of epithelial-based cytokeratins. The combination of this segmentation model with the CycleGAN stain-transfer model enabled performant epithelium segmentation in conventional H&E images. The approach demonstrates that accurate segmentation models for the breadth of conventional H&E data can be trained without human-expert annotations by leveraging molecular annotation strategies such as IF, so long as the tissue impacts of the molecular annotation protocol are captured by generative models that can be deployed prior to the segmentation process.
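
As a rough illustration of how the two trained models compose at inference time (file names and tensor conventions below are hypothetical, not from the paper):

    import torch

    # Hypothetical serialized models: a CycleGAN generator that makes a conventional
    # H&E image emulate terminal H&E, and a segmenter trained on terminal-H&E emulations.
    stain_transfer = torch.load("conventional_to_terminal_he_generator.pt")
    segmenter = torch.load("terminal_he_epithelium_segmenter.pt")
    stain_transfer.eval(); segmenter.eval()

    @torch.no_grad()
    def segment_conventional_he(img):      # img: (1, 3, H, W) tensor scaled to [-1, 1]
        emulated = stain_transfer(img)     # deploy the generative model first...
        logits = segmenter(emulated)       # ...then segment in the terminal-H&E domain
        return logits.sigmoid() > 0.5      # binary epithelium mask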

3.
Math Biosci Eng ; 21(7): 6608-6630, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39176411

ABSTRACT

Feature representations with rich topic information can greatly improve the performance of story segmentation tasks. VAEGAN offers distinct advantages in feature learning by combining a variational autoencoder (VAE) and a generative adversarial network (GAN): it not only captures intricate data representations through the VAE's probabilistic encoding and decoding mechanism but also enhances feature diversity and quality via the GAN's adversarial training. To better learn topic-domain representations, we used a topic classifier to supervise the training process of VAEGAN. Based on the learned features, a segmentor splits the document into shorter segments with different topics. The hidden Markov model (HMM) is a popular approach for story segmentation, in which stories are viewed as instances of topics (hidden states). The number of states has to be set manually but is often unknown in real scenarios. To solve this problem, we proposed an infinite HMM (IHMM) approach that places a hierarchical Dirichlet process (HDP) prior on transition matrices over countably infinite state spaces to automatically infer the number of states from the data. Given a running text, a blocked Gibbs sampler labels the states with topic classes; a position where the topic changes is a story boundary. Experimental results on the TDT2 corpus demonstrated that the proposed topical VAEGAN-IHMM approach is significantly better than the traditional HMM method in story segmentation tasks and achieves state-of-the-art performance.
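
Once the sampler has assigned a topic state to each sentence, boundary extraction itself is trivial: a boundary is placed wherever the label changes. A minimal sketch:

    def story_boundaries(state_labels):
        """Indices where the topic state changes, i.e., story boundaries."""
        return [i + 1 for i in range(len(state_labels) - 1)
                if state_labels[i] != state_labels[i + 1]]

    print(story_boundaries([0, 0, 0, 2, 2, 1, 1]))   # -> [3, 5]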

4.
Stud Health Technol Inform ; 316: 1145-1150, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176583

ABSTRACT

Advances in general-purpose computing have enabled the generation of high-quality synthetic medical images in which the human eye cannot distinguish real from AI-generated images. To analyse the efficacy of the generated medical images, this study proposed a modified VGG16-based algorithm to recognise AI-generated medical images. Initially, 10,000 synthetic medical skin-lesion images were generated using a generative adversarial network (GAN), providing a set of images for comparison with real images. Then, an enhanced VGG16-based algorithm was developed to classify real versus AI-generated images. Following hyperparameter tuning and training, the optimal approach classifies the images with 99.82% accuracy. Multiple other metrics were used to evaluate the efficacy of the proposed network. The complete dataset used in this study is available online to the research community for future research.


Subject(s)
Deep Learning, Humans, Algorithms, Skin Diseases/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Skin Neoplasms/diagnostic imaging
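
The abstract above does not spell out the VGG16 modifications; a plausible minimal variant simply replaces the final classification layer for the two-class (real vs. AI-generated) task, e.g. in torchvision:

    import torch.nn as nn
    from torchvision import models

    # Start from ImageNet-pretrained VGG16 and adapt the head to 2 classes.
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    model.classifier[6] = nn.Linear(4096, 2)   # real vs. AI-generated
    # Fine-tune on the labeled set of real and GAN-generated skin-lesion images.
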
5.
Med Image Anal ; 98: 103306, 2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39163786

ABSTRACT

Positron emission tomography (PET) imaging is widely used in medical imaging for analyzing neurological disorders and related brain diseases. Usually, full-dose imaging for PET ensures image quality but raises concerns about the potential health risks of radiation exposure. The contradiction between reducing radiation exposure and maintaining diagnostic performance can be effectively addressed by reconstructing low-dose PET (L-PET) images to the same high quality as full-dose PET (F-PET). This paper introduces the Multi Pareto Generative Adversarial Network (MPGAN) to achieve 3D end-to-end denoising for L-PET images of the human brain. MPGAN consists of two key modules: the diffused multi-round cascade generator (GDmc) and the dynamic Pareto-efficient discriminator (DPed), which play a zero-sum game for n (n ∈ {1, 2, 3}) rounds to ensure the quality of synthesized F-PET images. The Pareto-efficient dynamic discrimination process is introduced in DPed to adaptively adjust the weights of the sub-discriminators for improved discrimination output. We validated the performance of MPGAN on three datasets, including two independent datasets and one mixed dataset, and compared it with 12 recent competing models. Experimental results indicate that the proposed MPGAN provides an effective solution for 3D end-to-end denoising of L-PET images of the human brain that meets clinical standards and achieves state-of-the-art performance on commonly used metrics.

6.
PeerJ Comput Sci ; 10: e2184, 2024.
Article in English | MEDLINE | ID: mdl-39145238

ABSTRACT

Transforming optical facial images into sketches while preserving realism and facial features poses a significant challenge. Current methods that rely on paired training data are costly and resource-intensive, and they often fail to capture the intricate features of faces, resulting in substandard sketch generation. To address these challenges, we propose the novel hierarchical contrast generative adversarial network (HCGAN). Firstly, HCGAN consists of a global sketch synthesis module that generates sketches with well-defined global features and a local sketch refinement module that enhances the ability to extract features in critical areas. Secondly, we introduce a local refinement loss based on the local sketch refinement module, refining sketches at a granular level. Finally, we propose an association strategy called "warmup-epoch" and a local consistency loss between the two modules to ensure HCGAN is effectively optimized. Evaluations on the CUFS and SKSF-A datasets demonstrate that our method produces high-quality sketches and outperforms existing state-of-the-art methods in terms of fidelity and realism. Compared to the current state of the art, HCGAN reduces FID by 12.6941, 4.9124, and 9.0316 on the three CUFS datasets, respectively, and by 7.4679 on the SKSF-A dataset. It also obtained the best scores for content fidelity (CF), global effects (GE), and local patterns (LP). The proposed HCGAN model provides a promising solution for realistic sketch synthesis under unpaired data training.

7.
PeerJ Comput Sci ; 10: e2064, 2024.
Article in English | MEDLINE | ID: mdl-39145246

ABSTRACT

Background: Medical imaging datasets frequently suffer from data imbalance, where the majority of pixels correspond to healthy regions and the minority belong to affected regions. This uneven distribution of pixels exacerbates the challenges associated with computer-aided diagnosis: networks trained with imbalanced data tend to exhibit bias toward majority classes, often demonstrating high precision but low sensitivity. Method: We designed a new adversarial-learning-based network, namely the conditional contrastive generative adversarial network (CCGAN), to tackle the problem of class imbalance in highly imbalanced MRI datasets. The proposed model has three new components: (1) class-specific attention, (2) a region rebalancing module (RRM), and (3) a supervised contrastive-based learning network (SCoLN). The class-specific attention focuses on the more discriminative areas of the input representation, capturing more relevant features. The RRM promotes a more balanced distribution of features across various regions of the input representation, ensuring a more equitable segmentation process. The generator of the CCGAN learns pixel-level segmentation by receiving feedback from the SCoLN based on the true-negative and true-positive maps. This process ensures that the final semantic segmentation not only addresses the imbalanced-data issue but also enhances classification accuracy. Results: The proposed model has shown state-of-the-art performance on five highly imbalanced medical image segmentation datasets and therefore holds significant potential for application in medical diagnosis in cases characterized by highly imbalanced data distributions. The CCGAN achieved the highest scores in terms of the dice similarity coefficient (DSC) on the various datasets: 0.965 ± 0.012 for BUS2017, 0.896 ± 0.091 for DDTI, 0.786 ± 0.046 for LiTS MICCAI 2017, 0.712 ± 1.5 for the ATLAS dataset, and 0.877 ± 1.2 for the BRATS 2015 dataset. DeepLab-V3 follows closely, securing the second-best position with DSC scores of 0.948 ± 0.010 for BUS2017, 0.895 ± 0.014 for DDTI, 0.763 ± 0.044 for LiTS MICCAI 2017, 0.696 ± 1.1 for the ATLAS dataset, and 0.846 ± 1.4 for the BRATS 2015 dataset.

8.
IEEE Access ; 12: 83169-83182, 2024.
Article in English | MEDLINE | ID: mdl-39148927

ABSTRACT

Game-theory-inspired deep learning using a generative adversarial network provides an environment in which models competitively interact to accomplish a goal. In the context of medical imaging, most work has focused on single tasks such as improving image resolution, segmenting images, and correcting motion artifacts. We developed a dual-objective adversarial learning framework that simultaneously 1) reconstructs higher-quality brain magnetic resonance images (MRIs) that 2) retain disease-specific imaging features critical for predicting progression from mild cognitive impairment (MCI) to Alzheimer's disease (AD). We obtained 3-Tesla, T1-weighted brain MRIs of participants from the Alzheimer's Disease Neuroimaging Initiative (ADNI, N = 342) and the National Alzheimer's Coordinating Center (NACC, N = 190) datasets. We simulated MRIs with missing data by removing 50% of sagittal slices from the original scans (i.e., diced scans). The generator was trained to reconstruct brain MRIs using the diced scans as input. We introduced a classifier into the GAN architecture to discriminate between stable (sMCI) and progressive MCI (pMCI) based on the generated images, to facilitate encoding of disease-related information during reconstruction. The framework was trained using ADNI data and externally validated on NACC data. In the NACC cohort, generated images had better image quality than the diced scans (structural similarity index (SSIM): 0.553 ± 0.116 versus 0.348 ± 0.108). Furthermore, a classifier utilizing the generated images distinguished pMCI from sMCI more accurately than with the diced scans (F1-score: 0.634 ± 0.019 versus 0.573 ± 0.028). Competitive deep learning has the potential to facilitate disease-oriented image reconstruction in those at risk of developing Alzheimer's disease.
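
The dual objective can be summarized as a weighted sum of reconstruction, adversarial, and classification terms; the sketch below uses assumed weights and loss choices, since the abstract does not state them:

    import torch
    import torch.nn.functional as F

    def generator_loss(recon, target, d_logits, cls_logits, labels,
                       w_rec=1.0, w_adv=0.1, w_cls=0.5):   # weights are illustrative
        rec = F.l1_loss(recon, target)                     # 1) reconstruct the full MRI...
        adv = F.binary_cross_entropy_with_logits(          #    ...and fool the discriminator
            d_logits, torch.ones_like(d_logits))
        cls = F.cross_entropy(cls_logits, labels)          # 2) retain the pMCI/sMCI signal
        return w_rec * rec + w_adv * adv + w_cls * cls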

9.
Spectrochim Acta A Mol Biomol Spectrosc ; 324: 124968, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39153348

ABSTRACT

Ultraviolet-visible (UV-Vis) absorption spectroscopy, owing to its high sensitivity and capability for real-time online monitoring, is one of the most promising tools for the rapid identification of external water in rainwater pipe networks. However, actual samples are difficult to obtain, so real samples are scarce, and the complex composition of wastewater can hinder accurate traceability analysis of external water in rainwater pipe networks. In this study, a new method for identifying external water in rainwater pipe networks with a small number of samples is proposed. In this method, the generative adversarial network (GAN) algorithm was initially used to generate spectral data from the absorption spectra of water samples; subsequently, the multiplicative scatter correction (MSC) algorithm was applied to process the UV-Vis absorption spectra of different types of water samples; following this, the variational mode decomposition (VMD) algorithm was employed to decompose and recombine the spectra after MSC; and finally, the long short-term memory (LSTM) algorithm was used to establish the identification model between the recombined spectra and the water source types and to determine the optimal number of decomposed spectra K. The results show that when the number of decomposed spectra K is 5, the identification accuracy for domestic sewage, surface water, and industrial wastewater from different sources is highest, with an overall accuracy of 98.81%. Additionally, the performance of the method was validated on mixed water samples (combinations of rainwater and domestic sewage, rainwater and surface water, and rainwater and industrial wastewater). The results indicate that the proposed method identifies the source of external water in rainwater with an accuracy of 98.99% and a detection time within 10 s. Therefore, the proposed method is a potential approach for rapid identification and traceability analysis of external water in rainwater pipe networks.
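
Of the pipeline's steps, MSC is a standard, self-contained transform: each spectrum is regressed on a reference spectrum (commonly the mean spectrum) and corrected with the fitted slope and intercept. A NumPy sketch:

    import numpy as np

    def msc(spectra, reference=None):
        """Multiplicative scatter correction: corrected = (x - intercept) / slope."""
        spectra = np.asarray(spectra, dtype=float)     # shape (n_samples, n_wavelengths)
        ref = spectra.mean(axis=0) if reference is None else reference
        corrected = np.empty_like(spectra)
        for i, x in enumerate(spectra):
            slope, intercept = np.polyfit(ref, x, 1)   # fit x ≈ intercept + slope * ref
            corrected[i] = (x - intercept) / slope
        return corrected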

10.
Interact J Med Res ; 13: e53672, 2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39133916

ABSTRACT

BACKGROUND: Mental disorders rank among the top 10 causes of disease burden globally. Generative artificial intelligence (GAI) has emerged as a promising technological advancement with significant potential in mental health care. Nevertheless, there is a scarcity of research examining the application landscape of GAI within this domain. OBJECTIVE: This review aims to summarize the current state of GAI knowledge and identify its key uses in the mental health domain by consolidating relevant literature. METHODS: Records were searched within 8 reputable sources, including the Web of Science, PubMed, IEEE Xplore, medRxiv, bioRxiv, Google Scholar, CNKI, and Wanfang databases, between 2013 and 2023. Our focus was on original, empirical research published in either English or Chinese that uses GAI technologies to benefit mental health. For an exhaustive search, we also checked the studies cited by relevant literature. Two reviewers were responsible for the data selection process, and all extracted data were synthesized and summarized for brief and in-depth analyses depending on the GAI approaches used (traditional retrieval and rule-based techniques vs advanced GAI techniques). RESULTS: In this review of 144 articles, 44 (30.6%) met the inclusion criteria for detailed analysis. Six key uses of advanced GAI emerged: mental disorder detection, counseling support, therapeutic application, clinical training, clinical decision-making support, and goal-driven optimization. Advanced GAI systems have mainly focused on therapeutic applications (n=19, 43%) and counseling support (n=13, 30%), with clinical training being the least common. Most studies (n=28, 64%) focused broadly on mental health, while specific conditions such as anxiety (n=1, 2%), bipolar disorder (n=2, 5%), eating disorders (n=1, 2%), posttraumatic stress disorder (n=2, 5%), and schizophrenia (n=1, 2%) received limited attention. Despite its prevalent use, the efficacy of ChatGPT in detecting mental disorders remains insufficient. In addition, 100 articles on traditional GAI approaches were found, indicating diverse areas where advanced GAI could enhance mental health care. CONCLUSIONS: This study provides a comprehensive overview of the use of GAI in mental health care, which serves as a valuable guide for future research, practical applications, and policy development in this domain. While GAI demonstrates promise in augmenting mental health care services, its inherent limitations emphasize its role as a supplementary tool rather than a replacement for trained mental health providers. A conscientious and ethical integration of GAI techniques is necessary, ensuring a balanced approach that maximizes benefits while mitigating potential challenges in mental health care practices.

11.
Surg Endosc ; 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39138679

ABSTRACT

BACKGROUND: Postoperative hypoparathyroidism is a major complication of thyroidectomy, occurring when the parathyroid glands are inadvertently damaged during surgery. Although intraoperative images are rarely used to train artificial intelligence (AI) because of their complex nature, AI may be trained to detect parathyroid glands intraoperatively using various augmentation methods. The purpose of this study was to train an effective AI model to detect parathyroid glands during thyroidectomy. METHODS: Video clips of the parathyroid gland were collected during thyroid lobectomy procedures. Confirmed parathyroid images were used to train three types of datasets according to augmentation status: baseline, geometric transformation, and generative adversarial network-based image inpainting. The primary outcome was the average precision of the AI in detecting parathyroid glands. RESULTS: 152 fine-needle aspiration-confirmed parathyroid gland images were acquired from 150 patients who underwent unilateral lobectomy. The average precision of the AI model in detecting parathyroid glands based on the baseline data was 77%. This performance was enhanced by applying both the geometric transformation and image inpainting augmentation methods, with the geometric transformation dataset showing a higher average precision (79%) than the image inpainting model (78.6%). When the model was subjected to external validation using a completely different thyroidectomy approach, the image inpainting method was more effective (46%) than both the geometric transformation (37%) and baseline (33%) methods. CONCLUSION: This AI model was found to be an effective and generalizable tool for the intraoperative identification of parathyroid glands during thyroidectomy, especially when aided by appropriate augmentation methods. Additional studies comparing model performance with surgeon identification, however, are needed to assess the true clinical relevance of this AI model.
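
For context, the geometric-transformation arm of the augmentation could look like the following torchvision sketch; the transform choices and parameters are illustrative, and for detection training the bounding boxes must be transformed consistently as well:

    from torchvision import transforms

    geometric_aug = transforms.Compose([
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.RandomRotation(degrees=15),
        transforms.RandomResizedCrop(size=640, scale=(0.8, 1.0)),
    ])
    # augmented = geometric_aug(frame)  # applied to each confirmed parathyroid frame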

12.
Sensors (Basel) ; 24(15)2024 Jul 27.
Article in English | MEDLINE | ID: mdl-39123927

ABSTRACT

The transmission environment of underwater wireless sensor networks is open, and important transmission data can be easily intercepted, interfered with, and tampered with by malicious nodes. Malicious nodes can be mixed into the network and are difficult to distinguish, especially in time-varying underwater environments. To address this issue, this article proposes a GAN-based trusted routing algorithm (GTR). GTR defines the trust feature attributes and trust evaluation matrix of underwater network nodes, constructs a trust evaluation model based on a generative adversarial network (GAN), and achieves malicious node detection by establishing a trust feature profile of a trusted node, which improves detection performance for malicious nodes in underwater networks under unlabeled and imbalanced training-data conditions. GTR combines the trust evaluation algorithm with a Q-learning-based adaptive routing algorithm to provide an optimal trusted data-forwarding route for underwater network applications, improving the security, reliability, and efficiency of data forwarding. GTR relies on the trust feature profile of trusted nodes to distinguish malicious nodes and can adaptively select the forwarding route based on the status of trusted candidate next-hop nodes, which enables GTR to better cope with the changing underwater transmission environment and more accurately detect malicious nodes, especially unknown malicious node intrusions, compared to baseline algorithms. Simulation experiments showed that, compared to baseline algorithms, GTR provides better malicious-node detection and data-forwarding performance. With 15% malicious nodes and 10% unknown malicious nodes mixed into the network, the detection rate of malicious nodes by the underwater network configured with GTR increased by 5.4%, the error detection rate decreased by 36.4%, the packet delivery rate increased by 11.0%, the energy tax decreased by 11.4%, and the network throughput increased by 20.4%.
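
The trust-gated, Q-learning-based next-hop selection described above can be sketched as follows; the trust threshold, learning rate, and reward design are assumptions, not GTR's published parameters:

    import random

    Q = {}                     # Q[(node, next_hop)] -> learned value
    ALPHA, GAMMA, EPS, TRUST_MIN = 0.5, 0.9, 0.1, 0.6

    def choose_next_hop(node, candidates, trust):
        trusted = [c for c in candidates if trust[c] >= TRUST_MIN]   # drop suspected malicious nodes
        if not trusted:
            return None
        if random.random() < EPS:                                    # exploration
            return random.choice(trusted)
        return max(trusted, key=lambda c: Q.get((node, c), 0.0))     # exploitation

    def update_q(node, hop, reward, next_candidates):
        # Standard Q-learning update; reward could combine delivery success and energy cost.
        best_next = max((Q.get((hop, c), 0.0) for c in next_candidates), default=0.0)
        old = Q.get((node, hop), 0.0)
        Q[(node, hop)] = old + ALPHA * (reward + GAMMA * best_next - old)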

13.
Sensors (Basel) ; 24(15)2024 Jul 28.
Article in English | MEDLINE | ID: mdl-39123942

ABSTRACT

The nowcasting of strong convective precipitation is in high demand and presents significant challenges, as it offers meteorological services to diverse socio-economic sectors to prevent catastrophic weather events, accompanied by strong convective precipitation, from causing substantial economic losses and human casualties. With the accumulation of dual-polarization radar data, data-driven deep learning models have been widely applied to precipitation nowcasting. These models exhibit certain limitations: the evolution method is prone to accumulating errors throughout the iterative process (where multiple autoregressive models generate future motion fields and intensity residuals and then implicitly iterate to yield predictions), and the "regression to average" issue of autoregressive models leads to "blurring". The evolution method's generator is a two-stage model: in the initial stage, the generator employs the evolution method to produce provisional forecasted data; in the subsequent stage, the generator reprocesses the provisional forecasted data. Although the evolution method's generator is a generative adversarial network, its adversarial strategy ignores the significance of the provisional forecasted data. Therefore, this study proposes an Adversarial Autoregressive Network (AANet). Firstly, the forecasted data are generated via two-stage generators (FURENet directly produces the provisional forecasted data, and the Semantic Synthesis Model reprocesses them); subsequently, a structural similarity loss (SSIM loss) is utilized to mitigate the influence of the "regression to average" issue; finally, a two-stage adversarial (Tadv) strategy is adopted to help the two-stage generators produce more realistic and highly similar data. It has been experimentally verified that AANet outperforms NowcastNet in nowcasting the next 1 h, with a reduction of 0.0763 in normalized error (NE), 0.377 in root mean square error (RMSE), and 4.2% in false alarm rate (FAR), as well as an enhancement of 1.45 in peak signal-to-noise ratio (PSNR), 0.0208 in SSIM, 5.78% in critical success index (CSI), 6.25% in probability of detection (POD), and 5.7% in F1.
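
The categorical scores quoted above (CSI, POD, FAR) come from a binary exceedance contingency table; together with RMSE they can be computed as in this sketch (the rain-rate threshold is illustrative):

    import numpy as np

    def nowcast_scores(pred, obs, thr=1.0):
        p, o = pred >= thr, obs >= thr
        hits = np.sum(p & o)
        misses = np.sum(~p & o)
        false_alarms = np.sum(p & ~o)
        csi = hits / max(hits + misses + false_alarms, 1)   # critical success index
        pod = hits / max(hits + misses, 1)                   # probability of detection
        far = false_alarms / max(hits + false_alarms, 1)     # false alarm ratio
        rmse = float(np.sqrt(np.mean((pred - obs) ** 2)))
        return {"CSI": csi, "POD": pod, "FAR": far, "RMSE": rmse}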

14.
J Biomed Opt ; 29(8): 086003, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39099678

ABSTRACT

Significance: Accurate identification of epidermal cells on reflectance confocal microscopy (RCM) images is important in the study of epidermal architecture and topology in both healthy and diseased skin. However, analysis of these images is currently done manually and is therefore time-consuming, subject to human error and inter-expert interpretation, and hindered by low image quality due to noise and heterogeneity. Aim: We aimed to design an automated pipeline for the analysis of epidermal structure from RCM images. Approach: Two prior attempts have been made at automatically localizing epidermal cells, called keratinocytes, on RCM images: the first is based on a rotationally symmetric error-function mask, and the second on cell morphological features. Here, we propose a dual-task network to automatically identify keratinocytes on RCM images. Each task consists of a cycle generative adversarial network. The first task translates real RCM images into binary images, thus learning the noise and texture model of RCM images, whereas the second task maps Gabor-filtered RCM images into binary images, learning the epidermal structure visible on RCM images. The combination of the two tasks allows one task to constrict the solution space of the other, thus improving overall results. We refine our cell identification by applying the pre-trained StarDist algorithm to detect star-convex shapes, thus closing any incomplete membranes and separating neighboring cells. Results: The results are evaluated both on simulated data and on manually annotated real RCM data. Accuracy is measured using recall and precision metrics, summarized by the F1-score. Conclusions: We demonstrate that the proposed fully unsupervised method successfully identifies keratinocytes on RCM images of the epidermis, with an accuracy on par with experts' cell identification; it is not constrained by limited available annotated data and can be extended to images acquired with various imaging techniques without retraining.


Subject(s)
Epidermis, Keratinocytes, Microscopy, Confocal, Humans, Microscopy, Confocal/methods, Epidermis/diagnostic imaging, Keratinocytes/cytology, Image Processing, Computer-Assisted/methods, Algorithms, Epidermal Cells, Neural Networks, Computer, Unsupervised Machine Learning
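
The Gabor-filtered input for the second task can be produced with scikit-image; the frequency and the four orientations below are illustrative choices, not the authors' settings:

    import numpy as np
    from skimage import io, filters

    img = io.imread("rcm_slice.png", as_gray=True)
    responses = [filters.gabor(img, frequency=0.1, theta=t)[0]    # real part of the response
                 for t in np.linspace(0, np.pi, 4, endpoint=False)]
    gabor_map = np.max(responses, axis=0)                          # strongest orientation response
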
15.
Comput Biol Med ; 180: 108958, 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39094325

ABSTRACT

Hematoxylin and eosin (H&E) staining is a crucial technique for diagnosing glioma, allowing direct observation of tissue structures. However, the H&E staining workflow necessitates intricate processing, specialized laboratory infrastructure, and specialist pathologists, rendering it expensive, labor-intensive, and time-consuming. In view of these considerations, we combine deep learning with hyperspectral imaging, aiming to accurately and rapidly convert hyperspectral images into virtual H&E-stained images. The method overcomes the limitations of H&E staining by capturing tissue information at different wavelengths, providing comprehensive and detailed tissue composition information comparable to real H&E staining. In comparison with various generator structures, the U-Net exhibits substantial overall advantages, as evidenced by a mean structural similarity index measure (SSIM) of 0.7731 and a peak signal-to-noise ratio (PSNR) of 23.3120, as well as the shortest training and inference times. A comprehensive software system for virtual H&E staining, which integrates CCD control, microscope control, and virtual H&E staining technology, was developed to facilitate fast intraoperative imaging, promote disease diagnosis, and accelerate the development of medical automation. The platform reconstructs large-scale virtual H&E-stained images of gliomas at a high speed of 3.81 mm2/s. This innovative approach will pave the way for a novel, expedited route in histological staining.

16.
Technol Health Care ; 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38968065

ABSTRACT

BACKGROUND: Medical imaging techniques have advanced to the point where security has become a basic requirement for all applications, to protect data and its transmission over the internet. Clinical images hold personal and sensitive patient data, and their disclosure infringes on patients' right to privacy and carries legal ramifications for hospitals. OBJECTIVE: In this research, a novel deep learning-based key generation network (Deep-KEDI) is designed to produce the secure key used for encrypting and decrypting medical images. METHODS: Initially, medical images are pre-processed by adding speckle noise via the discrete ripplet transform before encryption; the noise is removed after decryption for additional security. In the Deep-KEDI model, the zigzag generative adversarial network (ZZ-GAN) is used as the learning network to generate the secret key. RESULTS: The proposed ZZ-GAN enables secure encryption by generating three different zigzag patterns (vertical, horizontal, diagonal) of encrypted images with its key. The zigzag cipher uses an XOR operation in both encryption and decryption. Encrypting the original image requires a secret key generated during encryption. After identification, the encrypted image is decrypted using the generated key to reverse the encryption process. Finally, the speckle noise is removed to reconstruct the original image. CONCLUSION: According to the experiments, the Deep-KEDI model generates secret keys with an information entropy of 7.45, which is particularly suitable for securing medical images.
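
As a toy illustration of an XOR cipher over a zigzag (boustrophedon) pixel ordering for a 2D grayscale image; the actual Deep-KEDI key comes from the ZZ-GAN, and the real scheme also includes the ripplet-domain speckle-noise step:

    import numpy as np

    def zigzag_flatten(img):
        rows = [r[::-1] if i % 2 else r for i, r in enumerate(img)]   # alternate row direction
        return np.concatenate(rows)

    def zigzag_restore(flat, shape):
        img = flat.reshape(shape).copy()
        img[1::2] = img[1::2, ::-1]                                   # undo the reversed rows
        return img

    def xor_crypt(img, key):                                          # same call encrypts and decrypts
        flat = zigzag_flatten(img.astype(np.uint8))
        out = flat ^ np.resize(key.astype(np.uint8), flat.size)       # XOR with a repeated key stream
        return zigzag_restore(out, img.shape)

Applying xor_crypt twice with the same key returns the original image, since XOR is an involution.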

17.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38980369

ABSTRACT

Recent studies have extensively used deep learning algorithms to analyze gene expression to predict disease diagnosis, treatment effectiveness, and survival outcomes. Survival analysis studies of diseases with high mortality rates, such as cancer, are indispensable. However, deep learning models are plagued by overfitting owing to the limited sample size relative to the large number of genes. Consequently, the latest style-transfer deep generative models have been implemented to generate gene expression data, but these models are limited in their applicability for clinical purposes because they generate only transcriptomic data. Therefore, this study proposes ctGAN, which enables the combined transformation of gene expression and survival data using a generative adversarial network (GAN). ctGAN improves survival analysis by augmenting data through style transformations between breast cancer and 11 other cancer types. We evaluated the concordance index (C-index) enhancements relative to previous models to demonstrate its superiority. Performance improvements were observed in nine of the 11 cancer types, and ctGAN outperformed previous models in seven of them, with colon adenocarcinoma (COAD) exhibiting the greatest improvement (median C-index increase of ~15.70%). Furthermore, integrating the generated COAD data improved the log-rank p-value (0.041) compared with using only the real COAD data (p = 0.797). Based on the data distribution, we demonstrated that the model generates highly plausible data. In clustering evaluation, ctGAN exhibited the highest performance in most cases (89.62%). These findings suggest that ctGAN can be meaningfully utilized to predict disease progression and select personalized treatments in the medical field.


Subject(s)
Deep Learning, Humans, Survival Analysis, Algorithms, Neoplasms/genetics, Neoplasms/mortality, Gene Expression Profiling/methods, Neural Networks, Computer, Computational Biology/methods, Breast Neoplasms/genetics, Breast Neoplasms/mortality, Female, Gene Expression Regulation, Neoplastic
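
The C-index reported above can be computed, for example, with the lifelines package; the toy arrays below are illustrative:

    import numpy as np
    from lifelines.utils import concordance_index

    times = np.array([5.0, 12.0, 8.0, 30.0])       # observed follow-up times
    events = np.array([1, 0, 1, 1])                # 1 = event observed, 0 = censored
    risk_scores = np.array([2.1, 0.4, 1.7, 0.2])   # model output: higher = higher risk

    # lifelines expects higher scores to mean longer survival, so negate the risk.
    cindex = concordance_index(times, -risk_scores, events)
    print(f"C-index: {cindex:.3f}")
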
18.
Front Immunol ; 15: 1404640, 2024.
Article in English | MEDLINE | ID: mdl-39007128

ABSTRACT

Introduction: Deep learning (DL) models predicting biomarker expression in images of hematoxylin and eosin (H&E)-stained tissues can improve access to multi-marker immunophenotyping, which is crucial for therapeutic monitoring, biomarker discovery, and personalized treatment development. Conventionally, these models are trained on ground truth cell labels derived from IHC-stained tissue sections adjacent to the H&E-stained ones, which may be less accurate than labels from the same section. Although many such DL models have been developed, the impact of the ground truth cell-label derivation method on their performance has not been studied. Methodology: In this study, we assess the impact of cell-label derivation on H&E model performance, with CD3+ T-cells in lung cancer tissues as a proof of concept. We compare two Pix2Pix generative adversarial network (P2P-GAN)-based virtual staining models: one trained with cell labels obtained from the same tissue section as the H&E-stained section (the same-section model) and one trained on cell labels from an adjacent tissue section (the serial-section model). Results: We show that the same-section model exhibited significantly improved prediction performance compared to the serial-section model. Furthermore, the same-section model outperformed the serial-section model in stratifying lung cancer patients within a public lung cancer cohort based on survival outcomes, demonstrating its potential clinical utility. Discussion: Collectively, our findings suggest that employing ground truth cell labels obtained through the same-section approach boosts immunophenotyping DL solutions.


Subject(s)
Deep Learning, Immunophenotyping, Lung Neoplasms, Staining and Labeling, Humans, Lung Neoplasms/immunology, Lung Neoplasms/pathology, Staining and Labeling/methods, Biomarkers, Tumor/metabolism, Male, T-Lymphocytes/immunology, Female
19.
Med Phys ; 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39008794

ABSTRACT

BACKGROUND: Vessel-wall volume and localized three-dimensional ultrasound (3DUS) metrics are sensitive to changes in carotid atherosclerosis in response to medical/dietary interventions. Manual segmentation of the media-adventitia boundary (MAB) and lumen-intima boundary (LIB) required to obtain these metrics is time-consuming and prone to observer variability. Although supervised deep-learning segmentation models have been proposed, training these models requires a sizeable manually segmented training set, making larger clinical studies prohibitive. PURPOSE: We aim to develop a method to optimize pre-trained segmentation models without requiring manual segmentation to supervise the fine-tuning process. METHODS: We developed an adversarial framework called the unsupervised shape-and-texture generative adversarial network (USTGAN) to fine-tune a convolutional neural network (CNN) pre-trained on a source dataset for accurate segmentation of a target dataset. The network integrates a novel texture-based discriminator with a shape-based discriminator, which together provide feedback for the CNN to segment the target images in a similar way to the source images. The texture-based discriminator increases the accuracy of the CNN in locating the artery, thereby lowering the number of failed segmentations. Failed segmentation was further reduced by a self-checking mechanism that flags longitudinal discontinuity of the artery and by self-correction strategies involving surface interpolation followed by case-specific tuning of the CNN. The U-Net was pre-trained on a source dataset of 224 3DUS volumes, with 136, 44, and 44 volumes in the training, validation, and testing sets. The training of USTGAN involved the same 136 training volumes from the source dataset and 533 volumes from the target dataset. No segmented boundaries for the target cohort were available for training USTGAN. The validation and testing of USTGAN involved 118 and 104 volumes from the target cohort, respectively. Segmentation accuracy was quantified by the Dice similarity coefficient (DSC) and the incorrect localization rate (ILR). Tukey's Honestly Significant Difference multiple comparison test was employed to quantify differences in DSCs between models and settings, with p ≤ 0.05 considered statistically significant. RESULTS: USTGAN attained a DSC of 85.7 ± 13.0% for the LIB and 86.2 ± 10.6% for the MAB, improving from the baseline performance of 74.6 ± 30.7% for the LIB (p < 10⁻¹²) and 75.7 ± 28.9% for the MAB (p < 10⁻¹²). Our approach outperformed six state-of-the-art domain-adaptation models (MAB: p ≤ 3.63 × 10⁻⁷, LIB: p ≤ 9.34 × 10⁻⁸). The proposed USTGAN also had the lowest ILR among the methods compared (LIB: 2.5%, MAB: 1.7%). CONCLUSION: Our framework improves segmentation generalizability, thereby facilitating efficient carotid disease monitoring in multicenter trials and in clinics with less expertise in 3DUS imaging.
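
The DSC used above for the MAB and LIB is the standard overlap measure between a predicted and a manual mask; a minimal NumPy sketch:

    import numpy as np

    def dice(pred, truth, eps=1e-8):
        """DSC = 2 |A ∩ B| / (|A| + |B|) for two binary masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        return 2.0 * np.sum(pred & truth) / (np.sum(pred) + np.sum(truth) + eps)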

20.
Diagnostics (Basel) ; 14(13)2024 Jun 21.
Article in English | MEDLINE | ID: mdl-39001209

ABSTRACT

During neurosurgical procedures, the accuracy of the neuro-navigation system is affected by the brain-shift phenomenon. One popular strategy is to compensate for brain shift by registering intraoperative ultrasound (iUS) with pre-operative magnetic resonance (MR) scans. This requires a satisfactory multimodal image registration method, which is challenging due to the low image quality of ultrasound and the unpredictable nature of brain deformation during surgery. In this paper, we propose an automatic unsupervised end-to-end MR-iUS registration approach named the Dual Discriminator Bayesian Generative Adversarial Network (D2BGAN). The proposed network consists of a generator and two discriminators, optimized by a Bayesian loss function that improves the functionality of the generator, with a mutual-information loss added to the discriminators for similarity measurement. Extensive validation was performed on the RESECT and BITE datasets, where the mean target registration error (mTRE) of MR-iUS registration using D2BGAN was 0.75 ± 0.3 mm. D2BGAN illustrated a clear advantage by achieving an 85% improvement in mTRE over the initial error. Moreover, the results confirmed that the proposed Bayesian loss function, rather than the typical loss function, improved the accuracy of MR-iUS registration by 23%. The improvement in registration accuracy was further enhanced by the preservation of the intensity and anatomical information of the input images.
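
The mTRE quoted above is the mean Euclidean distance between corresponding landmarks after applying the estimated transform; a minimal sketch, where the transform callable is a placeholder for the registration output:

    import numpy as np

    def mtre(moving_landmarks, fixed_landmarks, transform):
        warped = transform(moving_landmarks)            # (N, 3) registered landmark positions
        return float(np.mean(np.linalg.norm(warped - fixed_landmarks, axis=1)))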
