Results 1 - 20 of 42
1.
Brain Sci ; 14(5)2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38790415

ABSTRACT

Driver fatigue poses a significant threat to global traffic safety, motivating the development of effective fatigue monitoring methods. This research introduces a conditional generative adversarial network with a classification head that integrates convolutional and attention mechanisms (CA-ACGAN), designed to identify fatigued driving states from electroencephalography (EEG) signals. First, the study constructs a 4D feature representation of drivers' fatigue state that captures the frequency, spatial, and temporal dimensions of the EEG signals. It then presents the CA-ACGAN framework, which combines attention mechanisms, bottleneck residual blocks, and a Transformer component to refine EEG signal processing. Using a conditional generative adversarial network equipped with a classification head, the framework distinguishes fatigue states effectively while addressing the scarcity of authentic data by generating high-quality synthetic data. Empirical results show that the CA-ACGAN model surpasses several existing methods on the fatigue detection task using the SEED-VIG public dataset. Compared with leading GAN models, it is also clearly superior in producing high-quality data. This work confirms the CA-ACGAN model's utility for fatigued-driving identification and suggests new directions for deep learning applications in time-series data generation and processing.
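A minimal sketch (not the authors' code) of the core architectural idea named above: an auxiliary-classifier-style conditional GAN discriminator with both an adversarial head and a classification head. Input channel count and the number of fatigue classes are assumptions.

```python
import torch
import torch.nn as nn

class TwoHeadDiscriminator(nn.Module):
    def __init__(self, in_channels=4, n_classes=3):  # 4D features, 3 fatigue states (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(64, 1)           # real vs. synthetic sample
        self.cls_head = nn.Linear(64, n_classes)   # fatigue-state label

    def forward(self, x):
        h = self.features(x)
        return self.adv_head(h), self.cls_head(h)
```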

2.
Electromagn Biol Med ; 43(1-2): 81-94, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38461438

ABSTRACT

This research focuses on improving the detection and classification of brain tumors in MRI images using a method called Brain Tumor Classification using a Dual-Discriminator Conditional Generative Adversarial Network (DDCGAN). The proposed system is implemented in MATLAB. Brain images are taken from the BraTS MRI dataset and preprocessed with structural interval gradient filtering to remove noise and improve image quality. The preprocessed outputs are passed to feature extraction, where features are extracted by the Empirical Wavelet Transform (EWT) and then given to the DDCGAN, which classifies the brain images into glioma, meningioma, pituitary gland, and normal. The weight parameter of the DDCGAN is optimized using Border Collie Optimization (BCO), a metaheuristic approach for real-world optimization problems, which maximizes detection accuracy and reduces computational time. Experimental results demonstrate that the proposed system achieves a high sensitivity of 99.58%. The BCO-DDCGAN-MRI-BTC method outperforms existing techniques such as Kernel Basis SVM (KSVM-HHO-BTC), Joint Training of Two-Channel Deep Neural Network (JT-TCDNN-BTC), and YOLOv2 with a Convolutional Neural Network (YOLOv2-CNN-BTC) in terms of precision and sensitivity. The findings indicate that the proposed method enhances the accuracy of brain tumor classification while reducing computational time and errors.
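Border Collie Optimization itself is not reproduced here; the hedged sketch below shows only the generic pattern a population-based metaheuristic follows when tuning a network weight parameter against validation performance. The evaluate() function, the parameter bounds, and the update rule are all hypothetical stand-ins.

```python
import numpy as np

def evaluate(weight):
    """Placeholder: train/score the DDCGAN with this weight, return a score."""
    return -((weight - 0.7) ** 2)  # dummy objective for illustration only

rng = np.random.default_rng(0)
population = rng.uniform(0.0, 1.0, size=10)    # candidate weight values
for _ in range(50):                            # optimization iterations
    scores = np.array([evaluate(w) for w in population])
    best = population[scores.argmax()]
    # move candidates toward the current best, with exploration noise
    population = np.clip(best + 0.1 * rng.standard_normal(population.shape), 0.0, 1.0)
print("best weight:", best)
```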


Subjects
Brain Neoplasms, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Magnetic Resonance Imaging/methods, Humans, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/classification, Brain Neoplasms/pathology, Computer-Assisted Image Processing/methods, Neural Networks (Computer), Wavelet Analysis
3.
Neural Netw ; 169: 698-712, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37976594

ABSTRACT

Synthetic aperture radar (SAR) images are widely used in remote sensing, but interpreting them can be challenging due to their intrinsic speckle noise and grayscale nature. To address this issue, SAR colorization has emerged as a research direction that colorizes grayscale SAR images while preserving the original spatial and radiometric information. However, this research field is still in its early stages and has notable limitations. In this paper, we propose a full research line for supervised learning-based approaches to SAR colorization. Our approach includes a protocol for generating synthetic color SAR images, several baselines, and an effective method based on the conditional generative adversarial network (cGAN) for SAR colorization. We also propose numerical assessment metrics for the problem at hand. To our knowledge, this is the first attempt to propose a research line for SAR colorization that includes a protocol, a benchmark, and a complete performance evaluation. Extensive tests demonstrate the effectiveness of our proposed cGAN-based network for SAR colorization. The code is available at https://github.com/shenkqtx/SAR-Colorization-Benchmarking-Protocol.
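A minimal sketch of the standard cGAN image-to-image generator objective commonly used for colorization tasks like this one: an adversarial term plus a pixel-wise L1 term. The weighting lambda and network details are assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def generator_loss(disc_fake_logits, fake_rgb, real_rgb, lam=100.0):
    # adversarial term: the generator tries to make the discriminator output "real"
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    # reconstruction term: keeps the colorization close to the reference color image
    l1 = F.l1_loss(fake_rgb, real_rgb)
    return adv + lam * l1
```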


Subjects
Benchmarking, Deep Learning, Radar, Knowledge
4.
PeerJ Comput Sci ; 9: e1667, 2023.
Article in English | MEDLINE | ID: mdl-38077569

ABSTRACT

Brain tumors have become one of the leading causes of death worldwide in recent years, affecting many individuals annually. They are characterized by abnormal or irregular growth of brain tissue that can spread to nearby tissue and eventually throughout the brain. Although several traditional machine learning and deep learning techniques have been developed for detecting and classifying brain tumors, they do not always provide an accurate and timely diagnosis. This study proposes a conditional generative adversarial network (CGAN) that leverages the fine-tuning of a convolutional neural network (CNN) to achieve more precise detection of brain tumors. The CGAN comprises two parts, a generator and a discriminator, whose outputs are used as inputs for fine-tuning the CNN model. The publicly available brain tumor MRI dataset on Kaggle was used to conduct experiments on Datasets 1 and 2. Precision, specificity, sensitivity, F1-score, and accuracy were used to evaluate the results. Compared to existing techniques, the proposed CGAN model achieved an accuracy of 0.93 on Dataset 1 and 0.97 on Dataset 2.
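A hedged sketch of one alternating training step of a label-conditioned GAN like the one described above. It assumes generator G(z, labels) and discriminator D(x, labels) networks and their optimizers already exist; the latent size and loss choice are assumptions.

```python
import torch
import torch.nn.functional as F

def cgan_step(G, D, opt_G, opt_D, real_imgs, labels, z_dim=100):
    b = real_imgs.size(0)
    fake_imgs = G(torch.randn(b, z_dim), labels)

    # discriminator update: push real toward 1, fake toward 0
    opt_D.zero_grad()
    loss_D = (F.binary_cross_entropy_with_logits(D(real_imgs, labels),
                                                 torch.ones(b, 1)) +
              F.binary_cross_entropy_with_logits(D(fake_imgs.detach(), labels),
                                                 torch.zeros(b, 1)))
    loss_D.backward(); opt_D.step()

    # generator update: try to fool the discriminator
    opt_G.zero_grad()
    loss_G = F.binary_cross_entropy_with_logits(D(fake_imgs, labels),
                                                torch.ones(b, 1))
    loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```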

5.
J Med Imaging (Bellingham) ; 10(5): 054503, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37840849

ABSTRACT

Purpose: Generative adversarial networks (GANs) can synthesize plausible-looking images. We previously showed that a conditional GAN (CGAN) can simulate breast mammograms with normal, healthy appearances and can help detect mammographically-occult (MO) cancer. However, like other GANs, CGANs can suffer from various artifacts, e.g., checkerboard artifacts, that may affect the quality of the synthesized image and the performance of MO cancer detection. We explored the types of GAN artifacts that arise in mammogram simulation and their effect on MO cancer detection. Approach: We first trained a CGAN using full-field digital mammograms (FFDMs) of 1366 women with normal/healthy breasts. We then tested the trained CGAN on an independent MO cancer dataset of 333 women with dense breasts (97 MO cancers). We trained a convolutional neural network (CNN) on the MO cancer dataset, in which real and simulated mammograms were fused, to identify women with MO cancer. A radiologist who was independent of the development of the CGAN algorithms then evaluated the entire MO cancer dataset to identify and annotate artifacts in the simulated mammograms. Results: We found four artifact types, checkerboard, breast boundary, nipple-areola complex, and black spots around calcifications, with an overall incidence rate above 69% (individual incidence rates ranged from 9% to 53%) across both normal and MO cancer samples. We then evaluated their potential impact on MO cancer detection. Even though various artifacts existed in the simulated mammograms, they still provided complementary information for MO cancer detection when combined with the real mammograms. Conclusions: Artifacts were pervasive in the CGAN-simulated mammograms, but they did not negatively affect our MO cancer detection algorithm; the simulated mammograms still provided complementary information for MO cancer detection when combined with real mammograms.

6.
Sensors (Basel) ; 23(15)2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37571533

ABSTRACT

Structural-response reconstruction is important for enriching monitoring data and better understanding a structure's operating status. This paper proposes a data-driven structural-response reconstruction approach that generates response data via a convolutional process. A conditional generative adversarial network (cGAN) is employed to establish the spatial relationship between the global and local responses in the form of a response nephogram, making the reconstruction process independent of physical modeling of the engineering problem. Validation via a laboratory experiment on a steel frame and an in situ bridge test shows that the reconstructed responses are highly accurate. Theoretical analysis shows that reconstruction accuracy rises as the sensor quantity increases and plateaus once the optimal sensor arrangement is reached.

7.
J Anesth Analg Crit Care ; 3(1): 19, 2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37386680

ABSTRACT

BACKGROUND: The utilization of artificial intelligence (AI) in healthcare has significant potential to revolutionize the delivery of medical services, particularly in the field of telemedicine. In this article, we investigate the capabilities of a specific deep learning model, a generative adversarial network (GAN), and explore its potential for enhancing the telemedicine approach to cancer pain management. MATERIALS AND METHODS: We compiled a structured dataset comprising demographic and clinical variables from 226 patients and 489 telemedicine visits for cancer pain management. A deep learning model, specifically a conditional GAN, was employed to generate synthetic samples that closely resemble real individuals in terms of their characteristics. Subsequently, four machine learning (ML) algorithms were used to assess the variables associated with a higher number of remote visits. RESULTS: The generated dataset exhibits a distribution comparable to the reference dataset for all considered variables, including age, number of visits, tumor type, performance status, characteristics of metastasis, opioid dosage, and type of pain. Among the algorithms tested, random forest demonstrated the highest performance in predicting a higher number of remote visits, achieving an accuracy of 0.8 on the test data. The ML-based simulations indicated that individuals younger than 45 years old and those experiencing breakthrough cancer pain may require an increased number of telemedicine-based clinical evaluations. CONCLUSION: As the advancement of healthcare processes relies on scientific evidence, AI techniques such as GANs can play a vital role in bridging knowledge gaps and accelerating the integration of telemedicine into clinical practice. Nonetheless, it is crucial to carefully address the limitations of these approaches.
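A hedged sketch of the downstream ML step described above: a random forest predicting whether a patient needs a high number of remote visits from tabular features. The feature columns, labels, and split are hypothetical stand-ins, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(226, 5))       # e.g. age, opioid dose, ... (assumed columns)
y = rng.integers(0, 2, size=226)    # 1 = high number of remote visits (assumed)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```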

8.
Math Biosci Eng ; 20(6): 9728-9758, 2023 Mar 24.
Article in English | MEDLINE | ID: mdl-37322909

ABSTRACT

In order to generate high-quality single-photon emission computed tomography (SPECT) images under low-dose acquisition mode, a sinogram denoising method was studied for suppressing random oscillation and enhancing contrast in the projection domain. A conditional generative adversarial network with cross-domain regularization (CGAN-CDR) is proposed for low-dose SPECT sinogram restoration. The generator stepwise extracts multiscale sinusoidal features from a low-dose sinogram, which are then rebuilt into a restored sinogram. Long skip connections are introduced into the generator, so that low-level features can be better shared and reused, and the spatial and angular sinogram information can be better recovered. A patch discriminator is employed to capture detailed sinusoidal features within sinogram patches, so that detailed features in local receptive fields can be effectively characterized. Meanwhile, a cross-domain regularization is developed in both the projection and image domains. Projection-domain regularization directly constrains the generator by penalizing the difference between generated and label sinograms. Image-domain regularization imposes a similarity constraint on the reconstructed images, which ameliorates the ill-posedness of the problem and serves as an indirect constraint on the generator. By adversarial learning, the CGAN-CDR model achieves high-quality sinogram restoration. Finally, the preconditioned alternating projection algorithm with total variation regularization is adopted for image reconstruction. Extensive numerical experiments show that the proposed model performs well in low-dose sinogram restoration. Visually, CGAN-CDR performs well in noise and artifact suppression, contrast enhancement, and structure preservation, particularly in low-contrast regions. Quantitatively, CGAN-CDR obtains superior results in both global and local image quality metrics. In robustness analysis, CGAN-CDR better recovers the detailed bone structure of the reconstructed image for a higher-noise sinogram. This work demonstrates the feasibility and effectiveness of CGAN-CDR in low-dose SPECT sinogram restoration. CGAN-CDR yields significant quality improvement in both projection and image domains, which enables potential applications of the proposed method in real low-dose studies.
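A hedged sketch of the cross-domain regularization idea: the generator is penalized in the projection domain (sinogram difference) and, indirectly, in the image domain through a differentiable reconstruction operator. The operator here is a stand-in linear map, not a real SPECT reconstructor, and the loss weights are assumptions.

```python
import torch
import torch.nn.functional as F

def cdr_loss(adv_loss, gen_sino, label_sino, recon_op, lp=1.0, li=0.5):
    proj_reg = F.mse_loss(gen_sino, label_sino)                      # projection domain
    img_reg = F.mse_loss(recon_op(gen_sino), recon_op(label_sino))   # image domain
    return adv_loss + lp * proj_reg + li * img_reg

# stand-in differentiable "reconstruction": a fixed linear operator over sinogram bins
recon = torch.nn.Linear(128, 128, bias=False)
loss = cdr_loss(torch.tensor(0.5), torch.rand(2, 64, 128), torch.rand(2, 64, 128), recon)
```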


Assuntos
Tomografia Computadorizada de Emissão de Fóton Único , Tomografia Computadorizada por Raios X , Processamento de Imagem Assistida por Computador/métodos , Imagens de Fantasmas , Algoritmos
9.
Build Simul ; : 1-20, 2023 Mar 13.
Article in English | MEDLINE | ID: mdl-37359832

ABSTRACT

Prediction of indoor airflow distribution often relies on high-fidelity, computationally intensive computational fluid dynamics (CFD) simulations. Artificial intelligence (AI) models trained by CFD data can be used for fast and accurate prediction of indoor airflow, but current methods have limitations, such as only predicting limited outputs rather than the entire flow field. Furthermore, conventional AI models are not always designed to predict different outputs based on a continuous input range, and instead make predictions for one or a few discrete inputs. This work addresses these gaps using a conditional generative adversarial network (CGAN) model approach, which is inspired by current state-of-the-art AI for synthetic image generation. We create a new Boundary Condition CGAN (BC-CGAN) model by extending the original CGAN model to generate 2D airflow distribution images based on a continuous input parameter, such as a boundary condition. Additionally, we design a novel feature-driven algorithm to strategically generate training data, with the goal of minimizing the amount of computationally expensive data while ensuring training quality of the AI model. The BC-CGAN model is evaluated for two benchmark airflow cases: an isothermal lid-driven cavity flow and a non-isothermal mixed convection flow with a heated box. We also investigate the performance of the BC-CGAN models when training is stopped based on different levels of validation error criteria. The results show that the trained BC-CGAN model can predict the 2D distribution of velocity and temperature with less than 5% relative error and up to about 75,000 times faster when compared to reference CFD simulations. The proposed feature-driven algorithm shows potential for reducing the amount of data and epochs required to train the AI models while maintaining prediction accuracy, particularly when the flow changes non-linearly with respect to an input.
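A hedged sketch of the continuous-conditioning idea behind the BC-CGAN: the scalar boundary condition is embedded and concatenated with the noise vector, so the same generator can be swept over a continuous input range at inference time. All sizes and the layer layout are assumptions.

```python
import torch
import torch.nn as nn

class BCGenerator(nn.Module):
    def __init__(self, z_dim=64, out_hw=32):
        super().__init__()
        self.embed = nn.Linear(1, 16)              # embed the scalar boundary condition
        self.net = nn.Sequential(
            nn.Linear(z_dim + 16, 256), nn.ReLU(),
            nn.Linear(256, out_hw * out_hw), nn.Tanh(),
        )
        self.out_hw = out_hw

    def forward(self, z, bc):                      # bc: shape (batch, 1)
        h = torch.cat([z, self.embed(bc)], dim=1)
        return self.net(h).view(-1, 1, self.out_hw, self.out_hw)

# inference: sweep a continuous boundary-condition range with one trained network
G = BCGenerator()
z = torch.randn(5, 64)
for t in torch.linspace(0.0, 1.0, 5):
    field = G(z, t.repeat(5, 1))                   # one 2D "flow image" per BC value
```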

10.
Cureus ; 15(4): e37349, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37182031

ABSTRACT

Purpose: Blood perfusion is an important physiological parameter that can be quantitatively assessed using various imaging techniques. Blood flow prediction in laser speckle contrast imaging is important for medical diagnosis, drug development, tissue engineering, biomedical research, and continuous monitoring. Deep learning is a promising approach for predicting blood flow under varying conditions, but it carries a high learning cost for real-world scenarios with variable flow values derived from multi-exposure laser speckle contrast imaging (MECI) data. This research presents a generative adversarial network (GAN) for reliable prediction of blood flows in diverse MECI scenarios. Method: We propose a time-efficient approach using a low frame rate camera to predict blood flow in MECI data with a conditional GAN architecture, applied both to the entire flow field and to a specific region of interest (ROI) within it. Results: The conditional GAN generalizes better in predicting blood flow in MECI than classification-based deep learning approaches, with an accuracy of 98.5% and a relative mean error of 1.57% for the whole field and 7.53% for a specific ROI. Conclusion: The conditional GAN is highly effective in predicting blood flows in MECI, whether over the entire field or within an ROI, compared with other deep learning approaches.

11.
Sensors (Basel) ; 23(9)2023 Apr 28.
Article in English | MEDLINE | ID: mdl-37177571

ABSTRACT

Accurate prediction of wind power is of great significance to the stable operation of the power system and the healthy development of the wind power industry. To further improve the accuracy of ultra-short-term wind power forecasting, a forecasting method based on a CGAN-CNN-LSTM algorithm is proposed. First, a conditional generative adversarial network (CGAN) is used to fill in missing segments of the dataset. Then, a convolutional neural network (CNN) extracts features from the data and is combined with a long short-term memory network (LSTM) to form a feature extraction module; an attention mechanism after the LSTM assigns weights to features and accelerates model convergence, yielding the combined CGAN-CNN-LSTM ultra-short-term wind power forecasting model. Finally, the position and function of each sensor in the Sole du Moulin Vieux wind farm in France are introduced, and, using the wind farm's sensor observations as a test set, the CGAN-CNN-LSTM model is compared with CNN-LSTM, LSTM, and SVM models to verify its feasibility. To demonstrate the generality of the model and the capability of the CGAN, a controlled experiment is also conducted on a dataset from a wind farm in China using a CNN-LSTM model combined with linear interpolation. The final test results show that the CGAN-CNN-LSTM model is not only more accurate in its predictions but also applicable across regions, offering good value for the development of wind power.
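A hedged sketch of the forecasting backbone described above: Conv1d feature extraction, an LSTM, and a simple attention layer that weights the LSTM outputs over time before the final projection. All dimensions are assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTMAttention(nn.Module):
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 32, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, 1)           # next-step wind power

    def forward(self, x):                          # x: (batch, time, features)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        out, _ = self.lstm(h)                      # (batch, time, hidden)
        w = torch.softmax(self.attn(out), dim=1)   # attention weights over time steps
        ctx = (w * out).sum(dim=1)                 # attention-weighted summary
        return self.head(ctx)

pred = CNNLSTMAttention()(torch.randn(4, 24, 8))   # 24 past steps -> 1 forecast
```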

12.
Phys Eng Sci Med ; 46(2): 703-717, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36943626

ABSTRACT

Image-Guided Radiation Therapy is a radiotherapy technique that employs frequent imaging throughout a treatment session. Fan Beam Computed Tomography (FBCT)-based planning followed by Cone Beam Computed Tomography (CBCT)-based radiation delivery has drastically improved treatment accuracy. Further gains in radiation exposure and cost could be achieved if FBCT were replaced with CBCT. This paper proposes a Conditional Generative Adversarial Network (CGAN) for CBCT-to-FBCT synthesis. Specifically, a new architecture called Nested Residual UNet (NR-UNet) is introduced as the generator of the CGAN. A composite loss function comprising adversarial loss, Mean Squared Error (MSE), and Gradient Difference Loss (GDL) is used with the generator. The CGAN exploits inter-slice dependency in the input by taking three consecutive CBCT slices to generate one FBCT slice. The model is trained on Head-and-Neck (H&N) FBCT-CBCT images of 53 cancer patients. The synthetic images exhibit a Peak Signal-to-Noise Ratio of 34.04±0.93 dB, a Structural Similarity Index Measure of 0.9751±0.001, and a Mean Absolute Error of 14.81±4.70 HU. On average, the proposed model yields a Contrast-to-Noise Ratio four times better than the input CBCT images. The model also minimizes MSE and alleviates blurriness. Compared to the CBCT-based plan, the synthetic image yields a treatment plan closer to the FBCT-based plan. The three-slice to single-slice translation captures three-dimensional contextual information in the input while avoiding the computational complexity of a fully three-dimensional synthesis model. Furthermore, the results demonstrate that the proposed model is superior to state-of-the-art methods.
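A sketch of the Gradient Difference Loss (GDL) term in the composite objective: it penalizes differences between the image gradients of the synthetic and reference slices, which counteracts blurring. The exponent is fixed to 1 here as an assumption.

```python
import torch

def gradient_difference_loss(pred, target):
    # pred, target: (batch, 1, H, W)
    dy_p = pred[..., 1:, :] - pred[..., :-1, :]
    dy_t = target[..., 1:, :] - target[..., :-1, :]
    dx_p = pred[..., :, 1:] - pred[..., :, :-1]
    dx_t = target[..., :, 1:] - target[..., :, :-1]
    # penalize mismatch of gradient magnitudes in both directions
    return ((dy_p.abs() - dy_t.abs()).abs().mean() +
            (dx_p.abs() - dx_t.abs()).abs().mean())

loss = gradient_difference_loss(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
```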


Assuntos
Processamento de Imagem Assistida por Computador , Neoplasias , Humanos , Processamento de Imagem Assistida por Computador/métodos , Tomografia Computadorizada de Feixe Cônico/métodos , Cabeça , Imagens de Fantasmas
13.
Comput Med Imaging Graph ; 106: 102202, 2023 06.
Article in English | MEDLINE | ID: mdl-36857953

ABSTRACT

Oral Squamous Cell Carcinoma (OSCC) is the most prevalent type of oral cancer worldwide. Histopathology examination is the gold standard for OSCC assessment: stained histopathology slides allow cell structures to be studied and analyzed under a microscope to determine the stage and grade of OSCC. One popular staining method, H&E staining, produces differential coloration, highlights key tissue features, and improves contrast, making cell analysis easier. However, stained H&E histopathology images exhibit inter- and intra-image variation due to differences in staining techniques, incubation times, and staining reagents. These variations negatively impact the accuracy and development of computer-aided diagnosis (CAD) and machine learning algorithms. A preprocessing procedure called stain normalization must therefore be employed to reduce the negative impact of stain variance. Numerous state-of-the-art stain normalization methods exist, but a robust multi-domain approach is still required because, in real-world situations, OSCC histopathology images include more than two color variations spanning several domains. This paper proposes a multi-domain stain translation method: an attention-gated generator based on a Conditional Generative Adversarial Network (cGAN) with a novel objective function that enforces color distribution and perceptual resemblance between the source and target domains. Instead of using WSI scanner images like previous techniques, the proposed method is evaluated on OSCC histopathology images obtained with several conventional microscopes coupled with cameras. In inference mode, the method receives the L* channel from the L*a*b* color space and generates the color-adapted G(a*b*) channels. The technique translates the source domain to the target domain using mappings learned during training from the whole color distribution of the target domain rather than a single reference image. The proposed technique outperforms four state-of-the-art methods in multi-domain OSCC histopathology translation, as supported by both quantitative and qualitative assessments.
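A hedged sketch of the inference path described above: split an RGB image into L*a*b*, feed the L* channel to the generator, and recombine its predicted a*b* channels. The generator here is a placeholder function so the sketch runs end to end; the real model is not reproduced.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def normalize_stain(rgb, generator):
    lab = rgb2lab(rgb)                       # (H, W, 3), L* in [0, 100]
    L = lab[..., :1]                         # luminance channel kept from the source
    ab = generator(L)                        # model predicts color-adapted a*b* channels
    return lab2rgb(np.concatenate([L, ab], axis=-1))

# placeholder "generator" (zeros -> grayscale output) purely for illustration
fake_gen = lambda L: np.zeros(L.shape[:2] + (2,))
out = normalize_stain(np.random.rand(64, 64, 3), fake_gen)
```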


Assuntos
Carcinoma de Células Escamosas , Neoplasias de Cabeça e Pescoço , Neoplasias Bucais , Humanos , Corantes/química , Carcinoma de Células Escamosas/diagnóstico por imagem , Carcinoma de Células Escamosas de Cabeça e Pescoço , Neoplasias Bucais/diagnóstico por imagem , Processamento de Imagem Assistida por Computador/métodos , Cor
14.
Sensors (Basel) ; 23(3)2023 Jan 17.
Article in English | MEDLINE | ID: mdl-36772126

ABSTRACT

Ground-based telescopes are often affected by vignetting, stray light, and detector nonuniformity when acquiring space images. This paper presents a nonuniformity correction method for space images using a conditional generative adversarial network (CGAN). First, we create a training dataset by introducing a physical vignetting model and designing a simulation polynomial to produce nonuniform backgrounds. Second, we develop a robust CGAN for learning the nonuniform background, improving the network structure of the generator. Experiments cover both a simulated dataset and real space images. The proposed method effectively removes the nonuniform background of space images, achieving a Mean Squared Error (MSE) of 4.56 on the simulated dataset and improving the target's signal-to-noise ratio (SNR) by 43.87% in real image correction.
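A hedged sketch of building synthetic training pairs of the kind described above: a radial vignetting falloff plus a low-order polynomial background applied to a clean frame. The exact physical model and coefficients used in the paper are not reproduced; these forms are illustrative assumptions.

```python
import numpy as np

h, w = 256, 256
yy, xx = np.mgrid[0:h, 0:w]
cx, cy = w / 2, h / 2
r = np.hypot(xx - cx, yy - cy) / np.hypot(cx, cy)   # normalized radius from center

vignette = 1 - 0.5 * r**2                            # radial falloff (assumed form)
poly_bg = 0.05 + 0.02 * (xx / w) + 0.03 * (yy / h) ** 2  # simulated nonuniform background

clean = np.random.rand(h, w)                         # stand-in space image
nonuniform = clean * vignette + poly_bg              # degraded input; clean is the target
```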

15.
Article in English | MEDLINE | ID: mdl-38495871

ABSTRACT

Hyperspectral imaging (HSI) has been demonstrated in various digital pathology applications. However, the intrinsic high dimensionality of hyperspectral images makes the information difficult for pathologists to visualize. The aim of this study is to develop a method for transforming hyperspectral images of hematoxylin & eosin (H&E)-stained slides into natural-color RGB histologic images for easy visualization. Hyperspectral images were obtained at 40× magnification with an automated microscopic imaging system and downsampled by various factors to generate data equivalent to different magnifications. High-resolution digital histologic RGB images were cropped and registered to the corresponding hyperspectral images as the ground truth. A conditional generative adversarial network (cGAN) was trained to output natural-color RGB images of the histological tissue samples. The generated synthetic RGBs have color and sharpness similar to real RGBs. Image classification was implemented using the real and synthetic RGBs, respectively, with a pretrained network. Classification of tumor and normal tissue using the HSI-synthesized RGBs yielded comparable but slightly higher accuracy and AUC than the real RGBs. The proposed method can reduce the acquisition time of two imaging modalities while giving pathologists access to the high information density of HSI and the quality visualization of RGBs. This study demonstrates that HSI may provide a better alternative to current RGB-based pathologic imaging and thus become a viable tool for histopathological diagnosis.

16.
Diagnostics (Basel) ; 12(12)2022 12 13.
Article in English | MEDLINE | ID: mdl-36553152

ABSTRACT

Skin cancer is one of the most severe forms of cancer and can spread to other parts of the body if not detected early, so diagnosing and treating patients at an early stage is crucial. Manual skin cancer diagnosis is time-consuming and expensive, and incorrect diagnoses occur due to the high similarity between the various skin cancers. Improved categorization of multiclass skin cancers requires automated diagnostic systems. Herein, we propose a fully automatic method for classifying several skin cancers by fine-tuning the deep learning models VGG16, ResNet50, and ResNet101. Prior to model creation, the training dataset undergoes data augmentation using traditional image transformation techniques and Generative Adversarial Networks (GANs) to prevent the class imbalance issues that may lead to model overfitting. In this study, we investigate the feasibility of creating realistic-looking dermoscopic images using Conditional Generative Adversarial Network (CGAN) techniques. Traditional augmentation methods are then used to augment the existing training set to improve the performance of pre-trained deep models on the skin cancer classification task, and this improved performance is compared to models developed on the unbalanced dataset. In addition, we formed an ensemble of fine-tuned transfer learning models, trained on both balanced and unbalanced datasets, and used them to make predictions. With appropriate data augmentation, the proposed models attained accuracies of 92% for VGG16, 92% for ResNet50, and 92.25% for ResNet101; the ensemble of these models increased the accuracy to 93.5%. A comprehensive discussion of model performance concludes that this method likely leads to enhanced performance in skin cancer categorization compared with past efforts.
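A hedged sketch of the ensemble step: average the softmax outputs of the three backbones named above. The torchvision networks here are freshly constructed stand-ins for the fine-tuned models, and the class count of 7 is an assumption.

```python
import torch
from torchvision import models

nets = [models.vgg16(num_classes=7),
        models.resnet50(num_classes=7),
        models.resnet101(num_classes=7)]       # stand-ins for the fine-tuned models

def ensemble_predict(x):
    with torch.no_grad():
        probs = [torch.softmax(net(x), dim=1) for net in nets]
    # average the class-probability vectors, then take the most likely class
    return torch.stack(probs).mean(dim=0).argmax(dim=1)

pred = ensemble_predict(torch.randn(2, 3, 224, 224))
```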

17.
Sensors (Basel) ; 22(24)2022 Dec 09.
Article in English | MEDLINE | ID: mdl-36560011

ABSTRACT

With the ongoing fifth-generation cellular network (5G) deployment, electromagnetic field exposure has become a critical concern. However, measurements are scarce, and accurate electromagnetic field reconstruction in a geographic region remains challenging. This work proposes a conditional generative adversarial network to address this issue. The main objective is to reconstruct the electromagnetic field exposure map accurately according to the environment's topology from a few sensors located in an outdoor urban environment. The model is trained to learn and estimate the propagation characteristics of the electromagnetic field according to the topology of a given environment. In addition, the conditional generative adversarial network-based electromagnetic field mapping is compared with simple kriging. Results show that the proposed method produces accurate estimates and is a promising solution for exposure map reconstruction.


Subjects
Electromagnetic Fields
18.
Front Neurosci ; 16: 1015752, 2022.
Article in English | MEDLINE | ID: mdl-36389231

ABSTRACT

Reconstruction of perceived faces from brain signals is a hot topic in brain decoding and an important application in the field of brain-computer interfaces. Existing methods do not fully consider the multiple facial attributes represented in face images, and the attributes' different activity patterns across multiple brain regions are often ignored, which leads to poor reconstruction performance. In the current study, we propose an algorithmic framework that efficiently combines multiple face-selective brain regions for precise multi-attribute perceived-face reconstruction. Our framework consists of three modules: a multi-task deep learning network (MTDLN), developed to simultaneously extract the multi-dimensional face features for facial expression, identity, and gender from a single face image; a set of linear regressions (LR), built to map the relationship between the multi-dimensional face features and brain signals from multiple brain regions; and a multi-conditional generative adversarial network (mcGAN), used to generate the perceived face images constrained by the predicted multi-dimensional face features. We conducted extensive fMRI experiments to evaluate the reconstruction performance of our framework both subjectively and objectively. The results show that, compared with traditional methods, our framework better characterizes the multi-attribute face features in a face image, better predicts face features from brain signals, and achieves better reconstruction of both seen and unseen face images in both visual quality and quantitative assessment. Moreover, beyond state-of-the-art intra-subject reconstruction performance, our framework can also realize inter-subject face reconstruction to a certain extent.

19.
Phys Med Biol ; 67(22)2022 11 07.
Article in English | MEDLINE | ID: mdl-36220014

ABSTRACT

Although positron emission tomography-computed tomography (PET-CT) images are widely used, accurately segmenting lung tumors remains challenging. Respiration, movement, and imaging modality lead to large discrepancies in the appearance of lung tumors between PET images and CT images. To overcome these difficulties, a novel network is designed to simultaneously obtain the corresponding lung tumors from PET images and CT images. The proposed network can fuse complementary information while preserving the modality-specific features of PET and CT images. Because PET and CT images are complementary, the two modalities should be fused for automatic lung tumor segmentation. Cross-modality decoding blocks are therefore designed to extract modality-specific features of PET and CT images under the constraints of the other modality. An edge consistency loss is also designed to address the problem of blurred boundaries in PET and CT images. The proposed method is tested on 126 PET-CT images with non-small cell lung cancer; Dice similarity coefficient scores for lung tumor segmentation reach 75.66 ± 19.42 on CT images and 79.85 ± 16.76 on PET images. Extensive comparisons with state-of-the-art lung tumor segmentation methods demonstrate the superiority of the proposed network.
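A small sketch of the reported evaluation metric: the Dice similarity coefficient between a binary predicted mask and the ground-truth tumor mask, here on the same 0-100 scale as the scores above. The masks are synthetic examples.

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.zeros((64, 64)); a[16:48, 16:48] = 1   # predicted tumor mask (example)
b = np.zeros((64, 64)); b[20:52, 20:52] = 1   # ground-truth tumor mask (example)
print(f"Dice: {100 * dice(a, b):.2f}")
```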


Subjects
Non-Small-Cell Lung Carcinoma, Lung Neoplasms, Humans, Positron Emission Tomography Computed Tomography/methods, Lung Neoplasms/diagnostic imaging, Computer-Assisted Image Processing/methods
20.
Front Public Health ; 10: 1012929, 2022.
Article in English | MEDLINE | ID: mdl-36187623

ABSTRACT

Purpose: This study aimed to develop a deep learning model that generates the postoperative corneal axial curvature map for femtosecond laser arcuate keratotomy (FLAK) from corneal tomography, using a pix2pix conditional generative adversarial network (pix2pix cGAN), for surgical planning. Methods: A total of 451 eyes of 318 nonconsecutive patients underwent FLAK for corneal astigmatism correction during cataract surgery. Paired or single anterior penetrating FLAKs were performed at an 8.0-mm optical zone with a depth of 90% using a femtosecond laser (LenSx laser, Alcon Laboratories, Inc.). Corneal tomography images were acquired with an Oculus Pentacam HR (Optikgeräte GmbH, Wetzlar, Germany) before and 3 months after surgery. The raw data for analysis consisted of the anterior corneal curvature over a range of ±3.5 mm around the corneal apex in 0.1-mm steps, from which the pseudo-color corneal curvature maps were synthesized. The deep learning model used was a pix2pix conditional generative adversarial network. The prediction accuracy of synthetic postoperative corneal astigmatism in zones of different diameters centered on the corneal apex was assessed using vector analysis. The synthetic postoperative corneal axial curvature maps were compared with the real postoperative maps using the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR). Results: A total of 386 pairs of preoperative and postoperative corneal tomography data were included in the training set, and 65 preoperative data were retrospectively included in the test set. The correlation coefficient between synthetic and real postoperative astigmatism (difference vector) in the 3-mm zone was 0.89, and that for surgically induced astigmatism (SIA) was 0.93. The mean absolute errors of SIA between real and synthetic postoperative corneal axial curvature maps in the 1-, 3-, and 5-mm zones were 0.20 ± 0.25, 0.12 ± 0.17, and 0.09 ± 0.13 diopters, respectively. The average SSIM and PSNR in the 3-mm zone were 0.86 ± 0.04 and 18.24 ± 5.78, respectively. Conclusion: Our results show that pix2pix cGAN can synthesize plausible postoperative corneal tomography for FLAK, demonstrating the possibility of using a GAN to predict corneal tomography and the potential of artificial intelligence for constructing surgical planning models.
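A small sketch of the two image-similarity metrics reported above, computed with scikit-image; random arrays stand in for the synthetic and real curvature maps.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

real = np.random.rand(128, 128)                       # stand-in real postoperative map
synthetic = real + 0.05 * np.random.randn(128, 128)   # stand-in synthetic map

ssim = structural_similarity(real, synthetic, data_range=1.0)
psnr = peak_signal_noise_ratio(real, synthetic, data_range=1.0)
print(f"SSIM={ssim:.3f}, PSNR={psnr:.2f} dB")
```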


Subjects
Astigmatism, Artificial Intelligence, Astigmatism/surgery, Corneal Topography, Humans, Lasers, Retrospective Studies, Tomography, Visual Acuity