Results 1 - 20 of 108
1.
Brief Bioinform; 24(6), 2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37864295

ABSTRACT

The widespread adoption of high-throughput omics technologies has exponentially increased the amount of protein sequence data involved in many salient disease pathways and their respective therapeutics and diagnostics. Despite the availability of large-scale sequence data, the lack of experimental fitness annotations underpins the need for self-supervised and unsupervised machine learning (ML) methods. These techniques leverage the meaningful features encoded in abundant unlabeled sequences to accomplish complex protein engineering tasks. Proficiency in the rapidly evolving fields of protein engineering and generative AI is required to realize the full potential of ML models as a tool for protein fitness landscape navigation. Here, we support this work by (i) providing an overview of the architecture and mathematical details of the most successful ML models applicable to sequence data (e.g. variational autoencoders, autoregressive models, generative adversarial neural networks, and diffusion models), (ii) guiding how to effectively implement these models on protein sequence data to predict fitness or generate high-fitness sequences, and (iii) highlighting several successful studies that implement these techniques in protein engineering (from paratope region and subcellular localization prediction to the generation of high-fitness sequences and protein design rules). By providing a comprehensive survey of model details, novel architecture developments, comparisons of model applications, and current challenges, this study aims to offer structured guidance and a robust framework for a prospective outlook on the ML-driven protein engineering field.
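As a minimal illustration of the autoregressive generation idea surveyed above, the sketch below samples a toy protein-like sequence one residue at a time, conditioning on the previous residue. The reduced alphabet and transition probabilities are invented for illustration only; they are not taken from the review or from any trained model.

```python
import random

# Toy autoregressive sampler over a reduced amino-acid alphabet.
# The transition table is a hypothetical placeholder, not learned from data.
ALPHABET = "ACDE"
TRANSITIONS = {  # p(next residue | previous residue)
    None: {"A": 0.4, "C": 0.3, "D": 0.2, "E": 0.1},
    "A": {"A": 0.1, "C": 0.4, "D": 0.3, "E": 0.2},
    "C": {"A": 0.3, "C": 0.1, "D": 0.4, "E": 0.2},
    "D": {"A": 0.2, "C": 0.3, "D": 0.1, "E": 0.4},
    "E": {"A": 0.25, "C": 0.25, "D": 0.25, "E": 0.25},
}

def sample_sequence(length, rng=random):
    """Sample a sequence one residue at a time, conditioning on the previous one."""
    seq = []
    prev = None
    for _ in range(length):
        probs = TRANSITIONS[prev]
        r = rng.random()
        cum = 0.0
        for aa, p in probs.items():
            cum += p
            if r <= cum:
                seq.append(aa)
                break
        else:
            seq.append(ALPHABET[-1])  # guard against float rounding
        prev = seq[-1]
    return "".join(seq)
```

Real models replace the fixed table with a learned conditional distribution (e.g. a transformer or RNN), but the sampling loop has the same shape.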


Subjects
Neural Networks, Computer; Unsupervised Machine Learning; Amino Acid Sequence; Exercise; Proteins/genetics
2.
Sensors (Basel); 24(12), 2024 Jun 16.
Article in English | MEDLINE | ID: mdl-38931676

ABSTRACT

In the realm of offline handwritten text recognition, numerous normalization algorithms have been developed over the years to serve as preprocessing steps prior to applying automatic recognition models to scanned handwritten text images. These algorithms have demonstrated effectiveness in enhancing the overall performance of recognition architectures. However, many of these methods rely heavily on heuristic strategies that are not seamlessly integrated with the recognition architecture itself. This paper introduces the use of a trainable Pix2Pix model, a specific type of conditional generative adversarial network, to normalize handwritten text images. This algorithm can also be seamlessly integrated as the initial stage of any deep learning architecture designed for handwriting recognition tasks, which allows the normalization and recognition components to be trained as a unified whole while still maintaining some interpretability of each module. Our proposed normalization approach learns from a blend of heuristic transformations applied to text images, aiming to mitigate the impact of intra-personal handwriting variability among different writers. As a result, it achieves slope and slant normalization, alongside other conventional preprocessing objectives such as normalizing the size of text ascenders and descenders. We demonstrate that the proposed architecture replicates, and in certain cases surpasses, the results of a widely used heuristic algorithm across two metrics and when integrated as the first step of a deep recognition architecture.
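The slant normalization that the heuristic baselines perform can be sketched as a simple horizontal shear; the Pix2Pix model in the paper learns such transformations instead of applying them by rule. This toy version operates on a binary image stored as a list of rows and is purely illustrative.

```python
def shear_image(img, slant):
    """Horizontally shear a binary image (list of rows) to correct slant.
    Row r is shifted by round(slant * (height - 1 - r)) pixels, so the
    bottom row stays fixed and upper rows move progressively more."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        shift = round(slant * (h - 1 - r))
        for c in range(w):
            nc = c + shift
            if 0 <= nc < w:  # pixels sheared out of frame are dropped
                out[r][nc] = img[r][c]
    return out
```

For example, a stroke slanted at 45° becomes vertical under `slant = -1`.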

3.
J Xray Sci Technol; 32(4): 857-911, 2024.
Article in English | MEDLINE | ID: mdl-38701131

ABSTRACT

BACKGROUND: The emergence of deep learning (DL) techniques has revolutionized tumor detection and classification in medical imaging, with multimodal medical imaging (MMI) gaining recognition for its precision in diagnosis, treatment, and progression tracking. OBJECTIVE: This review comprehensively examines DL methods in transforming tumor detection and classification across MMI modalities, aiming to provide insights into advancements, limitations, and key challenges for further progress. METHODS: Systematic literature analysis identifies DL studies for tumor detection and classification, outlining methodologies including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their variants. Integration of multimodality imaging enhances accuracy and robustness. RESULTS: Recent advancements in DL-based MMI evaluation methods are surveyed, focusing on tumor detection and classification tasks. Various DL approaches, including CNNs, YOLO, Siamese Networks, Fusion-Based Models, Attention-Based Models, and Generative Adversarial Networks, are discussed with emphasis on PET-MRI, PET-CT, and SPECT-CT. FUTURE DIRECTIONS: The review outlines emerging trends and future directions in DL-based tumor analysis, aiming to guide researchers and clinicians toward more effective diagnosis and prognosis. Continued innovation and collaboration are stressed in this rapidly evolving domain. CONCLUSION: Conclusions drawn from literature analysis underscore the efficacy of DL approaches in tumor detection and classification, highlighting their potential to address challenges in MMI analysis and their implications for clinical practice.


Subjects
Deep Learning; Multimodal Imaging; Neoplasms; Humans; Multimodal Imaging/methods; Neoplasms/diagnostic imaging; Neoplasms/classification; Neural Networks, Computer
4.
J Xray Sci Technol; 32(4): 1011-1039, 2024.
Article in English | MEDLINE | ID: mdl-38759091

ABSTRACT

Retinal disorders pose a serious threat to world healthcare because they frequently result in visual loss or impairment. For retinal disorders to be diagnosed precisely, treated individually, and detected early, deep learning is a necessary subset of artificial intelligence. This paper provides a complete approach to improving the accuracy and reliability of retinal disease identification using retinal optical coherence tomography (OCT) images. The hybrid model GIGT, which combines Generative Adversarial Networks (GANs), Inception, and Game Theory, is a novel method for diagnosing retinal diseases using OCT images. This technique, implemented in Python, includes image preprocessing, feature extraction, GAN-based classification, and a game-theoretic examination. Resizing, grayscale conversion, noise reduction using Gaussian filters, contrast enhancement using Contrast Limited Adaptive Histogram Equalization (CLAHE), and edge recognition via the Canny technique are all part of the image preparation step. These procedures set up the OCT images for efficient analysis. The Inception model is used for feature extraction, which enables the extraction of discriminative characteristics from the preprocessed images. GANs are used for classification, which improves accuracy and resilience by adding a strategic and dynamic aspect to the diagnostic process. Additionally, a game-theoretic analysis is utilized to evaluate the security and dependability of the model in the face of hostile attacks. Strategic analysis and deep learning work together to provide a potent diagnostic tool. This model's remarkable 98.2% accuracy rate shows how this method has the potential to improve the detection of retinal diseases, improve patient outcomes, and address the worldwide issue of visual impairment.
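The Gaussian filtering step in a preprocessing pipeline like the one above can be sketched with a normalized 1D kernel; in practice libraries such as OpenCV apply it in 2D with optimized routines, so this minimal stdlib version is illustrative only.

```python
import math

def gaussian_kernel1d(sigma, radius):
    """Discrete, normalized 1D Gaussian kernel of length 2*radius + 1."""
    vals = [math.exp(-(x * x) / (2 * sigma * sigma))
            for x in range(-radius, radius + 1)]
    s = sum(vals)
    return [v / s for v in vals]

def convolve1d(signal, kernel):
    """Convolve a 1D signal with a kernel, clamping indices at the borders."""
    radius = len(kernel) // 2
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)  # border clamp
            acc += kv * signal[idx]
        out.append(acc)
    return out
```

A 2D Gaussian blur can be built from this by filtering rows, then columns, since the Gaussian kernel is separable.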


Subjects
Game Theory; Neural Networks, Computer; Retinal Diseases; Tomography, Optical Coherence; Humans; Tomography, Optical Coherence/methods; Retinal Diseases/diagnostic imaging; Retina/diagnostic imaging; Deep Learning; Image Processing, Computer-Assisted/methods; Reproducibility of Results; Algorithms; Image Interpretation, Computer-Assisted/methods
5.
Sensors (Basel); 23(20), 2023 Oct 18.
Article in English | MEDLINE | ID: mdl-37896655

ABSTRACT

Machine learning (ML) and deep learning (DL) have achieved great success in different tasks. These include computer vision, image segmentation, natural language processing, classification, time-series evaluation, and predicting values based on a series of variables. As artificial intelligence progresses, new techniques are being applied to areas like optical spectroscopy and its uses in specific fields, such as the agrifood industry. The performance of ML and DL techniques generally improves with the amount of data available. However, it is not always possible to obtain all the necessary data for creating a robust dataset. In the particular case of agrifood applications, dataset collection is generally constrained to specific periods. Weather conditions can also reduce the possibility of covering the entire range of classifications, with the consequent generation of imbalanced datasets. To address this issue, data augmentation (DA) techniques are employed to expand the dataset by adding slightly modified copies of existing data. This leads to a dataset that includes values from laboratory tests as well as a collection of synthetic data based on the real data. This review presents the application of DA techniques to optical spectroscopy datasets obtained from real agrifood industry applications. The reviewed methods describe the use of simple DA techniques, such as duplicating samples with slight changes, as well as the utilization of more complex algorithms based on deep learning, including generative adversarial networks (GANs) and semi-supervised generative adversarial networks (SGANs).
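The simplest DA technique mentioned above, duplicating samples with slight changes, can be sketched in a few lines: each synthetic spectrum is a scaled, noise-perturbed copy of a real one. Function and parameter names here are hypothetical, not taken from any reviewed work.

```python
import random

def augment_spectrum(spectrum, noise_sd=0.01, scale_range=(0.98, 1.02), rng=random):
    """Return a slightly perturbed copy: random global intensity scale
    plus independent Gaussian noise per wavelength bin."""
    scale = rng.uniform(*scale_range)
    return [scale * v + rng.gauss(0.0, noise_sd) for v in spectrum]

def augment_dataset(spectra, copies=5, **kw):
    """Keep the real spectra and append `copies` perturbed variants of each."""
    out = list(spectra)
    for s in spectra:
        for _ in range(copies):
            out.append(augment_spectrum(s, **kw))
    return out
```

GAN-based augmentation replaces these hand-chosen perturbations with samples from a learned generator, at the cost of a far more involved training procedure.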

6.
Sensors (Basel); 23(13), 2023 Jun 21.
Article in English | MEDLINE | ID: mdl-37447624

ABSTRACT

This paper presents an efficient underwater image enhancement method, named ECO-GAN, to address the challenges of color distortion, low contrast, and motion blur in underwater robot photography. The proposed method is built upon a preprocessing framework using a generative adversarial network. ECO-GAN incorporates a convolutional neural network that specifically targets three underwater issues: motion blur, low brightness, and color deviation. To optimize computation and inference speed, an encoder is employed to extract features, whereas different enhancement tasks are handled by dedicated decoders. Moreover, ECO-GAN employs cross-stage fusion modules between the decoders to strengthen the connection and enhance the quality of output images. The model is trained using supervised learning with paired datasets, enabling blind image enhancement without additional physical knowledge or prior information. Experimental results demonstrate that ECO-GAN effectively achieves denoising, deblurring, and color deviation removal simultaneously. Compared with methods relying on individual modules or simple combinations of multiple modules, our proposed method achieves superior underwater image enhancement and offers the flexibility for expansion into multiple underwater image enhancement functions.


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Image Enhancement; Tomography, X-Ray Computed; Motion (Physics)
7.
Sensors (Basel); 23(2), 2023 Jan 12.
Article in English | MEDLINE | ID: mdl-36679705

ABSTRACT

Digitization of most of the services that people use in their everyday life has, among other things, led to increased needs for cybersecurity. As digital tools multiply day by day and new software and hardware launch out of the box, detection of previously unknown vulnerabilities, commonly known as zero-days, becomes one of the most challenging situations for cybersecurity experts. Zero-day vulnerabilities, which can be found in almost every newly launched software and/or hardware, can be exploited instantly by malicious actors with different motives, posing threats to end-users. In this context, this study proposes and describes a holistic methodology, starting from the generation of zero-day-type, yet realistic, data in tabular format and concluding with the evaluation of a Neural Network zero-day attack detector trained with and without synthetic data. This methodology involves the design and employment of Generative Adversarial Networks (GANs) for synthetically generating a new and larger dataset of zero-day attack data. The dataset newly generated by the Zero-Day GAN (ZDGAN) is then used to train and evaluate a Neural Network classifier for zero-day attacks. The results show that the generation of zero-day attack data in tabular format reaches an equilibrium after about 5000 iterations and produces data that are almost identical to the original data samples. Finally, the Neural Network model trained with the dataset containing the ZDGAN-generated samples outperformed the same model trained with only the original dataset, achieving high validation accuracy and minimal validation loss.
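The adversarial objective behind this kind of GAN training can be written down in a few lines. The toy linear generator and discriminator below use arbitrary placeholder weights (nothing here is trained, and this is not the ZDGAN architecture); the point is only the shape of the two losses that are minimized in alternation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical one-parameter "networks" standing in for real models.
def generator(z, w=1.5, b=0.2):
    return w * z + b

def discriminator(x, w=0.8, b=-0.1):
    return sigmoid(w * x + b)  # probability the input is real

def gan_losses(real_batch, noise_batch):
    """Standard non-saturating GAN losses for one batch."""
    d_loss = -sum(math.log(discriminator(x)) +
                  math.log(1.0 - discriminator(generator(z)))
                  for x, z in zip(real_batch, noise_batch)) / len(real_batch)
    g_loss = -sum(math.log(discriminator(generator(z)))
                  for z in noise_batch) / len(noise_batch)
    return d_loss, g_loss
```

Training alternates gradient steps on `d_loss` (discriminator) and `g_loss` (generator) until, as the abstract describes, the two reach an equilibrium.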


Subjects
Deep Learning; Humans; Computer Security; Interior Design and Furnishings; Motivation; Neural Networks, Computer
8.
Sensors (Basel); 23(9), 2023 Apr 30.
Article in English | MEDLINE | ID: mdl-37177610

ABSTRACT

A challenge in data-driven indoor localisation is ensuring sufficient data to train the prediction model to good accuracy. However, for WiFi-based data collection, considerable human effort is still required to capture large amounts of data, as the received signal strength (RSS) representation can easily be affected by obstacles and other factors. In this paper, we propose an extendGAN+ pipeline that leverages up-sampling with the Dirichlet distribution to improve location prediction accuracy with small sample sizes, applies a transferred WGAN-GP for synthetic data generation, and ensures data quality with a filtering module. The results highlight the effectiveness of the proposed data augmentation method not only in localisation performance but also in the variety of RSS patterns it can produce. Benchmarked against baseline methods such as fingerprinting, random forest, and their base datasets with localisation models, extendGAN+ shows improvements of up to 23.47%, 25.35%, and 18.88%, respectively. Furthermore, compared with existing GAN+ methods, it reduces training time by a factor of four owing to transfer learning and improves performance by 10.13%.
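The Dirichlet up-sampling idea can be sketched as drawing convex-combination weights and mixing real fingerprints from the same location; the sketch below omits the WGAN-GP and filtering stages, and its function names are hypothetical, not the paper's API.

```python
import random

def dirichlet_weights(k, alpha=1.0, rng=random):
    """Sample k weights from a symmetric Dirichlet(alpha) via gamma draws."""
    g = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
    s = sum(g)
    return [x / s for x in g]

def upsample_rss(samples, n_new, alpha=1.0, rng=random):
    """Create synthetic RSS vectors as Dirichlet-weighted convex combinations
    of real fingerprints (list of equal-length lists) from one location."""
    out = []
    dim = len(samples[0])
    for _ in range(n_new):
        w = dirichlet_weights(len(samples), alpha, rng)
        out.append([sum(wi * s[d] for wi, s in zip(w, samples))
                    for d in range(dim)])
    return out
```

Because each synthetic vector is a convex combination, every value stays within the per-dimension range of the real fingerprints, which keeps the augmented data physically plausible.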

9.
Sensors (Basel); 23(19), 2023 Oct 08.
Article in English | MEDLINE | ID: mdl-37837143

ABSTRACT

Research on image-inpainting tasks has mainly focused on enhancing performance by augmenting various stages and modules. However, this trend does not consider the increase in the number of model parameters and operational memory, which increases the burden on computational resources. To solve this problem, we propose a Parametric Efficient Image InPainting Network (PEIPNet) for efficient and effective image-inpainting. Unlike other state-of-the-art methods, the proposed model has a one-stage inpainting framework in which depthwise and pointwise convolutions are adopted to reduce the number of parameters and computational cost. To generate semantically appealing results, we selected three unique components: spatially-adaptive denormalization (SPADE), dense dilated convolution module (DDCM), and efficient self-attention (ESA). SPADE was adopted to conditionally normalize activations according to the mask in order to distinguish between damaged and undamaged regions. The DDCM was employed at every scale to overcome the gradient-vanishing obstacle and gradually fill in the pixels by capturing global information along the feature maps. The ESA was utilized to obtain clues from unmasked areas by extracting long-range information. In terms of efficiency, our model has the lowest operational memory compared with other state-of-the-art methods. Both qualitative and quantitative experiments demonstrate the generalized inpainting of our method on three public datasets: Paris StreetView, CelebA, and Places2.
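The parameter savings from replacing a standard convolution with a depthwise plus pointwise pair, as PEIPNet does, are easy to quantify. A quick sketch (single layer, no bias terms):

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution (no bias)."""
    return c_in * k * k + c_in * c_out
```

For a 3x3 layer with 64 input and 128 output channels, the standard convolution needs 73,728 parameters while the separable pair needs 8,768 — roughly an 8.4x reduction, which is the kind of saving that motivates the one-stage design.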

10.
Sensors (Basel); 23(16), 2023 Aug 10.
Article in English | MEDLINE | ID: mdl-37631631

ABSTRACT

Deep-learning-based image inpainting methods have made remarkable advancements, particularly in object removal tasks. The removal of face masks has gained significant attention, especially in the wake of the COVID-19 pandemic, and while numerous methods have successfully addressed the removal of small objects, removing large and complex masks from faces remains demanding. This paper presents a novel two-stage network for unmasking faces considering the intricate facial features typically concealed by masks, such as noses, mouths, and chins. Additionally, the scarcity of paired datasets comprising masked and unmasked face images poses an additional challenge. In the first stage of our proposed model, we employ an autoencoder-based network for binary segmentation of the face mask. Subsequently, in the second stage, we introduce a generative adversarial network (GAN)-based network enhanced with attention and Masked-Unmasked Region Fusion (MURF) mechanisms to focus on the masked region. Our network generates realistic and accurate unmasked faces that resemble the original faces. We train our model on paired unmasked and masked face images sourced from CelebA, a large public dataset, and evaluate its performance on multi-scale masked faces. The experimental results illustrate that the proposed method surpasses the current state-of-the-art techniques in both qualitative and quantitative metrics. It achieves a Peak Signal-to-Noise Ratio (PSNR) improvement of 4.18 dB over the second-best method, with the PSNR reaching 30.96. Additionally, it exhibits a 1% increase in the Structural Similarity Index Measure (SSIM), achieving a value of 0.95.
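The PSNR figures reported above are computed from the mean squared error between the reference and the reconstruction; a minimal version over flattened 8-bit images:

```python
import math

def psnr(reference, reconstruction, max_val=255.0):
    """Peak Signal-to-Noise Ratio (dB) between two equal-length flat images."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, reconstruction)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher is better: a constant pixel error of 10 on an 8-bit scale gives about 28 dB, so the 30.96 dB reported corresponds to a smaller average reconstruction error than that.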


Subjects
COVID-19; Masks; Humans; Pandemics; Personal Protective Equipment; Benchmarking
11.
Eur Arch Otorhinolaryngol; 279(9): 4241-4246, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35460377

ABSTRACT

BACKGROUND AND OBJECTIVES: BPPV (benign paroxysmal positional vertigo) is a syndrome marked by brief bouts of vertigo triggered by rapid changes in head position. Current therapeutic approaches include vestibular rehabilitation exercises and physical maneuvers such as the Epley and Semont maneuvers. The Gans repositioning maneuver (GRM) is a newer hybrid maneuver, consisting of a safe and comfortable series of postures that can be conveniently applied to patients with any spinal pathology, or even to the elderly. METHODS: Randomized controlled/clinical trials of the Gans maneuver were identified. The proportion of patients who improved as a result of each intervention was assessed, as well as the conversion of a 'positive' Dix-Hallpike test to a 'negative' Dix-Hallpike test. RESULTS: Improvement was seen in almost all patients with the Gans maneuver and the Epley maneuver in three trials, with a pooled estimate for the random-effects model of 1.12 [0.87; 1.43: 100%]. There were no significant side effects from the treatment. DISCUSSION: This study shows that the Gans maneuver is a safe and effective treatment for patients suffering from posterior canal BPPV. TRIAL REGISTRATION: The review is registered in PROSPERO under no. CRD42021234100.
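The pooling of trial results can be illustrated with fixed-effect inverse-variance weighting, a simpler cousin of the random-effects model used in the review. The effect sizes and variances below are hypothetical inputs, not the trial data.

```python
import math

def pooled_estimate(effects, variances):
    """Fixed-effect inverse-variance pooling of per-trial effect estimates.
    Each trial is weighted by the reciprocal of its variance."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return est, se
```

A random-effects model additionally estimates between-trial heterogeneity and adds it to each trial's variance before weighting, which widens the confidence interval.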


Subjects
Benign Paroxysmal Positional Vertigo; Patient Positioning; Aged; Benign Paroxysmal Positional Vertigo/rehabilitation; Humans; Physical Examination; Posture; Treatment Outcome
12.
Sensors (Basel); 22(9), 2022 May 06.
Article in English | MEDLINE | ID: mdl-35591224

ABSTRACT

In this paper, we introduce an approach for future frames prediction based on a single input image. Our method is able to generate an entire video sequence based on the information contained in the input frame. We adopt an autoregressive approach in our generation process, i.e., the output from each time step is fed as the input to the next step. Unlike other video prediction methods that use "one shot" generation, our method is able to preserve much more details from the input image, while also capturing the critical pixel-level changes between the frames. We overcome the problem of generation quality degradation by introducing a "complementary mask" module in our architecture, and we show that this allows the model to only focus on the generation of the pixels that need to be changed, and to reuse those that should remain static from its previous frame. We empirically validate our methods against various video prediction models on the UT Dallas Dataset, and show that our approach is able to generate high quality realistic video sequences from one static input image. In addition, we also validate the robustness of our method by testing a pre-trained model on the unseen ADFES facial expression dataset. We also provide qualitative results of our model tested on a human action dataset: The Weizmann Action database.
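The "complementary mask" composition described above amounts to a per-pixel blend of the previous frame and the generator output. A sketch over flattened frames, with a hypothetical binary mask marking the pixels to regenerate:

```python
def compose_frame(prev_frame, generated, mask):
    """Blend two frames: keep static pixels from the previous frame,
    take changed pixels from the generator output.
    mask[i] = 1 marks a pixel the generator is responsible for."""
    return [m * g + (1 - m) * p
            for p, g, m in zip(prev_frame, generated, mask)]
```

Because unmasked pixels are copied verbatim from the previous frame, detail from the input image survives across many autoregressive steps instead of being re-synthesized (and degraded) at every step.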


Subjects
Algorithms; Databases, Factual; Humans
13.
Sensors (Basel); 22(11), 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-35684918

ABSTRACT

Deep learning models have been used in several domains; however, adjustments are still required before they can be applied in sensitive areas such as medical imaging. Since technology is needed in the medical domain to save time, a high level of accuracy is required to assure trustworthiness. Because of privacy concerns, machine learning applications in the medical field are often unable to use real medical data. For example, the lack of brain MRI images makes it difficult to classify brain tumors using image-based classification. The solution to this challenge was achieved through the application of Generative Adversarial Network (GAN)-based augmentation techniques. Deep Convolutional GAN (DCGAN) and Vanilla GAN are two examples of GAN architectures used for image generation. In this paper, a framework denoted BrainGAN, for generating and classifying brain MRI images using GAN architectures and deep learning models, is proposed. This study also proposes an automatic way to check that generated images are satisfactory. It uses three models: CNN, MobileNetV2, and ResNet152V2. The deep transfer models are trained with images generated by Vanilla GAN and DCGAN, and their performance is then evaluated on a test set composed of real brain MRI images. The results of the experiment show that the ResNet152V2 model outperformed the other two models. ResNet152V2 achieved 99.09% accuracy, 99.12% precision, 99.08% recall, 99.51% area under the curve (AUC), and 0.196 loss on the brain MRI images generated by the DCGAN architecture.


Subjects
Brain Neoplasms; Magnetic Resonance Imaging; Brain/diagnostic imaging; Humans; Machine Learning; Magnetic Resonance Imaging/methods; Neuroimaging
14.
Sensors (Basel); 21(23), 2021 Nov 26.
Article in English | MEDLINE | ID: mdl-34883905

ABSTRACT

Wheat yellow rust is a common agricultural disease that affects the crop every year across the world. The disease not only negatively impacts the quality of the yield but the quantity as well, which results in adverse impacts on the economy and food supply. It is highly desirable to develop methods for fast and accurate detection of yellow rust in the wheat crop; however, high-resolution images are not always available, which hinders the ability of trained models in detection tasks. The approach presented in this study harnesses the power of super-resolution generative adversarial networks (SRGAN) for upsampling images before using them to train deep learning models for the detection of wheat yellow rust. After preprocessing the data for noise removal, SRGANs are used to upsample the images and increase their resolution, which helps the convolutional neural network (CNN) learn high-quality features during training. This study empirically shows that SRGANs can be used effectively to improve the quality of images and produce significantly better results compared with models trained using low-resolution images. This is evident from the results obtained on upsampled images, i.e., 83% overall test accuracy, substantially better than the 75% overall test accuracy achieved for low-resolution images. The proposed approach can be used in other real-world scenarios where images are of low resolution due to the unavailability of high-resolution cameras in edge devices.
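SRGAN learns the upsampling mapping; as a baseline contrast, plain nearest-neighbour upsampling simply repeats pixels and adds no new detail. The sketch below shows that baseline, not the SRGAN itself, to make the role of a learned super-resolver concrete.

```python
def upsample_nearest(img, factor):
    """Nearest-neighbour upsampling of a 2D image (list of rows) by an
    integer factor: every pixel becomes a factor x factor block."""
    out = []
    for row in img:
        wide = [v for v in row for _ in range(factor)]
        out.extend([list(wide) for _ in range(factor)])
    return out
```

The gap between this kind of interpolation and a learned generator is exactly what the reported accuracy gain (75% to 83%) is attributed to: SRGAN hallucinates plausible high-frequency texture that helps the downstream CNN.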


Subjects
Basidiomycota; Image Processing, Computer-Assisted; Agriculture; Neural Networks, Computer; Triticum
15.
Sensors (Basel); 21(11), 2021 May 27.
Article in English | MEDLINE | ID: mdl-34071944

ABSTRACT

The application of machine learning and artificial intelligence techniques in the medical world is growing, with a range of purposes: from the identification and prediction of possible diseases to patient monitoring and clinical decision support systems. Furthermore, the widespread use of remote monitoring medical devices, under the umbrella of the "Internet of Medical Things" (IoMT), has simplified the retrieval of patient information as they allow continuous monitoring and direct access to data by healthcare providers. However, due to possible issues in real-world settings, such as loss of connectivity, irregular use, misuse, or poor adherence to a monitoring program, the data collected might not be sufficient to implement accurate algorithms. For this reason, data augmentation techniques can be used to create synthetic datasets sufficiently large to train machine learning models. In this work, we apply the concept of generative adversarial networks (GANs) to perform a data augmentation from patient data obtained through IoMT sensors for Chronic Obstructive Pulmonary Disease (COPD) monitoring. We also apply an explainable AI algorithm to demonstrate the accuracy of the synthetic data by comparing it to the real data recorded by the sensors. The results obtained demonstrate how synthetic datasets created through a well-structured GAN are comparable with a real dataset, as validated by a novel approach based on machine learning.


Subjects
Artificial Intelligence; Internet of Things; Algorithms; Humans; Machine Learning
16.
Neuroimage; 223: 117308, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32889117

ABSTRACT

Multiple sclerosis (MS) is a demyelinating and inflammatory disease of the central nervous system (CNS). The demyelination process can be repaired by the generation of a new sheath of myelin around the axon, a process termed remyelination. In MS patients, the demyelination-remyelination cycles are highly dynamic. Over the years, magnetic resonance imaging (MRI) has been increasingly used in the diagnosis of MS, and it is currently the most useful paraclinical tool for this diagnosis. However, conventional MRI pulse sequences are not specific to pathological mechanisms such as demyelination and remyelination. Recently, positron emission tomography (PET) with the radiotracer [11C]PIB has become a promising tool to measure in-vivo myelin content changes, which is essential to push forward our understanding of the mechanisms involved in the pathology of MS and to monitor individual patients in the context of clinical trials focused on repair therapies. However, PET imaging is invasive due to the injection of a radioactive tracer. Moreover, it is an expensive imaging test and is not offered in the majority of medical centers in the world. In this work, using multisequence MRI, we therefore propose a method to predict the parametric map of [11C]PIB PET, from which we derived the myelin content changes in a longitudinal analysis of patients with MS. The method is based on the proposed conditional flexible self-attention GAN (CF-SAGAN), which is specifically adjusted for high-dimensional medical images and able to capture the relationships between the spatially separated lesional regions during the image synthesis process. Jointly applying the sketch-refinement process and the proposed attention regularization that focuses on the MS lesions, our approach is shown to outperform the state-of-the-art methods qualitatively and quantitatively. Specifically, our method demonstrated superior performance for the prediction of myelin content at the voxel-wise level. More importantly, our method for the prediction of myelin content changes in patients with MS shows clinical correlations similar to the PET-derived gold standard, indicating its potential for the clinical management of patients with MS.


Subjects
Brain/diagnostic imaging; Magnetic Resonance Imaging; Multiple Sclerosis/diagnostic imaging; Myelin Sheath/metabolism; Myelin Sheath/pathology; Positron-Emission Tomography; Adult; Brain/metabolism; Brain/pathology; Female; Humans; Image Processing, Computer-Assisted/methods; Longitudinal Studies; Male; Multiple Sclerosis/metabolism; Multiple Sclerosis/pathology
17.
J Biomed Inform; 109: 103515, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32771540

ABSTRACT

Causal inference often relies on the counterfactual framework, which requires that treatment assignment is independent of the outcome, known as strong ignorability. Approaches to enforcing strong ignorability in causal analyses of observational data include weighting and matching methods. Effect estimates, such as the average treatment effect (ATE), are then estimated as expectations under the re-weighted or matched distribution, P. The choice of P is important and can impact the interpretation of the effect estimate and the variance of effect estimates. In this work, instead of specifying P, we learn a distribution that simultaneously maximizes coverage and minimizes variance of ATE estimates. In order to learn this distribution, this research proposes a generative adversarial network (GAN)-based model called the Counterfactual χ-GAN (cGAN), which also learns feature-balancing weights and supports unbiased causal estimation in the absence of unobserved confounding. Our model minimizes the Pearson χ2-divergence, which we show simultaneously maximizes coverage and minimizes the variance of importance sampling estimates. To our knowledge, this is the first such application of the Pearson χ2-divergence. We demonstrate the effectiveness of cGAN in achieving feature balance relative to established weighting methods in simulation and with real-world medical data.
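The Pearson χ2-divergence objective can be estimated from importance weights w_i = p(x_i)/q(x_i): after normalising the weights to mean one, it is their empirical variance. The sketch below shows only this estimator idea, not the cGAN training procedure or its feature-balancing network.

```python
def pearson_chi2(weights):
    """Empirical Pearson chi-square divergence computed from importance
    weights w_i = p(x_i)/q(x_i). Weights are first normalised to mean 1,
    after which the divergence estimate is their mean squared deviation
    from 1 (i.e. their variance)."""
    n = len(weights)
    mean_w = sum(weights) / n
    norm = [w / mean_w for w in weights]
    return sum((w - 1.0) ** 2 for w in norm) / n
```

Uniform weights give zero divergence, and the divergence grows as the weights become more extreme, which is why minimizing it simultaneously controls the variance of importance-sampling estimates.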


Subjects
Causality; Computer Simulation; Humans
18.
Adv Exp Med Biol; 1213: 23-44, 2020.
Article in English | MEDLINE | ID: mdl-32030661

ABSTRACT

Medical images have been widely used in clinics, providing visual representations of the tissues beneath the skin of the human body. By applying different imaging protocols, diverse modalities of medical images with unique characteristics of visualization can be produced. Considering the cost of scanning high-quality single-modality images or of acquiring homogeneous multiple modalities of images, medical image synthesis methods have been extensively explored for clinical applications. Among them, deep learning approaches, especially convolutional neural networks (CNNs) and generative adversarial networks (GANs), have rapidly become dominant for medical image synthesis in recent years. In this chapter, based on a general review of medical image synthesis methods, we focus on introducing typical CNN and GAN models for medical image synthesis. In particular, we elaborate on our recent work on low-dose to high-dose PET image synthesis and cross-modality MR image synthesis using these models.


Subjects
Deep Learning; Image Processing, Computer-Assisted/methods; Humans
19.
Neuroimage; 174: 550-562, 2018 Jul 01.
Article in English | MEDLINE | ID: mdl-29571715

ABSTRACT

Positron emission tomography (PET) is a widely used imaging modality, providing insight into both the biochemical and physiological processes of the human body. Usually, a full-dose radioactive tracer is required to obtain high-quality PET images for clinical needs. This inevitably raises concerns about potential health hazards. On the other hand, dose reduction may increase the noise in the reconstructed PET images, which impacts the image quality to a certain extent. In this paper, in order to reduce radiation exposure while maintaining the high quality of PET images, we propose a novel method based on 3D conditional generative adversarial networks (3D c-GANs) to estimate high-quality full-dose PET images from low-dose ones. Generative adversarial networks (GANs) include a generator network and a discriminator network which are trained simultaneously, with the goal of one beating the other. Similar to GANs, in the proposed 3D c-GANs we condition the model on an input low-dose PET image and generate a corresponding output full-dose PET image. Specifically, to render the same underlying information between the low-dose and full-dose PET images, a 3D U-net-like deep architecture, which can combine hierarchical features by using skip connections, is designed as the generator network to synthesize the full-dose image. In order to guarantee that the synthesized PET image is close to the real one, we take into account the estimation error loss in addition to the discriminator feedback to train the generator network. Furthermore, a concatenated 3D c-GANs-based progressive refinement scheme is also proposed to further improve the quality of the estimated images. Validation was done on a real human brain dataset including both normal subjects and subjects diagnosed with mild cognitive impairment (MCI). Experimental results show that our proposed 3D c-GANs method outperforms the benchmark methods and achieves much better performance than the state-of-the-art methods in both qualitative and quantitative measures.


Subjects
Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Adult; Deep Learning; Female; Humans; Male; Radiation Dosage; Reproducibility of Results; Signal-To-Noise Ratio; Young Adult
20.
Adv Exp Med Biol; 1093: 181-191, 2018.
Article in English | MEDLINE | ID: mdl-30306482

ABSTRACT

In this chapter we propose a new system that allows reliable acetabular cup placement in total hip arthroplasty (THA) when the surgery is performed via the lateral approach. Conceptually, it combines the accuracy of computer-generated patient-specific morphology information with an easy-to-use mechanical guide, which effectively uses natural gravity as the angular reference. The former is achieved by using a statistical shape model-based 2D-3D reconstruction technique that can generate a scaled, patient-specific 3D shape model of the pelvis from a single conventional anteroposterior (AP) pelvic X-ray radiograph. The reconstructed 3D shape model facilitates a reliable and accurate co-registration of the mechanical guide with the patient's anatomy in the operating theater. We validated the accuracy of our system by conducting experiments on placing seven cups in four pelvises with different morphologies. Taking the measurements from an image-free navigation system as the ground truth, our system showed an average accuracy of 2.1 ± 0.7° for inclination and an average accuracy of 1.2 ± 1.4° for anteversion.


Subjects
Acetabulum; Arthroplasty, Replacement, Hip; Surgery, Computer-Assisted; Humans; Imaging, Three-Dimensional; Models, Anatomic; Tomography, X-Ray Computed