1.
Magn Reson Med ; 2021 Oct 05.
Article in English | MEDLINE | ID: mdl-34611937

ABSTRACT

PURPOSE: To automate the segmentation of the peripheral arteries and veins in the lower extremities based on ferumoxytol-enhanced MR angiography (FE-MRA).
METHODS: Our automated pipeline has 2 sequential stages. In the first stage, we used a 3D U-Net with local attention gates, trained with a combination of the Focal Tversky loss and region mutual loss under a deep supervision mechanism, to segment the vasculature from the high-resolution FE-MRA datasets. In the second stage, we used time-resolved images to separate the arteries from the veins. Because the ultimate segmentation quality of the arteries and veins relies on the performance of the first stage, we thoroughly evaluated the different aspects of the segmentation network and compared its performance in blood vessel segmentation with currently accepted state-of-the-art networks, including Volumetric-Net, DeepVesselNet-FCN, and Uception.
RESULTS: We achieved a competitive F1 = 0.8087 and recall = 0.8410 for blood vessel segmentation, compared with F1 = (0.7604, 0.7573, 0.7651) and recall = (0.7791, 0.7570, 0.7774) obtained with Volumetric-Net, DeepVesselNet-FCN, and Uception, respectively. For the artery and vein separation stage, we achieved F1 = (0.8274/0.7863) in the calf region, the most challenging region in peripheral artery and vein segmentation.
CONCLUSION: Our pipeline performs fully automatic vessel segmentation based on FE-MRA, without the need for human interaction, in under 4 min. This improves upon manual segmentation by radiologists, which routinely takes several hours.
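
The Focal Tversky loss named in the first stage can be sketched as below. This is a generic formulation of that loss, a minimal sketch assuming common default hyperparameter values (alpha, beta, gamma), not the paper's settings:

```python
import torch

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Generic Focal Tversky loss for binary volumetric segmentation.

    pred:   (N, 1, D, H, W) predicted probabilities in [0, 1]
    target: (N, 1, D, H, W) binary ground-truth mask
    alpha/beta weight false negatives/false positives; gamma focuses
    training on hard examples. All three are illustrative defaults.
    """
    pred = pred.reshape(pred.size(0), -1)
    target = target.reshape(target.size(0), -1)
    tp = (pred * target).sum(dim=1)            # true positives
    fn = ((1 - pred) * target).sum(dim=1)      # false negatives
    fp = (pred * (1 - target)).sum(dim=1)      # false positives
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return ((1 - tversky) ** gamma).mean()
```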

2.
Light Sci Appl ; 10(1): 196, 2021 Sep 24.
Article in English | MEDLINE | ID: mdl-34561415

ABSTRACT

Spatially-engineered diffractive surfaces have emerged as a powerful framework to control light-matter interactions for statistical inference and the design of task-specific optical components. Here, we report the design of diffractive surfaces to all-optically perform arbitrary complex-valued linear transformations between an input (Ni) and output (No), where Ni and No represent the number of pixels at the input and output fields-of-view (FOVs), respectively. First, we consider a single diffractive surface and use a matrix pseudoinverse-based method to determine the complex-valued transmission coefficients of the diffractive features/neurons to all-optically perform a desired/target linear transformation. In addition to this data-free design approach, we also consider a deep learning-based design method to optimize the transmission coefficients of diffractive surfaces by using examples of input/output fields corresponding to the target transformation. We compared the all-optical transformation errors and diffraction efficiencies achieved using data-free designs as well as data-driven (deep learning-based) diffractive designs to all-optically perform (i) arbitrarily-chosen complex-valued transformations including unitary, nonunitary, and noninvertible transforms, (ii) 2D discrete Fourier transformation, (iii) arbitrary 2D permutation operations, and (iv) high-pass filtered coherent imaging. Our analyses reveal that if the total number (N) of spatially-engineered diffractive features/neurons is ≥Ni × No, both design methods succeed in all-optical implementation of the target transformation, achieving negligible error. However, compared to data-free designs, deep learning-based diffractive designs are found to achieve significantly larger diffraction efficiencies for a given N and their all-optical transformations are more accurate for N < Ni × No. These conclusions are generally applicable to various optical processors that employ spatially-engineered diffractive surfaces.
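
A minimal numerical sketch of the matrix pseudoinverse-based (data-free) design described above, under simplifying assumptions: the free-space propagation matrices P_in and P_out are random stand-ins rather than physical diffraction operators, and constraints on the transmission coefficients (e.g., passivity) are ignored:

```python
import numpy as np

# Solve for complex transmission coefficients t of a single diffractive
# surface so that P_out @ diag(t) @ P_in approximates a target transform A.
rng = np.random.default_rng(0)
Ni, No, N = 8, 8, 64                      # toy sizes; the paper's condition is N >= Ni * No
P_in = rng.normal(size=(N, Ni)) + 1j * rng.normal(size=(N, Ni))    # input FOV -> surface
P_out = rng.normal(size=(No, N)) + 1j * rng.normal(size=(No, N))   # surface -> output FOV
A = rng.normal(size=(No, Ni)) + 1j * rng.normal(size=(No, Ni))     # target transform

# Since (P_out @ diag(t) @ P_in)[i, j] = sum_k P_out[i, k] * t[k] * P_in[k, j],
# the problem is linear in t: M @ t = vec(A).
M = np.einsum('ik,kj->ijk', P_out, P_in).reshape(No * Ni, N)
t = np.linalg.pinv(M) @ A.reshape(No * Ni)

err = np.linalg.norm(P_out @ np.diag(t) @ P_in - A) / np.linalg.norm(A)
print(f"relative transformation error: {err:.2e}")  # ~0 when N >= Ni * No
```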

3.
Nat Commun ; 12(1): 4884, 2021 Aug 12.
Article in English | MEDLINE | ID: mdl-34385460

ABSTRACT

Pathology is practiced by visual inspection of histochemically stained tissue slides. While the hematoxylin and eosin (H&E) stain is most commonly used, special stains can provide additional contrast to different tissue components. Here, we demonstrate the utility of supervised learning-based computational stain transformation from H&E to special stains (Masson's Trichrome, periodic acid-Schiff and Jones silver stain) using kidney needle core biopsy tissue sections. Based on the evaluation by three renal pathologists, followed by adjudication by a fourth pathologist, we show that the generation of virtual special stains from existing H&E images improves the diagnosis of several non-neoplastic kidney diseases, sampled from 58 unique subjects (P = 0.0095). A second study found that the quality of the computationally generated special stains was statistically equivalent to those which were histochemically stained. This stain-to-stain transformation framework can improve preliminary diagnoses when additional special stains are needed, also providing significant savings in time and cost.


Subjects
Biopsy, Large-Core Needle/methods; Deep Learning; Diagnosis, Computer-Assisted/methods; Kidney Diseases/pathology; Kidney/pathology; Staining and Labeling/methods; Algorithms; Coloring Agents/chemistry; Coloring Agents/classification; Coloring Agents/standards; Diagnosis, Differential; Humans; Kidney Diseases/diagnosis; Pathology, Clinical/methods; Pathology, Clinical/standards; Reference Standards; Reproducibility of Results; Sensitivity and Specificity; Staining and Labeling/standards
4.
Sci Adv ; 7(13), 2021 Mar.
Article in English | MEDLINE | ID: mdl-33771863

ABSTRACT

We demonstrate optical networks composed of diffractive layers trained using deep learning to encode the spatial information of objects into the power spectrum of the diffracted light, which are used to classify objects with a single-pixel spectroscopic detector. Using a plasmonic nanoantenna-based detector, we experimentally validated this single-pixel machine vision framework at terahertz wavelengths to optically classify the images of handwritten digits by detecting the spectral power of the diffracted light at ten distinct wavelengths, each representing one class/digit. We also coupled this diffractive network-based spectral encoding with a shallow electronic neural network, which was trained to rapidly reconstruct the images of handwritten digits based solely on the spectral power detected at these ten distinct wavelengths, demonstrating task-specific image decompression. This single-pixel machine vision framework can also be extended to other spectral-domain measurement systems to enable new 3D imaging and sensing modalities integrated with diffractive network-based spectral encoding of information.
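
Given the encoding described above, electronic inference reduces to an argmax over the power detected in the ten wavelength bins; a schematic sketch with placeholder detector readings:

```python
import numpy as np

# The diffractive network routes each object class to one of ten
# wavelengths, so classification is an argmax over ten spectral band
# powers. `band_power` would come from the single-pixel spectroscopic
# detector; the values below are placeholders for illustration.
band_power = np.array([0.02, 0.01, 0.03, 0.71, 0.05, 0.04, 0.03, 0.05, 0.04, 0.02])
predicted_digit = int(np.argmax(band_power))  # class/digit index 0-9
print(predicted_digit)                        # -> 3
```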

5.
ACS Nano ; 15(4): 6305-6315, 2021 Apr 27.
Article in English | MEDLINE | ID: mdl-33543919

ABSTRACT

Conventional spectrometers are limited by trade-offs set by size, cost, signal-to-noise ratio (SNR), and spectral resolution. Here, we demonstrate a deep learning-based spectral reconstruction framework using a compact and low-cost on-chip sensing scheme that is not constrained by many of the design trade-offs inherent to grating-based spectroscopy. The system employs a plasmonic spectral encoder chip containing 252 different tiles of nanohole arrays fabricated using a scalable and low-cost imprint lithography method, where each tile has a specific geometry and thus a specific optical transmission spectrum. The illumination spectrum of interest directly impinges upon the plasmonic encoder, and a CMOS image sensor captures the transmitted light without any lenses, gratings, or other optical components in between, making the entire hardware highly compact, lightweight, and field-portable. A trained neural network then reconstructs the unknown spectrum using the transmitted intensity information from the spectral encoder in a feed-forward and noniterative manner. Benefiting from the parallelization of neural networks, the average inference time per spectrum is ∼28 µs, much faster than other computational spectroscopy approaches. When blindly tested on 14,648 unseen spectra of varying complexity, our deep learning-based system identified 96.86% of the spectral peaks with an average peak localization error, bandwidth error, and height error of 0.19 nm, 0.18 nm, and 7.60%, respectively. This system is also highly tolerant to fabrication defects that may arise during the imprint lithography process, which further makes it ideal for applications that demand cost-effective, field-portable, and sensitive high-resolution spectroscopy tools.
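
A minimal sketch of the feed-forward reconstruction step: a fully connected network maps the 252 tile intensities to a densely sampled spectrum in a single pass. The hidden-layer sizes and the 1,024-point output grid are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class SpectralReconstructor(nn.Module):
    """Maps 252 transmitted-intensity readings to a reconstructed spectrum."""
    def __init__(self, n_tiles=252, n_wavelengths=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_tiles, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, n_wavelengths),
        )

    def forward(self, intensities):      # (batch, 252) encoder readings
        return self.net(intensities)     # (batch, 1024) spectrum estimate

model = SpectralReconstructor()
spectrum = model(torch.rand(1, 252))     # one non-iterative forward pass
```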

6.
Article in English | MEDLINE | ID: mdl-33223801

ABSTRACT

Optical machine learning offers advantages in terms of power efficiency, scalability and computation speed. Recently, an optical machine learning method based on Diffractive Deep Neural Networks (D2NNs) has been introduced to execute a function as the input light diffracts through passive layers, designed by deep learning using a computer. Here we introduce improvements to D2NNs by changing the training loss function and reducing the impact of vanishing gradients in the error back-propagation step. Using five phase-only diffractive layers, we numerically achieved a classification accuracy of 97.18% and 89.13% for optical recognition of handwritten digits and fashion products, respectively; using both phase and amplitude modulation (complex-valued) at each layer, our inference performance improved to 97.81% and 89.32%, respectively. Furthermore, we report the integration of D2NNs with electronic neural networks to create hybrid-classifiers that significantly reduce the number of input pixels into an electronic network using an ultra-compact front-end D2NN with a layer-to-layer distance of a few wavelengths, also reducing the complexity of the successive electronic network. Using a 5-layer phase-only D2NN jointly-optimized with a single fully-connected electronic layer, we achieved a classification accuracy of 98.71% and 90.04% for the recognition of handwritten digits and fashion products, respectively. Moreover, the input to the electronic network was compressed by >7.8 times down to 10×10 pixels. Beyond creating low-power and high-frame rate machine learning platforms, D2NN-based hybrid neural networks will find applications in smart optical imager and sensor design.
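
The optical forward model behind a phase-only D2NN can be sketched with angular-spectrum propagation between phase layers. The grid size, feature pitch, wavelength, and layer spacing below are illustrative stand-ins (loosely THz-scale), and the random phase patterns stand in for the deep-learning-optimized ones:

```python
import numpy as np

def angular_spectrum_propagate(field, dx, wavelength, z):
    """Propagate a complex field by distance z (free-space scalar optics)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)       # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Forward pass through a 5-layer, phase-only diffractive network.
n, dx, wavelength, z = 200, 0.4e-3, 0.75e-3, 3e-3   # meters; z = 4 wavelengths
phases = [np.random.uniform(0, 2 * np.pi, (n, n)) for _ in range(5)]
field = np.ones((n, n), dtype=complex)               # plane-wave-illuminated input
for phi in phases:
    field = angular_spectrum_propagate(field, dx, wavelength, z)
    field = field * np.exp(1j * phi)                 # phase-only modulation
intensity = np.abs(angular_spectrum_propagate(field, dx, wavelength, z)) ** 2
```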

7.
Light Sci Appl ; 9: 118, 2020.
Article in English | MEDLINE | ID: mdl-32685139

ABSTRACT

Early identification of pathogenic bacteria in food, water, and bodily fluids is very important and yet challenging, owing to sample complexities and large sample volumes that need to be rapidly screened. Existing screening methods based on plate counting or molecular analysis present various tradeoffs with regard to the detection time, accuracy/sensitivity, cost, and sample preparation complexity. Here, we present a computational live bacteria detection system that periodically captures coherent microscopy images of bacterial growth inside a 60-mm-diameter agar plate and analyses these time-lapsed holograms using deep neural networks for the rapid detection of bacterial growth and the classification of the corresponding species. The performance of our system was demonstrated by the rapid detection of Escherichia coli and total coliform bacteria (i.e., Klebsiella aerogenes and Klebsiella pneumoniae subsp. pneumoniae) in water samples, shortening the detection time by >12 h compared to the Environmental Protection Agency (EPA)-approved methods. Using the preincubation of samples in growth media, our system achieved a limit of detection (LOD) of ~1 colony forming unit (CFU)/L in ≤9 h of total test time. This platform is highly cost-effective (~$0.6/test) and has high-throughput with a scanning speed of 24 cm2/min over the entire plate surface, making it highly suitable for integration with the existing methods currently used for bacteria detection on agar plates. Powered by deep learning, this automated and cost-effective live bacteria detection platform can be transformative for a wide range of applications in microbiology by significantly reducing the detection time and automating the identification of colonies without labelling or the need for an expert.
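
Conceptually, growth detection in such a time-lapse system amounts to flagging localized change between consecutive reconstructed frames; the sketch below illustrates that idea only and is not the paper's detection network:

```python
import numpy as np

def growth_candidates(frame_t0, frame_t1, threshold=0.1):
    """Flag regions that changed between two time-lapse reconstructions.
    A growing colony appears as a localized differential; a neural network
    would then classify candidate regions by species. The threshold is an
    illustrative placeholder."""
    diff = np.abs(frame_t1 - frame_t0)        # change between time points
    return diff > threshold * diff.max()      # boolean map of candidate growth
```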

8.
NPJ Digit Med ; 3: 76, 2020.
Article in English | MEDLINE | ID: mdl-32509973

ABSTRACT

Sickle cell disease (SCD) is a major public health priority throughout much of the world, affecting millions of people. In many regions, particularly those in resource-limited settings, SCD is not consistently diagnosed. In Africa, where the majority of SCD patients reside, more than 50% of the 0.2-0.3 million children born with SCD each year will die from it; many of these deaths are in fact preventable with correct diagnosis and treatment. Here, we present a deep learning framework which can perform automatic screening of sickle cells in blood smears using a smartphone microscope. This framework uses two distinct, complementary deep neural networks. The first neural network enhances and standardizes the blood smear images captured by the smartphone microscope, spatially and spectrally matching the image quality of a laboratory-grade benchtop microscope. The second network acts on the output of the first image enhancement neural network and is used to perform the semantic segmentation between healthy and sickle cells within a blood smear. These segmented images are then used to rapidly determine the SCD diagnosis per patient. We blindly tested this mobile sickle cell detection method using blood smears from 96 unique patients (including 32 SCD patients) that were imaged by our smartphone microscope, and achieved ~98% accuracy, with an area-under-the-curve of 0.998. With its high accuracy, this mobile and cost-effective method has the potential to be used as a screening tool for SCD and other blood cell disorders in resource-limited settings.
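
The two-stage inference described above can be sketched as follows; the `enhancer` and `segmenter` models and the decision threshold are placeholders, not the trained networks or clinical cut-off from the study:

```python
import torch

def screen_patient(smartphone_images, enhancer, segmenter, threshold=0.05):
    """Chain the two networks and decide per patient from the sickle-cell
    fraction. Label convention (0=background, 1=healthy, 2=sickle) and the
    5% threshold are illustrative assumptions."""
    sickle_px, cell_px = 0, 0
    with torch.no_grad():
        for img in smartphone_images:                 # (1, C, H, W) tensors
            enhanced = enhancer(img)                  # stage 1: image enhancement
            labels = segmenter(enhanced).argmax(1)    # stage 2: semantic segmentation
            sickle_px += (labels == 2).sum().item()
            cell_px += (labels > 0).sum().item()
    return (sickle_px / max(cell_px, 1)) > threshold  # True -> flag as SCD-positive
```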

9.
Light Sci Appl ; 9: 78, 2020.
Article in English | MEDLINE | ID: mdl-32411363

ABSTRACT

Histological staining is a vital step in diagnosing various diseases and has been used for more than a century to provide contrast in tissue sections, rendering the tissue constituents visible for microscopic analysis by medical experts. However, this process is time consuming, labour intensive, expensive and destructive to the specimen. Recently, the ability to virtually stain unlabelled tissue sections, entirely avoiding the histochemical staining step, has been demonstrated using tissue-stain-specific deep neural networks. Here, we present a new deep-learning-based framework that generates virtually stained images using label-free tissue images, in which different stains are merged following a micro-structure map defined by the user. This approach uses a single deep neural network that receives two different sources of information as its input: (1) autofluorescence images of the label-free tissue sample and (2) a "digital staining matrix", which represents the desired microscopic map of the different stains to be virtually generated in the same tissue section. This digital staining matrix is also used to virtually blend existing stains, digitally synthesizing new histological stains. We trained and blindly tested this virtual-staining network using unlabelled kidney tissue sections to generate micro-structured combinations of haematoxylin and eosin (H&E), Jones' silver stain, and Masson's trichrome stain. Using a single network, this approach multiplexes the virtual staining of label-free tissue images with multiple types of stains and paves the way for synthesizing new digital histological stains that can be created in the same tissue cross section, which is currently not feasible with standard histochemical staining methods.
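
The dual-input scheme can be sketched as a channel-wise concatenation of the autofluorescence image with the user-defined digital staining matrix; the tensor shapes and one-hot stain encoding below are illustrative assumptions:

```python
import torch

# Build the two-source network input: a label-free autofluorescence image
# plus a per-pixel map selecting which stain to render where.
autofluorescence = torch.rand(1, 1, 512, 512)       # label-free input image
stain_map = torch.randint(0, 3, (1, 512, 512))      # 0=H&E, 1=Jones, 2=Masson (illustrative)
stain_matrix = torch.nn.functional.one_hot(stain_map, 3).permute(0, 3, 1, 2).float()
network_input = torch.cat([autofluorescence, stain_matrix], dim=1)  # (1, 4, 512, 512)
# output = virtual_staining_net(network_input)      # hypothetical trained model
```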

10.
J Biophotonics ; 13(1): e201960036, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31483948

ABSTRACT

Pathological crystal identification is routinely practiced in rheumatology for diagnosing arthritic diseases such as gout, and relies on polarized light microscopy as the gold-standard method used by medical professionals. Here, we present a single-shot computational polarized light microscopy method that reconstructs the transmittance, retardance, and slow-axis orientation of a birefringent sample from a single image captured with a pixelated-polarizer camera. This method is fast, simple to operate, and compatible with existing standard microscopes without extensive or costly modifications. We demonstrated the success of our method by imaging three different types of crystals found in synovial fluid and reconstructed the birefringence information of these samples from a single image, unaffected by the orientation of individual crystals within the sample field-of-view. We believe this technique will provide improved sensitivity, specificity, and speed, all at low cost, for the clinical diagnosis of crystals found in synovial fluid and other bodily fluids.


Subjects
Calcium Pyrophosphate; Gout; Birefringence; Gout/diagnostic imaging; Humans; Microscopy, Polarization; Synovial Fluid
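
For context, the sketch below shows a generic single-shot polarization analysis from a pixelated-polarizer camera (interleaved 0°/45°/90°/135° analyzers), computing the linear Stokes parameters; the paper's full transmittance/retardance/slow-axis reconstruction additionally depends on the microscope's specific polarization optics, and the 2×2 analyzer layout assumed here varies by sensor:

```python
import numpy as np

def demosaic_polarization(raw):
    """Split a raw pixelated-polarizer frame into its four analyzer channels
    (assumed 2x2 layout: 0/45 on even rows, 135/90 on odd rows)."""
    i0, i45 = raw[0::2, 0::2], raw[0::2, 1::2]
    i135, i90 = raw[1::2, 0::2], raw[1::2, 1::2]
    return i0, i45, i90, i135

def linear_stokes(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # diagonal component
    aolp = 0.5 * np.arctan2(s2, s1)      # angle of linear polarization
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree of linear pol.
    return s0, aolp, dolp

raw = np.random.rand(1024, 1024)         # placeholder camera frame
s0, aolp, dolp = linear_stokes(*demosaic_polarization(raw))
```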

11.
Light Sci Appl ; 8: 112, 2019.
Article in English | MEDLINE | ID: mdl-31814969

ABSTRACT

Deep learning has been transformative in many fields, motivating the emergence of various optical computing architectures. The diffractive optical network is a recently introduced optical computing framework that merges wave optics with deep-learning methods to design optical neural networks. Diffraction-based all-optical object recognition systems, designed through this framework and fabricated by 3D printing, have been reported to recognize hand-written digits and fashion products, demonstrating all-optical inference and generalization to sub-classes of data. These previous diffractive approaches employed monochromatic coherent light as the illumination source. Here, we report a broadband diffractive optical neural network design that simultaneously processes a continuum of wavelengths generated by a temporally incoherent broadband source to all-optically perform a specific task learned using deep learning. We experimentally validated the success of this broadband diffractive neural network architecture by designing, fabricating and testing seven different multi-layer, diffractive optical systems that transform the optical wavefront generated by a broadband THz pulse to realize (1) a series of tuneable, single-passband and dual-passband spectral filters and (2) spatially controlled wavelength de-multiplexing. Merging the native or engineered dispersion of various material systems with a deep-learning-based design strategy, broadband diffractive neural networks help us engineer the light-matter interaction in 3D, diverging from intuitive and analytical design methods to create task-specific optical components that can all-optically perform deterministic tasks or statistical inference for optical machine learning.

12.
Nat Methods ; 16(12): 1323-1331, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31686039

ABSTRACT

We demonstrate that a deep neural network can be trained to virtually refocus a two-dimensional fluorescence image onto user-defined three-dimensional (3D) surfaces within the sample. Using this method, termed Deep-Z, we imaged the neuronal activity of a Caenorhabditis elegans worm in 3D using a time sequence of fluorescence images acquired at a single focal plane, digitally increasing the depth-of-field by 20-fold without any axial scanning, additional hardware or a trade-off of imaging resolution and speed. Furthermore, we demonstrate that this approach can correct for sample drift, tilt and other aberrations, all digitally performed after the acquisition of a single fluorescence image. This framework also cross-connects different imaging modalities to each other, enabling 3D refocusing of a single wide-field fluorescence image to match confocal microscopy images acquired at different sample planes. Deep-Z has the potential to improve volumetric imaging speed while reducing challenges relating to sample drift, aberration and defocusing that are associated with standard 3D fluorescence microscopy.


Subjects
Deep Learning; Microscopy, Fluorescence/methods; Animals; Caenorhabditis elegans/ultrastructure; Microscopy, Confocal; Neurons/ultrastructure
13.
Light Sci Appl ; 8: 85, 2019.
Article in English | MEDLINE | ID: mdl-31645929

ABSTRACT

Recent advances in deep learning have given rise to a new paradigm of holographic image reconstruction and phase recovery techniques with real-time performance. Through data-driven approaches, these emerging techniques have overcome some of the challenges associated with existing holographic image reconstruction methods while also minimizing the hardware requirements of holography. These recent advances open up a myriad of new opportunities for the use of coherent imaging systems in biomedical and engineering research and related applications.

14.
Sci Rep ; 9(1): 12050, 2019 Aug 19.
Article in English | MEDLINE | ID: mdl-31427691

ABSTRACT

We report resolution enhancement in scanning electron microscopy (SEM) images using a generative adversarial network. We demonstrate the veracity of this deep learning-based super-resolution technique by inferring unresolved features in low-resolution SEM images and comparing them with the accurately co-registered high-resolution SEM images of the same samples. Through spatial frequency analysis, we also report that our method generates images with frequency spectra matching higher resolution SEM images of the same fields-of-view. By using this technique, higher resolution SEM images can be taken faster, while also reducing both electron charging and damage to the samples.
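
The spatial-frequency analysis mentioned above can be sketched as a comparison of radially averaged power spectra between a network output and the co-registered high-resolution SEM image; a generic implementation:

```python
import numpy as np

def radial_power_spectrum(img, n_bins=64):
    """Radially averaged 2D power spectrum of a grayscale image.
    A super-resolved output should recover energy at high radial
    frequencies, matching the true high-resolution image's curve."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)             # radial frequency (pixels)
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.digitize(r.ravel(), bins) - 1
    sums = np.bincount(idx, weights=power.ravel(), minlength=n_bins)[:n_bins]
    counts = np.maximum(np.bincount(idx, minlength=n_bins)[:n_bins], 1)
    return sums / counts

# Compare e.g.: radial_power_spectrum(network_output) vs. radial_power_spectrum(hr_sem)
```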

15.
J Biophotonics ; 12(11): e201900107, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31309728

ABSTRACT

We report a framework based on a generative adversarial network that performs high-fidelity color image reconstruction using a single hologram of a sample that is illuminated simultaneously by light at three different wavelengths. The trained network learns to eliminate missing-phase-related artifacts, and generates an accurate color transformation for the reconstructed image. Our framework is experimentally demonstrated using lung and prostate tissue sections that are labeled with different histological stains. This framework is envisaged to be applicable to point-of-care histopathology and presents a significant improvement in the throughput of coherent microscopy systems given that only a single hologram of the specimen is required for accurate color imaging.


Subjects
Deep Learning; Holography; Image Processing, Computer-Assisted/methods; Microscopy; Color; Humans; Male; Prostate/diagnostic imaging
16.
Nat Biomed Eng ; 3(6): 466-477, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31142829

ABSTRACT

The histological analysis of tissue samples, widely used for disease diagnosis, involves lengthy and laborious tissue preparation. Here, we show that a convolutional neural network trained using a generative adversarial-network model can transform wide-field autofluorescence images of unlabelled tissue sections into images that are equivalent to the bright-field images of histologically stained versions of the same samples. A blind comparison, by board-certified pathologists, of this virtual staining method and standard histological staining using microscopic images of human tissue sections of the salivary gland, thyroid, kidney, liver and lung, and involving different types of stain, showed no major discordances. The virtual-staining method bypasses the typically labour-intensive and costly histological staining procedures, and could be used as a blueprint for the virtual staining of tissue images acquired with other label-free imaging modalities.


Subjects
Deep Learning; Image Processing, Computer-Assisted; Staining and Labeling; Algorithms; Fluorescence; Humans; Liver/diagnostic imaging; Lung/diagnostic imaging; Melanins/metabolism; Neural Networks, Computer; Reference Standards
17.
Light Sci Appl ; 8: 25, 2019.
Article in English | MEDLINE | ID: mdl-30854197

ABSTRACT

Digital holographic microscopy enables the 3D reconstruction of volumetric samples from a single-snapshot hologram. However, unlike a conventional bright-field microscopy image, the quality of holographic reconstructions is compromised by interference fringes as a result of twin images and out-of-plane objects. Here, we demonstrate that cross-modality deep learning using a generative adversarial network (GAN) can endow holographic images of a sample volume with bright-field microscopy contrast, combining the volumetric imaging capability of holography with the speckle- and artifact-free image contrast of incoherent bright-field microscopy. We illustrate the performance of this "bright-field holography" method through the snapshot imaging of bioaerosols distributed in 3D, matching the artifact-free image contrast and axial sectioning performance of a high-NA bright-field microscope. This data-driven deep-learning-based imaging method bridges the contrast gap between coherent and incoherent imaging, and enables the snapshot 3D imaging of objects with bright-field contrast from a single hologram, benefiting from the wave-propagation framework of holography.

18.
Sci Rep ; 9(1): 3926, 2019 Mar 08.
Article in English | MEDLINE | ID: mdl-30850721

ABSTRACT

We present a deep learning framework based on a generative adversarial network (GAN) to perform super-resolution in coherent imaging systems. We demonstrate that this framework can enhance the resolution of both pixel size-limited and diffraction-limited coherent imaging systems. The capabilities of this approach are experimentally validated by super-resolving complex-valued images acquired using a lensfree on-chip holographic microscope, the resolution of which was pixel size-limited. Using the same GAN-based approach, we also improved the resolution of a lens-based holographic imaging system that was limited in resolution by the numerical aperture of its objective lens. This deep learning-based super-resolution framework can be broadly applied to enhance the space-bandwidth product of coherent imaging systems using image data and convolutional neural networks, and provides a rapid, non-iterative method for solving inverse image reconstruction or enhancement problems in optics.


Subjects
Deep Learning; Holography/methods; Image Enhancement/methods; Microscopy/methods; Equipment Design; Female; Holography/instrumentation; Holography/statistics & numerical data; Humans; Lung/diagnostic imaging; Microscopy/instrumentation; Microscopy/statistics & numerical data; Neural Networks, Computer; Papanicolaou Test/methods; Papanicolaou Test/statistics & numerical data; Software; Vaginal Smears/methods; Vaginal Smears/statistics & numerical data
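
A minimal sketch of a GAN training step of the kind used here for super-resolution; the least-squares adversarial form and the L1 fidelity weighting are illustrative assumptions, not the paper's exact objective:

```python
import torch

def gan_step(G, D, opt_g, opt_d, lr_img, hr_img, lam=100.0):
    """One training iteration: G maps a low-resolution input to a
    high-resolution estimate; D discriminates real vs. generated.
    Least-squares GAN losses plus an L1 term (weight `lam` is illustrative)."""
    # --- discriminator update ---
    opt_d.zero_grad()
    fake = G(lr_img).detach()
    d_loss = ((D(hr_img) - 1) ** 2).mean() + (D(fake) ** 2).mean()
    d_loss.backward()
    opt_d.step()
    # --- generator update: adversarial + pixel-wise fidelity ---
    opt_g.zero_grad()
    fake = G(lr_img)
    g_loss = ((D(fake) - 1) ** 2).mean() + lam * (fake - hr_img).abs().mean()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```
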
19.
Light Sci Appl ; 8: 23, 2019.
Article in English | MEDLINE | ID: mdl-30728961

ABSTRACT

Using a deep neural network, we demonstrate a digital staining technique, which we term PhaseStain, to transform the quantitative phase images (QPI) of label-free tissue sections into images that are equivalent to the brightfield microscopy images of the same samples that are histologically stained. Through pairs of image data (QPI and the corresponding brightfield images, acquired after staining), we train a generative adversarial network and demonstrate the effectiveness of this virtual-staining approach using sections of human skin, kidney, and liver tissue, matching the brightfield microscopy images of the same samples stained with Hematoxylin and Eosin, Jones' stain, and Masson's trichrome stain, respectively. This digital-staining framework may further strengthen various uses of label-free QPI techniques in pathology applications and biomedical research in general, by eliminating the need for histological staining, reducing sample preparation related costs and saving time. Our results provide a powerful example of some of the unique opportunities created by data-driven image transformations enabled by deep learning.
