Results 1 - 20 of 51
1.
Article in English | MEDLINE | ID: mdl-38713568

ABSTRACT

A major challenge in applying deep learning to medical imaging is the paucity of annotated data. This study explores the use of synthetic images for data augmentation to address the challenge of limited annotated data in colonoscopy lesion classification. We demonstrate that synthetic colonoscopy images generated by Generative Adversarial Network (GAN) inversion can be used as training data to improve polyp classification performance by deep learning models. We invert pairs of images with the same label to a semantically rich and disentangled latent space and manipulate latent representations to produce new synthetic images. These synthetic images maintain the same label as the input pairs. We perform image modality translation (style transfer) between white light and narrow-band imaging (NBI). We also generate realistic synthetic lesion images by interpolating between original training images to increase the variety of lesion shapes in the training dataset. Our experiments show that GAN inversion can produce multiple colonoscopy data augmentations that improve the downstream polyp classification performance by 2.7% in F1-score and 4.9% in sensitivity over other methods, including state-of-the-art data augmentation. Testing on unseen out-of-domain data also showcased an improvement of 2.9% in F1-score and 2.7% in sensitivity. This approach outperforms other colonoscopy data augmentation techniques and does not require re-training multiple generative models. It also effectively uses information from diverse public datasets, even those not specifically designed for the targeted downstream task, resulting in strong domain generalizability. Project code and model: https://github.com/DurrLab/GAN-Inversion.
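The label-preserving interpolation described above can be reduced to a one-line operation on latent codes. The sketch below is illustrative only (the actual generator and inversion network are not shown); `interpolate_latents` and the toy 4-dimensional codes are hypothetical stand-ins for inverted StyleGAN-style latents of two same-label images.

```python
import numpy as np

def interpolate_latents(w_a, w_b, alphas=(0.25, 0.5, 0.75)):
    # Linear blends of two inverted latent codes; each blend would be decoded
    # by the (not shown) generator into a new synthetic image assumed to keep
    # the lesion label shared by the two inputs.
    return [(1.0 - a) * w_a + a * w_b for a in alphas]

w1 = np.zeros(4)  # toy stand-ins for inverted latent codes
w2 = np.ones(4)
for w in interpolate_latents(w1, w2):
    print(w)
```

Each interpolated code sits on the line segment between the two inputs in the disentangled latent space, which is what lets the synthetic images vary in shape while plausibly retaining the shared label.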

2.
Bioengineering (Basel) ; 11(4)2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38671812

ABSTRACT

To investigate the potential of an affordable cryotherapy device for the accessible treatment of breast cancer, the performance of a novel carbon dioxide-based device was evaluated through both benchtop testing and an in vivo canine model. This novel device was quantitatively compared to a commercial device that utilizes argon gas as the cryogen. The thermal behavior of each device was characterized through calorimetry and by measuring the temperature profiles of iceballs generated in tissue phantoms. A 45 min treatment in a tissue phantom from the carbon dioxide device produced a 1.67 ± 0.06 cm diameter lethal isotherm that was equivalent to a 7 min treatment from the commercial argon-based device, which produced a 1.53 ± 0.15 cm diameter lethal isotherm. An in vivo treatment was performed with the carbon dioxide-based device in one spontaneously occurring canine mammary mass with two standard 10 min freezes. Following cryotherapy, this mass was surgically resected and analyzed for necrosis margins via histopathology. The histopathology margin of necrosis from the in vivo treatment with the carbon dioxide device at 14 days post-cryoablation was 1.57 cm. While carbon dioxide gas has historically been considered an impractical cryogen due to its low working pressure and high boiling point, this study shows that carbon dioxide-based cryotherapy may be equivalent to conventional argon-based cryotherapy in size of the ablation zone in a standard treatment time. The feasibility of the carbon dioxide device demonstrated in this study is an important step towards bringing accessible breast cancer treatment to women in low-resource settings.

3.
Med Image Anal ; 90: 102956, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37713764

ABSTRACT

Screening colonoscopy is an important clinical application for several 3D computer vision techniques, including depth estimation, surface reconstruction, and missing region detection. However, the development, evaluation, and comparison of these techniques in real colonoscopy videos remain largely qualitative due to the difficulty of acquiring ground truth data. In this work, we present a Colonoscopy 3D Video Dataset (C3VD) acquired with a high-definition clinical colonoscope and high-fidelity colon models for benchmarking computer vision methods in colonoscopy. We introduce a novel multimodal 2D-3D registration technique to register optical video sequences with ground truth rendered views of a known 3D model. The different modalities are registered by transforming optical images to depth maps with a Generative Adversarial Network and aligning edge features with an evolutionary optimizer. This registration method achieves an average translation error of 0.321 millimeters and an average rotation error of 0.159 degrees in simulation experiments where error-free ground truth is available. The method also leverages video information, improving registration accuracy by 55.6% for translation and 60.4% for rotation compared to single frame registration. Twenty-two short video sequences were registered to generate 10,015 total frames with paired ground truth depth, surface normals, optical flow, occlusion, six degree-of-freedom pose, coverage maps, and 3D models. The dataset also includes screening videos acquired by a gastroenterologist with paired ground truth pose and 3D surface models. The dataset and registration source code are available at https://durr.jhu.edu/C3VD.

4.
IEEE Trans Biomed Eng ; 70(3): 1053-1061, 2023 03.
Article in English | MEDLINE | ID: mdl-36129868

ABSTRACT

OBJECTIVE: The diagnosis of urinary tract infection (UTI) currently requires precise specimen collection, handling infectious human waste, controlled urine storage, and timely transportation to modern laboratory equipment for analysis. Here we investigate holographic lens free imaging (LFI) to show its promise for enabling automatic urine analysis at the patient bedside. METHODS: We introduce an LFI system capable of resolving important urine clinical biomarkers such as red blood cells, white blood cells, crystals, and casts in 2 mm thick urine phantoms. RESULTS: This approach is sensitive to the particulate concentrations relevant for detecting several clinical urine abnormalities such as hematuria and pyuria, linearly correlating to ground truth hemacytometer measurements with R² = 0.9941 and R² = 0.9973, respectively. We show that LFI can estimate E. coli concentrations of 10³ to 10⁵ cells/mL by counting individual cells, and is sensitive to concentrations of 10⁵ to 10⁸ cells/mL by analyzing hologram texture. Further, LFI measurements of blood cell concentrations are relatively insensitive to changes in bacteria concentrations over seven orders of magnitude. Lastly, LFI reveals clear differences between UTI-positive and UTI-negative urine from human patients. CONCLUSION: LFI is sensitive to clinically-relevant concentrations of bacteria, blood cells, and other sediment in large urine volumes. SIGNIFICANCE: Together, these results show promise for LFI as a tool for urine screening, potentially offering early, point-of-care detection of UTI and other pathological processes.


Subjects
Urinalysis, Urinary Tract Infections, Urinalysis/instrumentation, Urinalysis/methods, Urinary Tract Infections/diagnostic imaging, Point-of-Care Testing/standards, Urine/cytology, Urine/microbiology, Holography, Humans, Sensitivity and Specificity
5.
Opt Express ; 30(19): 33433-33448, 2022 Sep 12.
Article in English | MEDLINE | ID: mdl-36242380

ABSTRACT

In-line lensless digital holography has great potential in multiple applications; however, reconstructing high-quality images from a single recorded hologram is challenging due to the loss of phase information. Typical reconstruction methods are based on solving a regularized inverse problem and work well under suitable image priors, but they are extremely sensitive to mismatches between the forward model and the actual imaging system. This paper aims to improve the robustness of such algorithms by introducing the adaptive sparse reconstruction method, ASR, which learns a properly constrained point spread function (PSF) directly from data, as opposed to solely relying on physics-based approximations of it. ASR jointly performs holographic reconstruction, PSF estimation, and phase retrieval in an unsupervised way by maximizing the sparsity of the reconstructed images. Like traditional methods, ASR uses the image formation model along with a sparsity prior, which, unlike recent deep learning approaches, allows for unsupervised reconstruction with as little as one sample. Experimental results in synthetic and real data show the advantages of ASR over traditional reconstruction methods, especially in cases where the theoretical PSF does not match that of the actual system.
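A core building block of the sparsity-regularized inverse problems this abstract refers to is the L1 proximal operator (soft-thresholding), the shrinkage step in ISTA-style solvers. The snippet below is a generic illustration of that step, not the authors' ASR algorithm, whose joint PSF estimation and phase retrieval are considerably more involved.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1: shrinks each coefficient toward zero
    # by t and zeroes anything smaller in magnitude -- the canonical
    # sparsity-promoting step in iterative holographic reconstruction.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

print(soft_threshold(np.array([-3.0, 0.5, 2.0]), 1.0))  # -> [-2.  0.  1.]
```

Interleaving this shrinkage with a data-consistency step (backpropagating the hologram through the current PSF estimate) is the general pattern that sparsity-maximizing reconstructions follow.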

6.
Sci Rep ; 12(1): 3714, 2022 03 08.
Article in English | MEDLINE | ID: mdl-35260664

ABSTRACT

The aim of this work is to evaluate the performance of a novel algorithm that combines dynamic wavefront aberrometry data and descriptors of the retinal image quality from objective autorefractor measurements to predict subjective refraction. We conducted a retrospective study of the prediction accuracy and precision of the novel algorithm compared to standard search-based retinal image quality optimization algorithms. Dynamic measurements from 34 adult patients were taken with a handheld wavefront autorefractor and static data was obtained with a high-end desktop wavefront aberrometer. The search-based algorithms did not significantly improve the results of the desktop system, while the dynamic approach was able to simultaneously reduce the standard deviation (up to a 15% reduction for spherical equivalent power) and the mean bias error of the predictions (up to an 80% reduction for spherical equivalent power) for the handheld aberrometer. These results suggest that dynamic retinal image analysis can substantially improve the accuracy and precision of the portable wavefront autorefractor relative to subjective refraction.


Subjects
Refractive Errors, Adult, Humans, Ophthalmologic Surgical Procedures, Ocular Refraction, Refractive Errors/diagnosis, Retrospective Studies, Vision Tests
7.
Biomed Opt Express ; 12(5): 2575-2585, 2021 May 01.
Article in English | MEDLINE | ID: mdl-34123489

ABSTRACT

Oblique plane microscopy (OPM) enables high speed, volumetric fluorescence imaging through a single-objective geometry. While these advantages have positioned OPM as a valuable tool to probe biological questions in animal models, its potential for in vivo human imaging is largely unexplored due to its typical use with exogenous fluorescent dyes. Here we introduce a scattering-contrast oblique plane microscope (sOPM) and demonstrate label-free imaging of blood cells flowing through human capillaries in vivo. The sOPM illuminates a capillary bed in the ventral tongue with an oblique light sheet, and images side- and back- scattered signal from blood cells. By synchronizing the sOPM with a conventional capillaroscope, we acquire paired widefield and axial images of blood cells flowing through a capillary loop. The widefield capillaroscope image provides absorption contrast and confirms the presence of red blood cells (RBCs), while the sOPM image may aid in determining whether optical absorption gaps (OAGs) between RBCs have cellular or acellular composition. Further, we demonstrate consequential differences between fluorescence and scattering versions of OPM by imaging the same polystyrene beads sequentially with each technique. Lastly, we substantiate in vivo observations by imaging isolated red blood cells, white blood cells, and platelets in vitro using 3D agar phantoms. These results demonstrate a promising new avenue towards in vivo blood analysis.

8.
Lasers Surg Med ; 53(6): 748-775, 2021 08.
Article in English | MEDLINE | ID: mdl-34015146

ABSTRACT

This article reviews deep learning applications in biomedical optics with a particular emphasis on image formation. The review is organized by imaging domains within biomedical optics and includes microscopy, fluorescence lifetime imaging, in vivo microscopy, widefield endoscopy, optical coherence tomography, photoacoustic imaging, diffuse tomography, and functional optical brain imaging. For each of these domains, we summarize how deep learning has been applied and highlight methods by which deep learning can enable new capabilities for optics in medicine. Challenges and opportunities to improve translation and adoption of deep learning in biomedical optics are also summarized. Lasers Surg. Med. © 2021 Wiley Periodicals LLC.


Subjects
Deep Learning, Microscopy, Optical Imaging, Optics and Photonics, Optical Coherence Tomography
9.
Med Image Anal ; 71: 102058, 2021 07.
Article in English | MEDLINE | ID: mdl-33930829

ABSTRACT

Deep learning techniques hold promise to develop dense topography reconstruction and pose estimation methods for endoscopic videos. However, currently available datasets do not support effective quantitative benchmarking. In this paper, we introduce a comprehensive endoscopic SLAM dataset consisting of 3D point cloud data for six porcine organs, capsule and standard endoscopy recordings, synthetically generated data, as well as a conventional clinical endoscope recording of a phantom colon with computed tomography (CT) scan ground truth. A Panda robotic arm, two commercially available capsule endoscopes, three conventional endoscopes with different camera properties, two high precision 3D scanners, and a CT scanner were employed to collect data from eight ex-vivo porcine gastrointestinal (GI)-tract organs and a silicone colon phantom model. In total, 35 sub-datasets are provided with 6D pose ground truth for the ex-vivo part: 18 sub-datasets for colon, 12 sub-datasets for stomach, and 5 sub-datasets for small intestine, while four of these contain polyp-mimicking elevations carried out by an expert gastroenterologist. To verify the applicability of this data for use with real clinical systems, we recorded a video sequence with a state-of-the-art colonoscope from a full-representation silicone colon phantom. Synthetic capsule endoscopy frames from stomach, colon, and small intestine with both depth and pose annotations are included to facilitate the study of simulation-to-real transfer learning algorithms. Additionally, we propose Endo-SfMLearner, an unsupervised monocular depth and pose estimation method that combines residual networks with a spatial attention module to direct the network's focus toward distinguishable and highly textured tissue regions. The proposed approach makes use of a brightness-aware photometric loss to improve robustness under the fast frame-to-frame illumination changes that are commonly seen in endoscopic videos. To exemplify the use-case of the EndoSLAM dataset, the performance of Endo-SfMLearner is extensively compared with the state-of-the-art: SC-SfMLearner, Monodepth2, and SfMLearner. The codes and the link for the dataset are publicly available at https://github.com/CapsuleEndoscope/EndoSLAM. A video demonstrating the experimental setup and procedure is accessible as Supplementary Video 1.


Subjects
Algorithms, Capsule Endoscopy, Animals, Computer Simulation, Imaging Phantoms, Swine, X-Ray Computed Tomography
10.
IEEE Access ; 9: 631-640, 2021.
Article in English | MEDLINE | ID: mdl-33747680

ABSTRACT

While data-driven approaches excel at many image analysis tasks, the performance of these approaches is often limited by a shortage of annotated data available for training. Recent work in semi-supervised learning has shown that meaningful representations of images can be obtained from training with large quantities of unlabeled data, and that these representations can improve the performance of supervised tasks. Here, we demonstrate that an unsupervised jigsaw learning task, in combination with supervised training, results in up to a 9.8% improvement in correctly classifying lesions in colonoscopy images when compared to a fully-supervised baseline. We additionally benchmark improvements in domain adaptation and out-of-distribution detection, and demonstrate that semi-supervised learning outperforms supervised learning in both cases. In colonoscopy applications, these metrics are important given the skill required for endoscopic assessment of lesions, the wide variety of endoscopy systems in use, and the homogeneity that is typical of labeled datasets.
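The jigsaw pretext task mentioned above can be summarized concretely: tile an image, scramble the tiles with one of a fixed bank of permutations, and train an auxiliary head to classify which permutation was applied, which forces the network to learn spatial structure from unlabeled data. The sketch below shows only the unlabeled sample generation; the grid size, permutation-bank size, and function name are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def make_jigsaw_sample(image, grid=3, n_perms=10, seed=0):
    # Cut an (H, W) image into grid x grid tiles, reassemble them under one
    # of n_perms fixed permutations, and return the shuffled image together
    # with the permutation index the auxiliary network must classify.
    rng = np.random.default_rng(seed)
    h, w = image.shape
    th, tw = h // grid, w // grid
    tiles = [image[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
             for r in range(grid) for c in range(grid)]
    perms = [rng.permutation(grid * grid) for _ in range(n_perms)]
    idx = int(rng.integers(n_perms))  # pseudo-label for the pretext task
    rows = [np.hstack([tiles[perms[idx][r * grid + c]] for c in range(grid)])
            for r in range(grid)]
    return np.vstack(rows), idx

img = np.arange(36.0).reshape(6, 6)
shuffled, perm_label = make_jigsaw_sample(img)
```

Because the pseudo-label comes from the scrambling itself, every unlabeled colonoscopy frame yields a supervised training pair for free.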

11.
Opt Lett ; 46(3): 673-676, 2021 Feb 01.
Article in English | MEDLINE | ID: mdl-33528438

ABSTRACT

Spatial frequency domain imaging can map tissue scattering and absorption properties over a wide field of view, making it useful for clinical applications such as wound assessment and surgical guidance. This technique has previously required the projection of fully characterized illumination patterns. Here, we show that random and unknown speckle illumination can be used to sample the modulation transfer function of tissues at known spatial frequencies, allowing the quantitative mapping of optical properties with simple laser diode illumination. We compute low- and high-spatial frequency response parameters from the local power spectral density for each pixel and use a lookup table to accurately estimate absorption and scattering coefficients in tissue phantoms, in vivo human hand, and ex vivo swine esophagus. Because speckle patterns can be generated over a large depth of field and field of view with simple coherent illumination, this approach may enable optical property mapping in new form-factors and applications, including endoscopy.
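The per-pixel computation described above — a local power spectral density split into low- and high-spatial-frequency response — can be sketched for a single patch as below. The band edge (`low_cut`, a fraction of the Nyquist radius) and the function name are illustrative assumptions; the paper's lookup-table step from these two responses to absorption and scattering coefficients is not shown.

```python
import numpy as np

def band_energies(patch, low_cut=0.1):
    # Local power spectral density of a mean-subtracted patch, split into
    # low- and high-spatial-frequency energy. In speckle-illumination SFDI,
    # these two responses index a lookup table of optical properties.
    f = np.fft.fftshift(np.fft.fft2(patch - patch.mean()))
    psd = np.abs(f) ** 2
    h, w = patch.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (max(h, w) / 2)  # normalized radius
    return psd[r <= low_cut].sum(), psd[r > low_cut].sum()
```

A flat patch yields near-zero energy in both bands, while fine speckle texture loads the high-frequency band — the contrast between the two is what encodes scattering and absorption.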

12.
Med Image Anal ; 70: 101990, 2021 05.
Article in English | MEDLINE | ID: mdl-33609920

ABSTRACT

Current capsule endoscopes and next-generation robotic capsules for diagnosis and treatment of gastrointestinal diseases are complex cyber-physical platforms that must orchestrate complex software and hardware functions. The desired tasks for these systems include visual localization, depth estimation, 3D mapping, disease detection and segmentation, automated navigation, active control, path realization, and optional therapeutic modules such as targeted drug delivery and biopsy sampling. Data-driven algorithms promise to enable many advanced functionalities for capsule endoscopes, but real-world data is challenging to obtain. Physically-realistic simulations providing synthetic data have emerged as a solution to the development of data-driven algorithms. In this work, we present a comprehensive simulation platform for capsule endoscopy operations and introduce VR-Caps, a virtual active capsule environment that simulates a range of normal and abnormal tissue conditions (e.g., inflated, dry, wet) and varied organ types, capsule endoscope designs (e.g., mono, stereo, dual, and 360° camera), and the type, number, strength, and placement of internal and external magnetic sources that enable active locomotion. VR-Caps makes it possible to either independently or jointly develop, optimize, and test medical imaging and analysis software for current and next-generation endoscopic capsule systems. To validate this approach, we train state-of-the-art deep neural networks to accomplish various medical image analysis tasks using simulated data from VR-Caps and evaluate the performance of these models on real medical data. Results demonstrate the usefulness and effectiveness of the proposed virtual platform in developing algorithms that quantify fractional coverage, camera trajectory, 3D map reconstruction, and disease classification. All of the code, pre-trained weights, and created 3D organ models of the virtual environment, with detailed instructions on how to set up and use the environment, are made publicly available at https://github.com/CapsuleEndoscope/VirtualCapsuleEndoscopy, and a video demonstration can be seen in the supplementary videos (Video-I).


Subjects
Capsule Endoscopy, Robotics, Algorithms, Computer Simulation, Endoscopy, Humans, Neural Networks (Computer)
13.
J Biomed Opt ; 25(11)2020 11.
Article in English | MEDLINE | ID: mdl-33251783

ABSTRACT

SIGNIFICANCE: Spatial frequency-domain imaging (SFDI) is a powerful technique for mapping tissue oxygen saturation over a wide field of view. However, current SFDI methods either require a sequence of several images with different illumination patterns or, in the case of single-snapshot optical properties (SSOP), introduce artifacts and sacrifice accuracy. AIM: We introduce OxyGAN, a data-driven, content-aware method to estimate tissue oxygenation directly from single structured-light images. APPROACH: OxyGAN is an end-to-end approach that uses supervised generative adversarial networks. Conventional SFDI is used to obtain ground truth tissue oxygenation maps for ex vivo human esophagi, in vivo hands and feet, and an in vivo pig colon sample under 659- and 851-nm sinusoidal illumination. We benchmark OxyGAN by comparing it with SSOP and a two-step hybrid technique that uses a previously developed deep learning model to predict optical properties followed by a physical model to calculate tissue oxygenation. RESULTS: When tested on human feet, cross-validated OxyGAN maps tissue oxygenation with an accuracy of 96.5%. When applied to sample types not included in the training set, such as human hands and pig colon, OxyGAN achieves a 93% accuracy, demonstrating robustness to various tissue types. On average, OxyGAN outperforms SSOP and a hybrid model in estimating tissue oxygenation by 24.9% and 24.7%, respectively. Finally, we optimize OxyGAN inference so that oxygenation maps are computed ∼10 times faster than previous work, enabling video-rate, 25-Hz imaging. CONCLUSIONS: Due to its rapid acquisition and processing speed, OxyGAN has the potential to enable real-time, high-fidelity tissue oxygenation mapping that may be useful for many clinical applications.


Assuntos
Aprendizado Profundo , Animais , Mãos , Pulmão , Suínos
14.
IEEE Trans Med Imaging ; 39(12): 4297-4309, 2020 12.
Article in English | MEDLINE | ID: mdl-32795966

ABSTRACT

Although wireless capsule endoscopy is the preferred modality for diagnosis and assessment of small bowel diseases, the poor camera resolution is a substantial limitation for both subjective and automated diagnostics. Enhanced-resolution endoscopy has been shown to improve adenoma detection rate for conventional endoscopy and is likely to do the same for capsule endoscopy. In this work, we propose and quantitatively validate EndoL2H, a novel framework to learn a mapping from low-to-high-resolution endoscopic images. We combine conditional adversarial networks with a spatial attention block to improve the resolution by factors of up to 8×, 10×, and 12×. Quantitative and qualitative studies demonstrate the superiority of EndoL2H over the state-of-the-art deep super-resolution methods Deep Back-Projection Networks (DBPN), Deep Residual Channel Attention Networks (RCAN), and Super Resolution Generative Adversarial Network (SRGAN). Mean Opinion Score (MOS) tests were performed by 30 gastroenterologists to qualitatively assess and confirm the clinical relevance of the approach. EndoL2H is generally applicable to any endoscopic capsule system and has the potential to improve diagnosis and better harness computational approaches for polyp detection and characterization. Our code and trained models are available at https://github.com/CapsuleEndoscope/EndoL2H.


Subjects
Capsule Endoscopy
15.
Biomed Opt Express ; 11(6): 3091-3094, 2020 Jun 01.
Article in English | MEDLINE | ID: mdl-32637243

ABSTRACT

This feature issue of Biomedical Optics Express presents a cross-section of interesting and emerging work of relevance to optical technologies in low-resource settings. In particular, the technologies described here aim to address challenges to meeting healthcare needs in resource-constrained environments, including in rural and underserved areas. This collection of 18 papers includes papers on both optical system design and image analysis, with applications demonstrated for ex vivo and in vivo use. All together, these works portray the importance of global health research to the scientific community and the role that optics can play in addressing some of the world's most pressing healthcare challenges.

16.
Opt Express ; 28(13): 19641-19654, 2020 Jun 22.
Article in English | MEDLINE | ID: mdl-32672237

ABSTRACT

Poor access to eye care is a major global challenge that could be ameliorated by low-cost, portable, and easy-to-use diagnostic technologies. Diffuser-based imaging has the potential to enable inexpensive, compact optical systems that can reconstruct a focused image of an object over a range of defocus errors. Here, we present a diffuser-based computational funduscope that reconstructs important clinical features of a model eye. Compared to existing diffuser-imager architectures, our system features an infinite-conjugate design by relaying the ocular lens onto the diffuser. This offers shift-invariance across a wide field-of-view (FOV) and an invariant magnification across an extended depth range. Experimentally, we demonstrate fundus image reconstruction over a 33° FOV and robustness to ±4D refractive error using a constant point-spread-function. Combined with diffuser-based wavefront sensing, this technology could enable combined ocular aberrometry and funduscopic screening through a single diffuser sensor.


Subjects
Diagnostic Imaging/instrumentation, Computer-Assisted Image Processing/instrumentation, Ophthalmoscopes, Retina/diagnostic imaging, Equipment Design, Humans, Light, Theoretical Models
17.
Biomed Opt Express ; 11(5): 2373-2382, 2020 May 01.
Article in English | MEDLINE | ID: mdl-32499930

ABSTRACT

We present a non-invasive, label-free method of imaging blood cells flowing through human capillaries in vivo using oblique back-illumination capillaroscopy (OBC). Green light illumination allows simultaneous phase and absorption contrast, enhancing the ability to distinguish red and white blood cells. Single-sided illumination through the objective lens enables 200 Hz imaging with close illumination-detection separation and a simplified setup. Phase contrast is optimized when the illumination axis is offset from the detection axis by approximately 225 µm when imaging ∼80 µm deep in phantoms and human ventral tongue. We demonstrate high-speed imaging of individual red blood cells, white blood cells with sub-cellular detail, and platelets flowing through capillaries and vessels in human tongue. A custom pneumatic cap placed over the objective lens stabilizes the field of view, enabling longitudinal imaging of a single capillary for up to seven minutes. We present high-quality images of blood cells in individuals with Fitzpatrick skin phototypes II, IV, and VI, showing that the technique is robust to high peripheral melanin concentration. The signal quality, speed, simplicity, and robustness of this approach underscores its potential for non-invasive blood cell counting.

18.
Biomed Opt Express ; 11(5): 2560-2569, 2020 May 01.
Article in English | MEDLINE | ID: mdl-32499943

ABSTRACT

Targeted vector control strategies aiming to prevent mosquito-borne disease are severely limited by the logistical burden of vector surveillance: the monitoring of an area to understand mosquito species composition, abundance, and spatial distribution. We describe the development of an imaging system within a mosquito trap to remotely identify caught mosquitoes, including selection of the image resolution requirement, a design to meet that specification, and evaluation of the system. The necessary trap image resolution was determined to be 16 lp/mm, or 31.25 µm. An optics system meeting these specifications was implemented in a BG-GAT mosquito trap. Its ability to provide images suitable for accurate specimen identification was evaluated by providing entomologists with images of individual specimens, taken either with a microscope or within the trap, asking them to provide a species identification, and then comparing these results. No difference in identification accuracy between the microscope and the trap images was found; however, due to limitations of human species classification from a single image, the system is only able to provide accurate genus-level mosquito classification. Further integration of this system with machine learning computer vision algorithms has the potential to provide near-real-time mosquito surveillance data at the species level.

19.
Biomed Opt Express ; 11(4): 2268-2276, 2020 Apr 01.
Article in English | MEDLINE | ID: mdl-32341882

ABSTRACT

Quantification of optical absorption gaps in nailfold capillaries has recently shown promise as a non-invasive technique for neutropenia screening. Here we demonstrate a low-cost, portable attachment to a mobile phone that can resolve optical absorption gaps in nailfold capillaries using a reverse lens technique and oblique 520 nm illumination. Resolution of <4 µm within a 1 mm² on-axis region is demonstrated, and wide field of view (3.5 mm × 4.8 mm) imaging is achieved with resolution of <6 µm in the periphery. Optical absorption gaps (OAGs) are visible in superficial capillary loops of a healthy human participant by an ∼8-fold difference in contrast-to-noise ratio with respect to red blood cell absorption contrast. High speed video capillaroscopy up to 240 frames per second (fps) is possible, though 60 fps is sufficient to resolve an average frequency of 37 OAGs/minute passing through nailfold capillaries. The simplicity and portability of this technique may enable the development of an effective non-invasive tool for white blood cell screening in point-of-care and global health settings.

20.
IEEE Trans Med Imaging ; 39(6): 1988-1999, 2020 06.
Article in English | MEDLINE | ID: mdl-31899416

ABSTRACT

We present a deep learning framework for wide-field, content-aware estimation of absorption and scattering coefficients of tissues, called Generative Adversarial Network Prediction of Optical Properties (GANPOP). Spatial frequency domain imaging is used to obtain ground-truth optical properties at 660 nm from in vivo human hands and feet, freshly resected human esophagectomy samples, and homogeneous tissue phantoms. Images of objects with either flat-field or structured illumination are paired with registered optical property maps and are used to train conditional generative adversarial networks that estimate optical properties from a single input image. We benchmark this approach by comparing GANPOP to a single-snapshot optical property (SSOP) technique, using a normalized mean absolute error (NMAE) metric. In human gastrointestinal specimens, GANPOP with a single structured-light input image estimates the reduced scattering and absorption coefficients with 60% higher accuracy than SSOP while GANPOP with a single flat-field illumination image achieves similar accuracy to SSOP. When applied to both in vivo and ex vivo swine tissues, a GANPOP model trained solely on structured-illumination images of human specimens and phantoms estimates optical properties with approximately 46% improvement over SSOP, indicating adaptability to new, unseen tissue types. Given a training set that appropriately spans the target domain, GANPOP has the potential to enable rapid and accurate wide-field measurements of optical properties.
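The NMAE metric used above to benchmark GANPOP against SSOP can be written in a few lines. This is one common definition (mean absolute error normalized by the mean ground-truth value); the paper's exact normalization may differ, and the example values are made up.

```python
import numpy as np

def nmae(pred, truth):
    # Normalized mean absolute error: mean |pred - truth| divided by the
    # mean ground-truth value, so errors in absorption maps (small values)
    # and scattering maps (larger values) are comparable.
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return float(np.mean(np.abs(pred - truth)) / np.mean(truth))

# Toy absorption-coefficient maps (units of 1/mm, values invented)
print(nmae([0.011, 0.020], [0.010, 0.022]))
```

Normalizing by the ground-truth mean is what allows a single percentage-style figure, such as the 60% accuracy improvement quoted above, to be compared across optical properties with very different scales.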


Subjects
Imaging Phantoms, Animals, Swine