Results 1 - 20 of 52
1.
Opt Express ; 30(19): 33433-33448, 2022 Sep 12.
Article in English | MEDLINE | ID: mdl-36242380

ABSTRACT

In-line lensless digital holography has great potential in multiple applications; however, reconstructing high-quality images from a single recorded hologram is challenging due to the loss of phase information. Typical reconstruction methods are based on solving a regularized inverse problem and work well under suitable image priors, but they are extremely sensitive to mismatches between the forward model and the actual imaging system. This paper aims to improve the robustness of such algorithms by introducing the adaptive sparse reconstruction method, ASR, which learns a properly constrained point spread function (PSF) directly from data, as opposed to solely relying on physics-based approximations of it. ASR jointly performs holographic reconstruction, PSF estimation, and phase retrieval in an unsupervised way by maximizing the sparsity of the reconstructed images. Like traditional methods, ASR uses the image formation model along with a sparsity prior, which, unlike recent deep learning approaches, allows for unsupervised reconstruction with as little as one sample. Experimental results in synthetic and real data show the advantages of ASR over traditional reconstruction methods, especially in cases where the theoretical PSF does not match that of the actual system.
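The physics-based forward model the abstract refers to can be made concrete with the standard angular spectrum method. The sketch below (plain NumPy; all parameter values are hypothetical) simulates an in-line hologram of a small absorber and refocuses it by naive back-propagation — the baseline reconstruction whose twin-image artifacts sparsity-driven phase retrieval is designed to suppress. ASR's learned PSF and sparsity-maximizing optimization are not reproduced here.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z (meters) with the
    angular spectrum method -- the physics-based forward model that
    in-line holography reconstructions are built on."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                     # spatial frequencies (1/m)
    fxx, fyy = np.meshgrid(fx, fx)
    # Free-space transfer function; evanescent components are zeroed out.
    arg = np.maximum(1.0 / wavelength**2 - fxx**2 - fyy**2, 0.0)
    transfer = np.exp(2j * np.pi * z * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Simulate an in-line hologram of a small absorber, then refocus it by
# back-propagation (the naive single-shot reconstruction, which retains
# the twin-image artifact).
n, dx, wl, z = 256, 2e-6, 0.5e-6, 1e-3               # pixels, pitch, wavelength, depth
obj = np.ones((n, n), dtype=complex)
obj[120:136, 120:136] = 0.2                          # absorbing square
hologram = np.abs(angular_spectrum_propagate(obj, wl, dx, z)) ** 2
recon = angular_spectrum_propagate(np.sqrt(hologram).astype(complex), wl, dx, -z)
```

The refocused amplitude recovers the dark square at the object plane even though only the hologram intensity was kept.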

2.
Opt Lett ; 46(3): 673-676, 2021 Feb 01.
Article in English | MEDLINE | ID: mdl-33528438

ABSTRACT

Spatial frequency domain imaging can map tissue scattering and absorption properties over a wide field of view, making it useful for clinical applications such as wound assessment and surgical guidance. This technique has previously required the projection of fully characterized illumination patterns. Here, we show that random and unknown speckle illumination can be used to sample the modulation transfer function of tissues at known spatial frequencies, allowing the quantitative mapping of optical properties with simple laser diode illumination. We compute low- and high-spatial frequency response parameters from the local power spectral density for each pixel and use a lookup table to accurately estimate absorption and scattering coefficients in tissue phantoms, in vivo human hand, and ex vivo swine esophagus. Because speckle patterns can be generated over a large depth of field and field of view with simple coherent illumination, this approach may enable optical property mapping in new form-factors and applications, including endoscopy.
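The per-pixel "local power spectral density" idea can be illustrated with a minimal band-power computation. The sketch below is an assumption-laden toy (band edges, patch size, and test patterns are invented, and the paper's lookup-table step from frequency response to optical properties is omitted): it shows that a smooth patch concentrates spectral power at low spatial frequencies while a speckle-like patch does not.

```python
import numpy as np

def band_power(patch, f_lo, f_hi):
    """Integrate a patch's power spectral density over a radial
    spatial-frequency band [f_lo, f_hi), in cycles/pixel."""
    n = patch.shape[0]
    psd = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean()))) ** 2
    f = np.fft.fftshift(np.fft.fftfreq(n))
    fr = np.hypot(*np.meshgrid(f, f))
    return float(psd[(fr >= f_lo) & (fr < f_hi)].sum())

rng = np.random.default_rng(0)
smooth = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 64))  # slow gradient
speckle = rng.standard_normal((64, 64))                          # noise-like pattern
lo_smooth, hi_smooth = band_power(smooth, 0.0, 0.1), band_power(smooth, 0.1, 0.5)
lo_spk, hi_spk = band_power(speckle, 0.0, 0.1), band_power(speckle, 0.1, 0.5)
```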

3.
Lasers Surg Med ; 53(6): 748-775, 2021 08.
Article in English | MEDLINE | ID: mdl-34015146

ABSTRACT

This article reviews deep learning applications in biomedical optics with a particular emphasis on image formation. The review is organized by imaging domains within biomedical optics and includes microscopy, fluorescence lifetime imaging, in vivo microscopy, widefield endoscopy, optical coherence tomography, photoacoustic imaging, diffuse tomography, and functional optical brain imaging. For each of these domains, we summarize how deep learning has been applied and highlight methods by which deep learning can enable new capabilities for optics in medicine. Challenges and opportunities to improve translation and adoption of deep learning in biomedical optics are also summarized. Lasers Surg. Med. © 2021 Wiley Periodicals LLC.


Subject(s)
Deep Learning, Microscopy, Optical Imaging, Optics and Photonics, Optical Coherence Tomography
4.
Opt Express ; 28(13): 19641-19654, 2020 Jun 22.
Article in English | MEDLINE | ID: mdl-32672237

ABSTRACT

Poor access to eye care is a major global challenge that could be ameliorated by low-cost, portable, and easy-to-use diagnostic technologies. Diffuser-based imaging has the potential to enable inexpensive, compact optical systems that can reconstruct a focused image of an object over a range of defocus errors. Here, we present a diffuser-based computational funduscope that reconstructs important clinical features of a model eye. Compared to existing diffuser-imager architectures, our system features an infinite-conjugate design by relaying the ocular lens onto the diffuser. This offers shift-invariance across a wide field-of-view (FOV) and an invariant magnification across an extended depth range. Experimentally, we demonstrate fundus image reconstruction over a 33° FOV and robustness to ±4D refractive error using a constant point-spread-function. Combined with diffuser-based wavefront sensing, this technology could enable combined ocular aberrometry and funduscopic screening through a single diffuser sensor.


Subject(s)
Diagnostic Imaging/instrumentation, Image Processing, Computer-Assisted/instrumentation, Ophthalmoscopes, Retina/diagnostic imaging, Equipment Design, Humans, Light, Models, Theoretical
5.
Optom Vis Sci ; 96(10): 726-732, 2019 10.
Article in English | MEDLINE | ID: mdl-31592955

ABSTRACT

SIGNIFICANCE: There is a critical need for tools that increase the accessibility of eye care to address the most common cause of vision impairment: uncorrected refractive errors. This work assesses the performance of an affordable autorefractor, which could help reduce the burden of this health care problem in low-resource communities. PURPOSE: The purpose of this study was to validate the commercial version of a portable wavefront autorefractor for measuring refractive errors. METHODS: Refraction was performed without cycloplegia using (1) a standard clinical procedure consisting of an objective measurement with a desktop autorefractor followed by subjective refraction (SR) and (2) with the handheld autorefractor. Agreement between both methods was evaluated using Bland-Altman analysis and by comparing the visual acuity (VA) with trial frames set to the resulting measurements. RESULTS: The study was conducted on 54 patients (33.9 ± 14.1 years of age) with a spherical equivalent (M) refraction determined by SR ranging from -7.25 to 4.25 D (mean ± SD, -0.93 ± 1.95 D). Mean differences between the portable autorefractor and SR were 0.09 ± 0.39, -0.06 ± 0.13, and 0.02 ± 0.12 D for M, J0, and J45, respectively. The device agreed within 0.5 D of SR in 87% of the eyes for spherical equivalent power. The average VAs achieved from trial lenses set to the wavefront autorefractor and SR results were 0.02 ± 0.015 and 0.015 ± 0.042 logMAR units, respectively. Visual acuity resulting from correction based on the device was the same as or better than that achieved by SR in 87% of the eyes. CONCLUSIONS: This study found excellent agreement between the measurements obtained with the portable autorefractor and the prescriptions based on SR and only small differences between the VA achieved by either method.
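The M, J0, and J45 components reported above are the standard power-vector decomposition of a sphere/cylinder/axis prescription (Thibos notation). A minimal sketch of the conversion, with a hypothetical prescription as input:

```python
import numpy as np

def power_vector(sphere, cyl, axis_deg):
    """Convert a sphere/cylinder/axis prescription (diopters, degrees) to
    the (M, J0, J45) power-vector components used to compare refractions."""
    ax = np.deg2rad(axis_deg)
    m = sphere + cyl / 2.0                  # spherical equivalent
    j0 = -(cyl / 2.0) * np.cos(2.0 * ax)    # with-/against-the-rule astigmatism
    j45 = -(cyl / 2.0) * np.sin(2.0 * ax)   # oblique astigmatism
    return m, j0, j45

# Hypothetical prescription: -1.00 DS / -0.50 DC x 180
m, j0, j45 = power_vector(-1.00, -0.50, 180.0)
```

Because the three components live in a common vector space, differences between two refraction methods can be averaged and compared per component, as in the Bland-Altman analysis above.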


Asunto(s)
Aberrometría/instrumentación , Errores de Refracción/diagnóstico , Aberrometría/economía , Adulto , Anciano , Femenino , Humanos , Masculino , Persona de Mediana Edad , Evaluación de Resultado en la Atención de Salud , Presbiopía/fisiopatología , Refracción Ocular/fisiología , Errores de Refracción/fisiopatología , Reproducibilidad de los Resultados , Agudeza Visual/fisiología , Adulto Joven
6.
Annu Rev Biomed Eng ; 16: 131-53, 2014 Jul 11.
Article in English | MEDLINE | ID: mdl-24905874

ABSTRACT

Worldwide, more than one billion people suffer from poor vision because they do not have the eyeglasses they need. Their uncorrected refractive errors are a major cause of global disability and drastically reduce productivity, educational opportunities, and overall quality of life. The problem persists most prevalently in low-resource settings, even though prescription eyeglasses serve as a simple, effective, and largely affordable solution. In this review, we discuss barriers to obtaining, and approaches for providing, refractive eye care. We also highlight emerging technologies that are being developed to increase the accessibility of eye care. Finally, we describe opportunities that exist for engineers to develop new solutions to positively impact the diagnosis and treatment of correctable refractive errors in low-resource settings.


Asunto(s)
Anteojos , Refracción Ocular , Errores de Refracción/terapia , Baja Visión/terapia , Salud Global , Accesibilidad a los Servicios de Salud , Humanos , Pobreza , Presbiopía/epidemiología , Presbiopía/terapia , Prevalencia , Errores de Refracción/epidemiología , Retina/fisiología , Retina/fisiopatología , Retinoscopía/métodos , Baja Visión/epidemiología , Visión Ocular
7.
Optom Vis Sci ; 92(12): 1140-7, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26580271

ABSTRACT

PURPOSE: To introduce a novel autorefractor design that is intended to be manufacturable at low cost and evaluate its performance in measuring refractive errors. METHODS: We developed a handheld, open-view autorefractor (the "QuickSee" [QS]) that uses a simplified approach to wavefront sensing that forgoes moving parts and expensive components. Adult subjects (n = 41) were recruited to undergo noncycloplegic refraction with three methods: (1) a QS prototype, (2) a Grand Seiko WR-5100K (GS) autorefractor, and (3) subjective refraction (SR). Agreements between the QS and GS were evaluated using a Bland-Altman analysis. The accuracy of both autorefractors was evaluated using SR as the clinical gold standard. RESULTS: The spherical equivalent powers measured from both autorefractors correlate well with SR, with identical correlation coefficients of r = 0.97. Both autorefractors also agree well with each other, with a spherical equivalent power 95% confidence interval of ±0.84 diopters (D). The difference between the accuracy of each objective device is not statistically significant for any component of the power vector (p = 0.55, 0.41, and 0.18, for M, J0, and J45, respectively). The spherical and cylindrical powers measured by the GS agree within 0.25 D of the SR in 49 and 82% of the eyes, respectively, whereas the spherical and cylindrical powers measured by the QS agree within 0.25 D of the SR in 74 and 87% of the eyes, respectively. CONCLUSIONS: The prototype autorefractor exhibits equivalent performance to the GS autorefractor in matching power vectors measured by SR.
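The Bland-Altman analysis used here reduces paired measurements to a mean difference (bias) and 95% limits of agreement. A minimal sketch with hypothetical spherical-equivalent readings (the values below are invented, not from the study):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement between two paired measurement methods:
    returns the mean difference (bias) and the 95% limits of agreement."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical spherical-equivalent readings (diopters) from two devices.
qs = [-1.00, -2.25, 0.50, -3.75, 1.25, -0.50]
gs = [-1.25, -2.00, 0.75, -3.50, 1.00, -0.75]
bias, (lo, hi) = bland_altman(qs, gs)
```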


Asunto(s)
Diseño de Equipo , Errores de Refracción/diagnóstico , Pruebas de Visión/instrumentación , Adulto , Aberración de Frente de Onda Corneal/diagnóstico , Femenino , Humanos , Masculino , Persona de Mediana Edad , Refracción Ocular/fisiología , Reproducibilidad de los Resultados , Visión Binocular/fisiología , Adulto Joven
9.
Article in English | MEDLINE | ID: mdl-38713568

ABSTRACT

A major challenge in applying deep learning to medical imaging is the paucity of annotated data. This study explores the use of synthetic images for data augmentation to address the challenge of limited annotated data in colonoscopy lesion classification. We demonstrate that synthetic colonoscopy images generated by Generative Adversarial Network (GAN) inversion can be used as training data to improve polyp classification performance by deep learning models. We invert pairs of images with the same label to a semantically rich and disentangled latent space and manipulate latent representations to produce new synthetic images. These synthetic images maintain the same label as the input pairs. We perform image modality translation (style transfer) between white light and narrow-band imaging (NBI). We also generate realistic synthetic lesion images by interpolating between original training images to increase the variety of lesion shapes in the training dataset. Our experiments show that GAN inversion can produce multiple colonoscopy data augmentations that improve the downstream polyp classification performance by 2.7% in F1-score and 4.9% in sensitivity over other methods, including state-of-the-art data augmentation. Testing on unseen out-of-domain data also showcased an improvement of 2.9% in F1-score and 2.7% in sensitivity. This approach outperforms other colonoscopy data augmentation techniques and does not require re-training multiple generative models. It also effectively uses information from diverse public datasets, even those not specifically designed for the targeted downstream task, resulting in strong domain generalizability. Project code and model: https://github.com/DurrLab/GAN-Inversion.
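The interpolation step — producing new same-label samples between two inverted latent codes — is simple to sketch. The example below uses random stand-in vectors in place of real GAN inversions (the 512-dimensional shape is an assumption modeled on StyleGAN-style W codes; no generator network is included):

```python
import numpy as np

def interpolate_latents(w1, w2, alphas):
    """Linear interpolation between two inverted latent codes; each
    interpolate inherits the label shared by the two source images."""
    return [(1.0 - a) * w1 + a * w2 for a in alphas]

# Stand-ins for inverted codes of two same-label colonoscopy images.
rng = np.random.default_rng(1)
w1, w2 = rng.standard_normal(512), rng.standard_normal(512)
new_codes = interpolate_latents(w1, w2, [0.25, 0.5, 0.75])
```

In the paper's pipeline each interpolated code would be decoded by the generator into a synthetic training image.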

10.
Bioengineering (Basel) ; 11(4)2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38671812

ABSTRACT

To investigate the potential of an affordable cryotherapy device for the accessible treatment of breast cancer, the performance of a novel carbon dioxide-based device was evaluated through both benchtop testing and an in vivo canine model. This novel device was quantitatively compared to a commercial device that utilizes argon gas as the cryogen. The thermal behavior of each device was characterized through calorimetry and by measuring the temperature profiles of iceballs generated in tissue phantoms. A 45 min treatment in a tissue phantom from the carbon dioxide device produced a 1.67 ± 0.06 cm diameter lethal isotherm that was equivalent to a 7 min treatment from the commercial argon-based device, which produced a 1.53 ± 0.15 cm diameter lethal isotherm. An in vivo treatment was performed with the carbon dioxide-based device in one spontaneously occurring canine mammary mass with two standard 10 min freezes. Following cryotherapy, this mass was surgically resected and analyzed for necrosis margins via histopathology. The histopathology margin of necrosis from the in vivo treatment with the carbon dioxide device at 14 days post-cryoablation was 1.57 cm. While carbon dioxide gas has historically been considered an impractical cryogen due to its low working pressure and high boiling point, this study shows that carbon dioxide-based cryotherapy may be equivalent to conventional argon-based cryotherapy in size of the ablation zone in a standard treatment time. The feasibility of the carbon dioxide device demonstrated in this study is an important step towards bringing accessible breast cancer treatment to women in low-resource settings.

11.
Comput Biol Med ; 177: 108677, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38833800

ABSTRACT

Intracranial pressure (ICP) is commonly monitored to guide treatment in patients with serious brain disorders such as traumatic brain injury and stroke. Established methods to assess ICP are resource intensive and highly invasive. We hypothesized that ICP waveforms can be computed noninvasively from three extracranial physiological waveforms routinely acquired in the Intensive Care Unit (ICU): arterial blood pressure (ABP), photoplethysmography (PPG), and electrocardiography (ECG). We evaluated over 600 h of high-frequency (125 Hz) simultaneously acquired ICP, ABP, ECG, and PPG waveform data in 10 patients admitted to the ICU with critical brain disorders. The data were segmented in non-overlapping 10-s windows, and ABP, ECG, and PPG waveforms were used to train deep learning (DL) models to re-create concurrent ICP. The predictive performance of six different DL models was evaluated in single- and multi-patient iterations. The mean absolute error (MAE) ± SD of the best-performing models was 1.34 ± 0.59 mmHg in the single-patient and 5.10 ± 0.11 mmHg in the multi-patient analysis. Ablation analysis was conducted to compare contributions from single physiologic sources and demonstrated statistically indistinguishable performances across the top DL models for each waveform (MAE±SD 6.33 ± 0.73, 6.65 ± 0.96, and 7.30 ± 1.28 mmHg, respectively, for ECG, PPG, and ABP; p = 0.42). Results support the preliminary feasibility and accuracy of DL-enabled continuous noninvasive ICP waveform computation using extracranial physiological waveforms. With refinement and further validation, this method could represent a safer and more accessible alternative to invasive ICP, enabling assessment and treatment in low-resource settings.
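The preprocessing and evaluation described above — non-overlapping 10-s windows at 125 Hz, scored by MAE in mmHg — can be sketched directly. The DL models themselves are not reproduced; the synthetic waveform and the constant-offset "prediction" below are illustrative stand-ins:

```python
import numpy as np

FS = 125      # sampling rate (Hz), as in the study
WIN_S = 10    # non-overlapping window length (s)

def segment(signal, fs=FS, win_s=WIN_S):
    """Split a 1-D waveform into non-overlapping windows, dropping any
    incomplete tail (the paper's 10-s / 125-Hz segmentation)."""
    w = fs * win_s
    n = len(signal) // w
    return np.asarray(signal[: n * w]).reshape(n, w)

def mae(pred, target):
    """Mean absolute error in the waveform's own units (mmHg for ICP)."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(target))))

t = np.arange(30 * FS) / FS                        # 30 s of synthetic data
icp_true = 10 + 2 * np.sin(2 * np.pi * 1.2 * t)    # toy pulsatile ICP (mmHg)
icp_pred = icp_true + 0.5                          # stand-in model output
windows = segment(icp_true)
```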


Asunto(s)
Aprendizaje Profundo , Electrocardiografía , Unidades de Cuidados Intensivos , Presión Intracraneal , Fotopletismografía , Procesamiento de Señales Asistido por Computador , Humanos , Presión Intracraneal/fisiología , Masculino , Femenino , Persona de Mediana Edad , Adulto , Fotopletismografía/métodos , Electrocardiografía/métodos , Anciano , Monitoreo Fisiológico/métodos
12.
Ann Plast Surg ; 71(3): 308-15, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23945533

ABSTRACT

INTRODUCTION: Although various methods exist for monitoring flaps during reconstructive surgery, surgeons primarily rely on clinical judgment. Early detection of vascular complications improves the rate of flap salvage. Spatial frequency domain imaging (SFDI) is a promising new technology that provides oxygenation images over a large field of view. The goal of this clinical pilot study is to use SFDI in perforator flap breast reconstruction. METHODS: Three women undergoing unilateral breast reconstruction after mastectomy were enrolled in our study. The SFDI system was deployed in the operating room, and images were acquired over the course of the operation. Time points included images of each hemiabdominal skin flap before elevation, the selected flap after perforator dissection, and after microsurgical transfer. RESULTS: Spatial frequency domain imaging was able to measure tissue oxyhemoglobin concentration (ctO2Hb), tissue deoxyhemoglobin concentration, and tissue oxygen saturation (stO2). Images were created for each metric to monitor flap status, and the results were quantified throughout the various time points of the procedure. For 2 of 3 patients, the chosen flap had a higher ctO2Hb and stO2. For 1 patient, the chosen flap had lower ctO2Hb and stO2. There were no perfusion deficits observed based on SFDI and clinical follow-up. CONCLUSIONS: The results of our initial human pilot study suggest that SFDI has the potential to provide intraoperative oxygenation images in real time during surgery. With this technology, surgeons can obtain tissue oxygenation and hemoglobin concentration maps to assist in intraoperative planning; this can potentially prevent complications and improve clinical outcomes.
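The stO2 metric reported above is simply oxyhemoglobin as a fraction of total hemoglobin, computed per pixel from the two concentration maps. A one-line sketch (the example concentrations are hypothetical):

```python
def tissue_oxygen_saturation(ct_o2hb, ct_hhb):
    """Tissue oxygen saturation stO2: oxyhemoglobin over total hemoglobin.
    Inputs are concentrations in any consistent unit (e.g. micromolar)."""
    return ct_o2hb / (ct_o2hb + ct_hhb)

# Hypothetical per-pixel values: 60 uM ctO2Hb, 40 uM deoxyhemoglobin.
st_o2 = tissue_oxygen_saturation(60.0, 40.0)
```

Applied element-wise to the ctO2Hb and deoxyhemoglobin maps, this yields the stO2 image used to monitor flap status.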


Asunto(s)
Mamoplastia/métodos , Monitoreo Intraoperatorio/métodos , Colgajo Perforante/irrigación sanguínea , Espectroscopía Infrarroja Corta/métodos , Adulto , Anciano , Biomarcadores/metabolismo , Femenino , Estudios de Seguimiento , Hemoglobinas/metabolismo , Humanos , Mastectomía , Persona de Mediana Edad , Monitoreo Intraoperatorio/instrumentación , Evaluación de Resultado en la Atención de Salud , Oxígeno/metabolismo , Oxihemoglobinas/metabolismo , Colgajo Perforante/trasplante , Proyectos Piloto , Espectroscopía Infrarroja Corta/instrumentación
13.
Med Image Anal ; 90: 102956, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37713764

ABSTRACT

Screening colonoscopy is an important clinical application for several 3D computer vision techniques, including depth estimation, surface reconstruction, and missing region detection. However, the development, evaluation, and comparison of these techniques in real colonoscopy videos remain largely qualitative due to the difficulty of acquiring ground truth data. In this work, we present a Colonoscopy 3D Video Dataset (C3VD) acquired with a high definition clinical colonoscope and high-fidelity colon models for benchmarking computer vision methods in colonoscopy. We introduce a novel multimodal 2D-3D registration technique to register optical video sequences with ground truth rendered views of a known 3D model. The different modalities are registered by transforming optical images to depth maps with a Generative Adversarial Network and aligning edge features with an evolutionary optimizer. This registration method achieves an average translation error of 0.321 millimeters and an average rotation error of 0.159 degrees in simulation experiments where error-free ground truth is available. The method also leverages video information, improving registration accuracy by 55.6% for translation and 60.4% for rotation compared to single frame registration. 22 short video sequences were registered to generate 10,015 total frames with paired ground truth depth, surface normals, optical flow, occlusion, six degree-of-freedom pose, coverage maps, and 3D models. The dataset also includes screening videos acquired by a gastroenterologist with paired ground truth pose and 3D surface models. The dataset and registration source code are available at https://durr.jhu.edu/C3VD.
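The rotation error in degrees quoted above is conventionally the geodesic angle between the estimated and ground-truth rotation matrices. A minimal sketch of that metric (the z-axis test rotations are invented for illustration):

```python
import numpy as np

def rot_z(deg):
    """Rotation about the z-axis by the given angle in degrees."""
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def rotation_error_deg(r_est, r_gt):
    """Geodesic angle (degrees) between two rotation matrices -- a common
    way to report the rotation error of a 6-DoF registration."""
    cos_theta = (np.trace(r_est.T @ r_gt) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

err = rotation_error_deg(rot_z(30.0), rot_z(30.5))   # half-degree misregistration
```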

14.
IEEE Trans Biomed Eng ; 70(3): 1053-1061, 2023 03.
Article in English | MEDLINE | ID: mdl-36129868

ABSTRACT

OBJECTIVE: The diagnosis of urinary tract infection (UTI) currently requires precise specimen collection, handling infectious human waste, controlled urine storage, and timely transportation to modern laboratory equipment for analysis. Here we investigate holographic lens free imaging (LFI) to show its promise for enabling automatic urine analysis at the patient bedside. METHODS: We introduce an LFI system capable of resolving important urine clinical biomarkers such as red blood cells, white blood cells, crystals, and casts in 2 mm thick urine phantoms. RESULTS: This approach is sensitive to the particulate concentrations relevant for detecting several clinical urine abnormalities such as hematuria and pyuria, linearly correlating to ground truth hemacytometer measurements with R² = 0.9941 and R² = 0.9973, respectively. We show that LFI can estimate E. coli concentrations of 10³ to 10⁵ cells/mL by counting individual cells, and is sensitive to concentrations of 10⁵ to 10⁸ cells/mL by analyzing hologram texture. Further, LFI measurements of blood cell concentrations are relatively insensitive to changes in bacteria concentrations of over seven orders of magnitude. Lastly, LFI reveals clear differences between UTI-positive and UTI-negative urine from human patients. CONCLUSION: LFI is sensitive to clinically-relevant concentrations of bacteria, blood cells, and other sediment in large urine volumes. SIGNIFICANCE: Together, these results show promise for LFI as a tool for urine screening, potentially offering early, point-of-care detection of UTI and other pathological processes.
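The R² values quoted above are coefficients of determination for a least-squares line between LFI counts and hemacytometer counts. A minimal sketch of that metric (the paired counts below are invented, not the study's data):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of a least-squares line y ~ a*x + b,
    the agreement metric used against ground-truth counts."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    return 1.0 - resid.var() / y.var()

# Hypothetical paired counts (device vs. hemacytometer, cells/uL).
r2_perfect = r_squared([10, 20, 30, 40], [21, 41, 61, 81])   # exactly linear
r2_noisy = r_squared([10, 20, 30, 40], [25, 38, 66, 78])     # with scatter
```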


Asunto(s)
Urinálisis , Infecciones Urinarias , Urinálisis/instrumentación , Urinálisis/métodos , Infecciones Urinarias/diagnóstico por imagen , Pruebas en el Punto de Atención/normas , Orina/citología , Orina/microbiología , Holografía , Humanos , Sensibilidad y Especificidad
15.
Sci Rep ; 12(1): 3714, 2022 03 08.
Article in English | MEDLINE | ID: mdl-35260664

ABSTRACT

The aim of this work is to evaluate the performance of a novel algorithm that combines dynamic wavefront aberrometry data and descriptors of the retinal image quality from objective autorefractor measurements to predict subjective refraction. We conducted a retrospective study of the prediction accuracy and precision of the novel algorithm compared to standard search-based retinal image quality optimization algorithms. Dynamic measurements from 34 adult patients were taken with a handheld wavefront autorefractor and static data was obtained with a high-end desktop wavefront aberrometer. The search-based algorithms did not significantly improve the results of the desktop system, while the dynamic approach was able to simultaneously reduce the standard deviation (up to a 15% reduction for spherical equivalent power) and the mean bias error of the predictions (up to an 80% reduction for spherical equivalent power) for the handheld aberrometer. These results suggest that dynamic retinal image analysis can substantially improve the accuracy and precision of the portable wavefront autorefractor relative to subjective refraction.


Asunto(s)
Errores de Refracción , Adulto , Humanos , Procedimientos Quirúrgicos Oftalmológicos , Refracción Ocular , Errores de Refracción/diagnóstico , Estudios Retrospectivos , Pruebas de Visión
16.
Nat Methods ; 5(6): 531-3, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18408725

ABSTRACT

A thorough understanding of nerve regeneration in Caenorhabditis elegans requires performing femtosecond laser nanoaxotomy while minimally affecting the worm. We present a microfluidic device that fulfills such criteria and can easily be automated to enable high-throughput genetic and pharmacological screenings. Using the 'nanoaxotomy' chip, we discovered that axonal regeneration occurs much faster than previously described, and notably, the distal fragment of the severed axon regrows in the absence of anesthetics.


Asunto(s)
Axones/patología , Axotomía/métodos , Nanotecnología/métodos , Regeneración Nerviosa , Animales , Conducta Animal , Caenorhabditis elegans , Proteínas de Caenorhabditis elegans/metabolismo , Diseño de Equipo , Técnicas Analíticas Microfluídicas , Microfluídica , Modelos Biológicos , Factores de Tiempo
17.
Appl Opt ; 50(16): 2376-82, 2011 Jun 01.
Article in English | MEDLINE | ID: mdl-21629316

ABSTRACT

We present a fast-updating Lissajous image reconstruction methodology that uses an increased image frame rate beyond the pattern repeat rate generally used in conventional Lissajous image reconstruction methods. The fast display rate provides increased dynamic information and reduced motion blur, as compared to conventional Lissajous reconstruction, at the cost of single-frame pixel density. Importantly, this method does not discard any information from the conventional Lissajous image reconstruction, and frames from the complete Lissajous pattern can be displayed simultaneously. We present the theoretical background for this image reconstruction methodology along with images and video taken using the algorithm in a custom-built miniaturized multiphoton microscopy system.
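The scan geometry underlying this method is easy to sketch: two sinusoidal axis drives trace a Lissajous figure whose full pattern repeats at the greatest common divisor of the two drive frequencies, and the fast-update idea is to render partial frames more often than that repeat period. The drive and sampling rates below are hypothetical, not the paper's system parameters:

```python
from math import gcd

import numpy as np

def lissajous_scan(fx_hz, fy_hz, n_samples, fs_hz):
    """Beam positions for a Lissajous scan driven by two sinusoidal axes."""
    t = np.arange(n_samples) / fs_hz
    return np.sin(2 * np.pi * fx_hz * t), np.sin(2 * np.pi * fy_hz * t)

fx_hz, fy_hz, fs_hz = 300, 7, 100_000                 # hypothetical rates
x, y = lissajous_scan(fx_hz, fy_hz, fs_hz, fs_hz)     # one second of positions
repeat_hz = gcd(fx_hz, fy_hz)                         # full pattern repeats at 1 Hz
```

A conventional reconstruction would accumulate samples for a full 1/`repeat_hz` period before displaying; the fast-updating approach displays sparser partial frames within that period.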


Asunto(s)
Algoritmos , Procesamiento de Imagen Asistido por Computador/métodos , Microscopía de Fluorescencia por Excitación Multifotónica/métodos , Modelos Teóricos
18.
IEEE Access ; 9: 631-640, 2021.
Article in English | MEDLINE | ID: mdl-33747680

ABSTRACT

While data-driven approaches excel at many image analysis tasks, the performance of these approaches is often limited by a shortage of annotated data available for training. Recent work in semi-supervised learning has shown that meaningful representations of images can be obtained from training with large quantities of unlabeled data, and that these representations can improve the performance of supervised tasks. Here, we demonstrate that an unsupervised jigsaw learning task, in combination with supervised training, results in up to a 9.8% improvement in correctly classifying lesions in colonoscopy images when compared to a fully-supervised baseline. We additionally benchmark improvements in domain adaptation and out-of-distribution detection, and demonstrate that semi-supervised learning outperforms supervised learning in both cases. In colonoscopy applications, these metrics are important given the skill required for endoscopic assessment of lesions, the wide variety of endoscopy systems in use, and the homogeneity that is typical of labeled datasets.
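The jigsaw pretext task mentioned above trains a network to predict the permutation applied to shuffled image tiles, requiring no labels. A minimal sketch of the sample-construction step (grid size and the toy image are assumptions; the classifier that predicts the permutation is omitted):

```python
import numpy as np

def jigsaw_tiles(image, grid=3, rng=None):
    """Build one jigsaw pretext sample: cut an image into grid x grid tiles,
    shuffle them, and return the permutation the network must predict."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[0] // grid, image.shape[1] // grid
    tiles = [image[i * h:(i + 1) * h, j * w:(j + 1) * w]
             for i in range(grid) for j in range(grid)]
    perm = rng.permutation(len(tiles))
    return [tiles[k] for k in perm], perm

img = np.arange(36.0).reshape(6, 6)   # stand-in for an endoscopy frame
shuffled, perm = jigsaw_tiles(img, grid=3, rng=np.random.default_rng(0))
```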

19.
Biomed Opt Express ; 12(5): 2575-2585, 2021 May 01.
Article in English | MEDLINE | ID: mdl-34123489

ABSTRACT

Oblique plane microscopy (OPM) enables high speed, volumetric fluorescence imaging through a single-objective geometry. While these advantages have positioned OPM as a valuable tool to probe biological questions in animal models, its potential for in vivo human imaging is largely unexplored due to its typical use with exogenous fluorescent dyes. Here we introduce a scattering-contrast oblique plane microscope (sOPM) and demonstrate label-free imaging of blood cells flowing through human capillaries in vivo. The sOPM illuminates a capillary bed in the ventral tongue with an oblique light sheet, and images side- and back-scattered signal from blood cells. By synchronizing the sOPM with a conventional capillaroscope, we acquire paired widefield and axial images of blood cells flowing through a capillary loop. The widefield capillaroscope image provides absorption contrast and confirms the presence of red blood cells (RBCs), while the sOPM image may aid in determining whether optical absorption gaps (OAGs) between RBCs have cellular or acellular composition. Further, we demonstrate consequential differences between fluorescence and scattering versions of OPM by imaging the same polystyrene beads sequentially with each technique. Lastly, we substantiate in vivo observations by imaging isolated red blood cells, white blood cells, and platelets in vitro using 3D agar phantoms. These results demonstrate a promising new avenue towards in vivo blood analysis.

20.
Med Image Anal ; 70: 101990, 2021 05.
Article in English | MEDLINE | ID: mdl-33609920

ABSTRACT

Current capsule endoscopes and next-generation robotic capsules for diagnosis and treatment of gastrointestinal diseases are complex cyber-physical platforms that must orchestrate complex software and hardware functions. The desired tasks for these systems include visual localization, depth estimation, 3D mapping, disease detection and segmentation, automated navigation, active control, path realization and optional therapeutic modules such as targeted drug delivery and biopsy sampling. Data-driven algorithms promise to enable many advanced functionalities for capsule endoscopes, but real-world data is challenging to obtain. Physically-realistic simulations providing synthetic data have emerged as a solution to the development of data-driven algorithms. In this work, we present a comprehensive simulation platform for capsule endoscopy operations and introduce VR-Caps, a virtual active capsule environment that simulates a range of normal and abnormal tissue conditions (e.g., inflated, dry, wet etc.) and varied organ types, capsule endoscope designs (e.g., mono, stereo, dual and 360° camera), and the type, number, strength, and placement of internal and external magnetic sources that enable active locomotion. VR-Caps makes it possible to both independently or jointly develop, optimize, and test medical imaging and analysis software for the current and next-generation endoscopic capsule systems. To validate this approach, we train state-of-the-art deep neural networks to accomplish various medical image analysis tasks using simulated data from VR-Caps and evaluate the performance of these models on real medical data. Results demonstrate the usefulness and effectiveness of the proposed virtual platform in developing algorithms that quantify fractional coverage, camera trajectory, 3D map reconstruction, and disease classification.
All of the code, pre-trained weights, and created 3D organ models of the virtual environment, with detailed instructions on how to set up and use the environment, are made publicly available at https://github.com/CapsuleEndoscope/VirtualCapsuleEndoscopy, and a video demonstration can be seen in the supplementary videos (Video-I).


Subject(s)
Capsule Endoscopy, Robotics, Algorithms, Computer Simulation, Endoscopy, Humans, Neural Networks, Computer