Results 1 - 20 of 52
1.
Comput Biol Med ; 177: 108677, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38833800

ABSTRACT

Intracranial pressure (ICP) is commonly monitored to guide treatment in patients with serious brain disorders such as traumatic brain injury and stroke. Established methods to assess ICP are resource intensive and highly invasive. We hypothesized that ICP waveforms can be computed noninvasively from three extracranial physiological waveforms routinely acquired in the Intensive Care Unit (ICU): arterial blood pressure (ABP), photoplethysmography (PPG), and electrocardiography (ECG). We evaluated over 600 h of high-frequency (125 Hz) simultaneously acquired ICP, ABP, ECG, and PPG waveform data in 10 patients admitted to the ICU with critical brain disorders. The data were segmented into non-overlapping 10-s windows, and ABP, ECG, and PPG waveforms were used to train deep learning (DL) models to re-create concurrent ICP. The predictive performance of six different DL models was evaluated in single- and multi-patient iterations. The mean absolute error (MAE) ± SD of the best-performing models was 1.34 ± 0.59 mmHg in the single-patient and 5.10 ± 0.11 mmHg in the multi-patient analysis. An ablation analysis was conducted to compare contributions from single physiologic sources and demonstrated statistically indistinguishable performance across the top DL models for each waveform (MAE ± SD 6.33 ± 0.73, 6.65 ± 0.96, and 7.30 ± 1.28 mmHg for ECG, PPG, and ABP, respectively; p = 0.42). Results support the preliminary feasibility and accuracy of DL-enabled continuous noninvasive ICP waveform computation using extracranial physiological waveforms. With refinement and further validation, this method could represent a safer and more accessible alternative to invasive ICP monitoring, enabling assessment and treatment in low-resource settings.
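As a rough illustration of the preprocessing and scoring described above (the exact pipeline and DL models are not specified in the abstract), the non-overlapping windowing and MAE computation might look like:

```python
import numpy as np

FS = 125      # sampling rate in Hz, as reported in the study
WIN_S = 10    # non-overlapping window length in seconds

def segment(signal, fs=FS, win_s=WIN_S):
    """Split a 1-D waveform into non-overlapping fixed-length windows,
    dropping any trailing partial window."""
    n = win_s * fs
    k = len(signal) // n
    return signal[:k * n].reshape(k, n)

def mae(pred, true):
    """Mean absolute error between predicted and measured ICP (mmHg)."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(true))))

# toy example: 25 s of samples yields two complete 10-s windows
x = np.arange(25 * FS, dtype=float)
print(segment(x).shape)  # (2, 1250)
```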


Subject(s)
Deep Learning; Electrocardiography; Intensive Care Units; Intracranial Pressure; Photoplethysmography; Signal Processing, Computer-Assisted; Humans; Intracranial Pressure/physiology; Male; Female; Middle Aged; Adult; Photoplethysmography/methods; Electrocardiography/methods; Aged; Monitoring, Physiologic/methods
2.
Article in English | MEDLINE | ID: mdl-38713568

ABSTRACT

A major challenge in applying deep learning to medical imaging is the paucity of annotated data. This study explores the use of synthetic images for data augmentation to address the challenge of limited annotated data in colonoscopy lesion classification. We demonstrate that synthetic colonoscopy images generated by Generative Adversarial Network (GAN) inversion can be used as training data to improve polyp classification performance by deep learning models. We invert pairs of images with the same label to a semantically rich and disentangled latent space and manipulate latent representations to produce new synthetic images. These synthetic images maintain the same label as the input pairs. We perform image modality translation (style transfer) between white light and narrow-band imaging (NBI). We also generate realistic synthetic lesion images by interpolating between original training images to increase the variety of lesion shapes in the training dataset. Our experiments show that GAN inversion can produce multiple colonoscopy data augmentations that improve the downstream polyp classification performance by 2.7% in F1-score and 4.9% in sensitivity over other methods, including state-of-the-art data augmentation. Testing on unseen out-of-domain data also showed an improvement of 2.9% in F1-score and 2.7% in sensitivity. This approach outperforms other colonoscopy data augmentation techniques and does not require re-training multiple generative models. It also effectively uses information from diverse public datasets, even those not specifically designed for the targeted downstream task, resulting in strong domain generalizability. Project code and model: https://github.com/DurrLab/GAN-Inversion.
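A minimal sketch of the latent-space interpolation step described above (assuming a StyleGAN-like 512-dimensional latent code; the inversion procedure and generator themselves are not shown):

```python
import numpy as np

def interpolate_latents(w1, w2, alphas):
    """Linearly interpolate between the inverted latent codes of two images
    that share a label; decoding each interpolated code with the GAN
    generator yields a new synthetic image carrying the same label."""
    return [(1.0 - a) * w1 + a * w2 for a in alphas]

rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=512), rng.normal(size=512)  # hypothetical latents
codes = interpolate_latents(w1, w2, alphas=[0.25, 0.5, 0.75])
# images = [generator.synthesize(w) for w in codes]  # generator is assumed
```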

3.
Bioengineering (Basel) ; 11(4)2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38671812

ABSTRACT

To investigate the potential of an affordable cryotherapy device for the accessible treatment of breast cancer, the performance of a novel carbon dioxide-based device was evaluated through both benchtop testing and an in vivo canine model. This novel device was quantitatively compared to a commercial device that utilizes argon gas as the cryogen. The thermal behavior of each device was characterized through calorimetry and by measuring the temperature profiles of iceballs generated in tissue phantoms. A 45 min treatment in a tissue phantom from the carbon dioxide device produced a 1.67 ± 0.06 cm diameter lethal isotherm that was equivalent to a 7 min treatment from the commercial argon-based device, which produced a 1.53 ± 0.15 cm diameter lethal isotherm. An in vivo treatment was performed with the carbon dioxide-based device in one spontaneously occurring canine mammary mass with two standard 10 min freezes. Following cryotherapy, this mass was surgically resected and analyzed for necrosis margins via histopathology. The histopathology margin of necrosis from the in vivo treatment with the carbon dioxide device at 14 days post-cryoablation was 1.57 cm. While carbon dioxide gas has historically been considered an impractical cryogen due to its low working pressure and high boiling point, this study shows that carbon dioxide-based cryotherapy may be equivalent to conventional argon-based cryotherapy in size of the ablation zone in a standard treatment time. The feasibility of the carbon dioxide device demonstrated in this study is an important step towards bringing accessible breast cancer treatment to women in low-resource settings.

4.
Med Image Anal ; 90: 102956, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37713764

ABSTRACT

Screening colonoscopy is an important clinical application for several 3D computer vision techniques, including depth estimation, surface reconstruction, and missing region detection. However, the development, evaluation, and comparison of these techniques in real colonoscopy videos remain largely qualitative due to the difficulty of acquiring ground truth data. In this work, we present a Colonoscopy 3D Video Dataset (C3VD) acquired with a high-definition clinical colonoscope and high-fidelity colon models for benchmarking computer vision methods in colonoscopy. We introduce a novel multimodal 2D-3D registration technique to register optical video sequences with ground truth rendered views of a known 3D model. The different modalities are registered by transforming optical images to depth maps with a Generative Adversarial Network and aligning edge features with an evolutionary optimizer. This registration method achieves an average translation error of 0.321 millimeters and an average rotation error of 0.159 degrees in simulation experiments where error-free ground truth is available. The method also leverages video information, improving registration accuracy by 55.6% for translation and 60.4% for rotation compared to single-frame registration. Twenty-two short video sequences were registered to generate 10,015 total frames with paired ground truth depth, surface normals, optical flow, occlusion, six-degree-of-freedom pose, coverage maps, and 3D models. The dataset also includes screening videos acquired by a gastroenterologist with paired ground truth pose and 3D surface models. The dataset and registration source code are available at https://durr.jhu.edu/C3VD.

5.
IEEE Trans Biomed Eng ; 70(3): 1053-1061, 2023 03.
Article in English | MEDLINE | ID: mdl-36129868

ABSTRACT

OBJECTIVE: The diagnosis of urinary tract infection (UTI) currently requires precise specimen collection, handling infectious human waste, controlled urine storage, and timely transportation to modern laboratory equipment for analysis. Here we investigate holographic lens-free imaging (LFI) to show its promise for enabling automatic urine analysis at the patient bedside. METHODS: We introduce an LFI system capable of resolving important urine clinical biomarkers such as red blood cells, white blood cells, crystals, and casts in 2 mm thick urine phantoms. RESULTS: This approach is sensitive to the particulate concentrations relevant for detecting several clinical urine abnormalities such as hematuria and pyuria, linearly correlating to ground truth hemacytometer measurements with R² = 0.9941 and R² = 0.9973, respectively. We show that LFI can estimate E. coli concentrations of 10³ to 10⁵ cells/mL by counting individual cells, and is sensitive to concentrations of 10⁵ to 10⁸ cells/mL by analyzing hologram texture. Further, LFI measurements of blood cell concentrations are relatively insensitive to changes in bacteria concentrations over seven orders of magnitude. Lastly, LFI reveals clear differences between UTI-positive and UTI-negative urine from human patients. CONCLUSION: LFI is sensitive to clinically relevant concentrations of bacteria, blood cells, and other sediment in large urine volumes. SIGNIFICANCE: Together, these results show promise for LFI as a tool for urine screening, potentially offering early, point-of-care detection of UTI and other pathological processes.
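The reported R² values correspond to a least-squares linear fit of particle counts against hemacytometer ground truth; a generic computation of that statistic (not the authors' code) is:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a least-squares linear fit of y on x."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return float(1.0 - np.sum(resid**2) / np.sum((y - y.mean())**2))
```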


Subject(s)
Urinalysis; Urinary Tract Infections; Urinalysis/instrumentation; Urinalysis/methods; Urinary Tract Infections/diagnostic imaging; Point-of-Care Testing/standards; Urine/cytology; Urine/microbiology; Holography; Humans; Sensitivity and Specificity
6.
Opt Express ; 30(19): 33433-33448, 2022 Sep 12.
Article in English | MEDLINE | ID: mdl-36242380

ABSTRACT

In-line lensless digital holography has great potential in multiple applications; however, reconstructing high-quality images from a single recorded hologram is challenging due to the loss of phase information. Typical reconstruction methods are based on solving a regularized inverse problem and work well under suitable image priors, but they are extremely sensitive to mismatches between the forward model and the actual imaging system. This paper aims to improve the robustness of such algorithms by introducing the adaptive sparse reconstruction method, ASR, which learns a properly constrained point spread function (PSF) directly from data, as opposed to solely relying on physics-based approximations of it. ASR jointly performs holographic reconstruction, PSF estimation, and phase retrieval in an unsupervised way by maximizing the sparsity of the reconstructed images. Like traditional methods, ASR uses the image formation model along with a sparsity prior, which, unlike recent deep learning approaches, allows for unsupervised reconstruction with as little as one sample. Experimental results in synthetic and real data show the advantages of ASR over traditional reconstruction methods, especially in cases where the theoretical PSF does not match that of the actual system.

7.
Sci Rep ; 12(1): 3714, 2022 03 08.
Article in English | MEDLINE | ID: mdl-35260664

ABSTRACT

The aim of this work is to evaluate the performance of a novel algorithm that combines dynamic wavefront aberrometry data and descriptors of the retinal image quality from objective autorefractor measurements to predict subjective refraction. We conducted a retrospective study of the prediction accuracy and precision of the novel algorithm compared to standard search-based retinal image quality optimization algorithms. Dynamic measurements from 34 adult patients were taken with a handheld wavefront autorefractor, and static data were obtained with a high-end desktop wavefront aberrometer. The search-based algorithms did not significantly improve the results of the desktop system, while the dynamic approach was able to simultaneously reduce the standard deviation (up to a 15% reduction for spherical equivalent power) and the mean bias error of the predictions (up to an 80% reduction for spherical equivalent power) for the handheld aberrometer. These results suggest that dynamic retinal image analysis can substantially improve the accuracy and precision of the portable wavefront autorefractor relative to subjective refraction.
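For reference, the spherical equivalent power mentioned above is the standard scalar summary of a spherocylindrical refraction (a textbook formula, not taken from this paper):

```python
def spherical_equivalent(sphere_d, cylinder_d):
    """Spherical equivalent in diopters: sphere plus half the cylinder."""
    return sphere_d + cylinder_d / 2.0

print(spherical_equivalent(-2.00, -1.00))  # -2.5
```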


Subject(s)
Refractive Errors; Adult; Humans; Ophthalmologic Surgical Procedures; Refraction, Ocular; Refractive Errors/diagnosis; Retrospective Studies; Vision Tests
8.
Biomed Opt Express ; 12(5): 2575-2585, 2021 May 01.
Article in English | MEDLINE | ID: mdl-34123489

ABSTRACT

Oblique plane microscopy (OPM) enables high speed, volumetric fluorescence imaging through a single-objective geometry. While these advantages have positioned OPM as a valuable tool to probe biological questions in animal models, its potential for in vivo human imaging is largely unexplored due to its typical use with exogenous fluorescent dyes. Here we introduce a scattering-contrast oblique plane microscope (sOPM) and demonstrate label-free imaging of blood cells flowing through human capillaries in vivo. The sOPM illuminates a capillary bed in the ventral tongue with an oblique light sheet, and images side- and back-scattered signal from blood cells. By synchronizing the sOPM with a conventional capillaroscope, we acquire paired widefield and axial images of blood cells flowing through a capillary loop. The widefield capillaroscope image provides absorption contrast and confirms the presence of red blood cells (RBCs), while the sOPM image may aid in determining whether optical absorption gaps (OAGs) between RBCs have cellular or acellular composition. Further, we demonstrate consequential differences between fluorescence and scattering versions of OPM by imaging the same polystyrene beads sequentially with each technique. Lastly, we substantiate in vivo observations by imaging isolated red blood cells, white blood cells, and platelets in vitro using 3D agar phantoms. These results demonstrate a promising new avenue towards in vivo blood analysis.

9.
Lasers Surg Med ; 53(6): 748-775, 2021 08.
Article in English | MEDLINE | ID: mdl-34015146

ABSTRACT

This article reviews deep learning applications in biomedical optics with a particular emphasis on image formation. The review is organized by imaging domains within biomedical optics and includes microscopy, fluorescence lifetime imaging, in vivo microscopy, widefield endoscopy, optical coherence tomography, photoacoustic imaging, diffuse tomography, and functional optical brain imaging. For each of these domains, we summarize how deep learning has been applied and highlight methods by which deep learning can enable new capabilities for optics in medicine. Challenges and opportunities to improve translation and adoption of deep learning in biomedical optics are also summarized. Lasers Surg. Med. © 2021 Wiley Periodicals LLC.


Subject(s)
Deep Learning; Microscopy; Optical Imaging; Optics and Photonics; Tomography, Optical Coherence
10.
Med Image Anal ; 71: 102058, 2021 07.
Article in English | MEDLINE | ID: mdl-33930829

ABSTRACT

Deep learning techniques hold promise for developing dense topography reconstruction and pose estimation methods for endoscopic videos. However, currently available datasets do not support effective quantitative benchmarking. In this paper, we introduce a comprehensive endoscopic SLAM dataset consisting of 3D point cloud data for six porcine organs, capsule and standard endoscopy recordings, synthetically generated data, and a recording of a phantom colon made with a conventional endoscope in clinical use, with computed tomography (CT) scan ground truth. A Panda robotic arm, two commercially available capsule endoscopes, three conventional endoscopes with different camera properties, two high-precision 3D scanners, and a CT scanner were employed to collect data from eight ex vivo porcine gastrointestinal (GI) tract organs and a silicone colon phantom model. In total, 35 sub-datasets are provided with 6D pose ground truth for the ex vivo part: 18 sub-datasets for colon, 12 sub-datasets for stomach, and 5 sub-datasets for small intestine; four of these contain polyp-mimicking elevations created by an expert gastroenterologist. To verify the applicability of this data for use with real clinical systems, we recorded a video sequence with a state-of-the-art colonoscope from a full-representation silicone colon phantom. Synthetic capsule endoscopy frames from stomach, colon, and small intestine with both depth and pose annotations are included to facilitate the study of simulation-to-real transfer learning algorithms. Additionally, we propose Endo-SfMLearner, an unsupervised monocular depth and pose estimation method that combines residual networks with a spatial attention module to direct the network's focus toward distinguishable and highly textured tissue regions. The proposed approach uses a brightness-aware photometric loss to improve robustness under the fast frame-to-frame illumination changes commonly seen in endoscopic videos. To exemplify the use-case of the EndoSLAM dataset, the performance of Endo-SfMLearner is extensively compared with the state-of-the-art: SC-SfMLearner, Monodepth2, and SfMLearner. The codes and the link for the dataset are publicly available at https://github.com/CapsuleEndoscope/EndoSLAM. A video demonstrating the experimental setup and procedure is accessible as Supplementary Video 1.


Subject(s)
Algorithms; Capsule Endoscopy; Animals; Computer Simulation; Phantoms, Imaging; Swine; Tomography, X-Ray Computed
11.
IEEE Access ; 9: 631-640, 2021.
Article in English | MEDLINE | ID: mdl-33747680

ABSTRACT

While data-driven approaches excel at many image analysis tasks, the performance of these approaches is often limited by a shortage of annotated data available for training. Recent work in semi-supervised learning has shown that meaningful representations of images can be obtained from training with large quantities of unlabeled data, and that these representations can improve the performance of supervised tasks. Here, we demonstrate that an unsupervised jigsaw learning task, in combination with supervised training, results in up to a 9.8% improvement in correctly classifying lesions in colonoscopy images when compared to a fully-supervised baseline. We additionally benchmark improvements in domain adaptation and out-of-distribution detection, and demonstrate that semi-supervised learning outperforms supervised learning in both cases. In colonoscopy applications, these metrics are important given the skill required for endoscopic assessment of lesions, the wide variety of endoscopy systems in use, and the homogeneity that is typical of labeled datasets.
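One way to generate the unsupervised jigsaw pretext task described above (a sketch only; the tile count, permutation set, and network architecture are choices not detailed in the abstract):

```python
import numpy as np

def jigsaw_views(image, grid=3, rng=None):
    """Cut an image into grid x grid tiles and shuffle them; the permutation
    is the self-supervised label the network learns to predict, forcing it
    to build representations of image content without human annotations."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[0] // grid, image.shape[1] // grid
    tiles = [image[i*h:(i+1)*h, j*w:(j+1)*w]
             for i in range(grid) for j in range(grid)]
    perm = rng.permutation(len(tiles))
    return [tiles[p] for p in perm], perm
```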

12.
Med Image Anal ; 70: 101990, 2021 05.
Article in English | MEDLINE | ID: mdl-33609920

ABSTRACT

Current capsule endoscopes and next-generation robotic capsules for diagnosis and treatment of gastrointestinal diseases are complex cyber-physical platforms that must orchestrate sophisticated software and hardware functions. The desired tasks for these systems include visual localization, depth estimation, 3D mapping, disease detection and segmentation, automated navigation, active control, path realization, and optional therapeutic modules such as targeted drug delivery and biopsy sampling. Data-driven algorithms promise to enable many advanced functionalities for capsule endoscopes, but real-world data is challenging to obtain. Physically realistic simulations providing synthetic data have emerged as a solution for the development of data-driven algorithms. In this work, we present a comprehensive simulation platform for capsule endoscopy operations and introduce VR-Caps, a virtual active capsule environment that simulates a range of normal and abnormal tissue conditions (e.g., inflated, dry, wet) and varied organ types, capsule endoscope designs (e.g., mono, stereo, dual, and 360° camera), and the type, number, strength, and placement of internal and external magnetic sources that enable active locomotion. VR-Caps makes it possible to independently or jointly develop, optimize, and test medical imaging and analysis software for current and next-generation endoscopic capsule systems. To validate this approach, we train state-of-the-art deep neural networks to accomplish various medical image analysis tasks using simulated data from VR-Caps and evaluate the performance of these models on real medical data. Results demonstrate the usefulness and effectiveness of the proposed virtual platform in developing algorithms that quantify fractional coverage, camera trajectory, 3D map reconstruction, and disease classification. All of the code, pre-trained weights, and 3D organ models of the virtual environment, with detailed instructions on how to set up and use the environment, are made publicly available at https://github.com/CapsuleEndoscope/VirtualCapsuleEndoscopy, and a video demonstration can be seen in the supplementary videos (Video-I).


Subject(s)
Capsule Endoscopy; Robotics; Algorithms; Computer Simulation; Endoscopy; Humans; Neural Networks, Computer
13.
Opt Lett ; 46(3): 673-676, 2021 Feb 01.
Article in English | MEDLINE | ID: mdl-33528438

ABSTRACT

Spatial frequency domain imaging can map tissue scattering and absorption properties over a wide field of view, making it useful for clinical applications such as wound assessment and surgical guidance. This technique has previously required the projection of fully characterized illumination patterns. Here, we show that random and unknown speckle illumination can be used to sample the modulation transfer function of tissues at known spatial frequencies, allowing the quantitative mapping of optical properties with simple laser diode illumination. We compute low- and high-spatial frequency response parameters from the local power spectral density for each pixel and use a lookup table to accurately estimate absorption and scattering coefficients in tissue phantoms, in vivo human hand, and ex vivo swine esophagus. Because speckle patterns can be generated over a large depth of field and field of view with simple coherent illumination, this approach may enable optical property mapping in new form-factors and applications, including endoscopy.

14.
J Biomed Opt ; 25(11)2020 11.
Article in English | MEDLINE | ID: mdl-33251783

ABSTRACT

SIGNIFICANCE: Spatial frequency-domain imaging (SFDI) is a powerful technique for mapping tissue oxygen saturation over a wide field of view. However, current SFDI methods either require a sequence of several images with different illumination patterns or, in the case of single-snapshot optical properties (SSOP), introduce artifacts and sacrifice accuracy. AIM: We introduce OxyGAN, a data-driven, content-aware method to estimate tissue oxygenation directly from single structured-light images. APPROACH: OxyGAN is an end-to-end approach that uses supervised generative adversarial networks. Conventional SFDI is used to obtain ground truth tissue oxygenation maps for ex vivo human esophagi, in vivo hands and feet, and an in vivo pig colon sample under 659- and 851-nm sinusoidal illumination. We benchmark OxyGAN by comparing it with SSOP and a two-step hybrid technique that uses a previously developed deep learning model to predict optical properties followed by a physical model to calculate tissue oxygenation. RESULTS: When tested on human feet, cross-validated OxyGAN maps tissue oxygenation with an accuracy of 96.5%. When applied to sample types not included in the training set, such as human hands and pig colon, OxyGAN achieves a 93% accuracy, demonstrating robustness to various tissue types. On average, OxyGAN outperforms SSOP and a hybrid model in estimating tissue oxygenation by 24.9% and 24.7%, respectively. Finally, we optimize OxyGAN inference so that oxygenation maps are computed ∼10 times faster than previous work, enabling video-rate, 25-Hz imaging. CONCLUSIONS: Due to its rapid acquisition and processing speed, OxyGAN has the potential to enable real-time, high-fidelity tissue oxygenation mapping that may be useful for many clinical applications.


Subject(s)
Deep Learning; Animals; Hand; Lung; Swine
15.
IEEE Trans Med Imaging ; 39(12): 4297-4309, 2020 12.
Article in English | MEDLINE | ID: mdl-32795966

ABSTRACT

Although wireless capsule endoscopy is the preferred modality for diagnosis and assessment of small bowel diseases, the poor camera resolution is a substantial limitation for both subjective and automated diagnostics. Enhanced-resolution endoscopy has been shown to improve adenoma detection rate for conventional endoscopy and is likely to do the same for capsule endoscopy. In this work, we propose and quantitatively validate EndoL2H, a novel framework to learn a mapping from low- to high-resolution endoscopic images. We combine conditional adversarial networks with a spatial attention block to improve the resolution by factors of up to 8×, 10×, and 12×. Quantitative and qualitative studies demonstrate the superiority of EndoL2H over the state-of-the-art deep super-resolution methods Deep Back-Projection Networks (DBPN), Deep Residual Channel Attention Networks (RCAN), and Super-Resolution Generative Adversarial Network (SRGAN). Mean Opinion Score (MOS) tests were performed by 30 gastroenterologists to qualitatively assess and confirm the clinical relevance of the approach. EndoL2H is generally applicable to any endoscopic capsule system and has the potential to improve diagnosis and better harness computational approaches for polyp detection and characterization. Our code and trained models are available at https://github.com/CapsuleEndoscope/EndoL2H.


Subject(s)
Capsule Endoscopy
16.
Opt Express ; 28(13): 19641-19654, 2020 Jun 22.
Article in English | MEDLINE | ID: mdl-32672237

ABSTRACT

Poor access to eye care is a major global challenge that could be ameliorated by low-cost, portable, and easy-to-use diagnostic technologies. Diffuser-based imaging has the potential to enable inexpensive, compact optical systems that can reconstruct a focused image of an object over a range of defocus errors. Here, we present a diffuser-based computational funduscope that reconstructs important clinical features of a model eye. Compared to existing diffuser-imager architectures, our system features an infinite-conjugate design by relaying the ocular lens onto the diffuser. This offers shift-invariance across a wide field-of-view (FOV) and an invariant magnification across an extended depth range. Experimentally, we demonstrate fundus image reconstruction over a 33° FOV and robustness to ±4D refractive error using a constant point-spread-function. Combined with diffuser-based wavefront sensing, this technology could enable combined ocular aberrometry and funduscopic screening through a single diffuser sensor.


Subject(s)
Diagnostic Imaging/instrumentation; Image Processing, Computer-Assisted/instrumentation; Ophthalmoscopes; Retina/diagnostic imaging; Equipment Design; Humans; Light; Models, Theoretical
17.
Biomed Opt Express ; 11(6): 3091-3094, 2020 Jun 01.
Article in English | MEDLINE | ID: mdl-32637243

ABSTRACT

This feature issue of Biomedical Optics Express presents a cross-section of interesting and emerging work of relevance to optical technologies in low-resource settings. In particular, the technologies described here aim to address challenges to meeting healthcare needs in resource-constrained environments, including in rural and underserved areas. This collection of 18 papers includes papers on both optical system design and image analysis, with applications demonstrated for ex vivo and in vivo use. Altogether, these works portray the importance of global health research to the scientific community and the role that optics can play in addressing some of the world's most pressing healthcare challenges.

18.
Biomed Opt Express ; 11(5): 2373-2382, 2020 May 01.
Article in English | MEDLINE | ID: mdl-32499930

ABSTRACT

We present a non-invasive, label-free method of imaging blood cells flowing through human capillaries in vivo using oblique back-illumination capillaroscopy (OBC). Green light illumination allows simultaneous phase and absorption contrast, enhancing the ability to distinguish red and white blood cells. Single-sided illumination through the objective lens enables 200 Hz imaging with close illumination-detection separation and a simplified setup. Phase contrast is optimized when the illumination axis is offset from the detection axis by approximately 225 µm when imaging ∼80 µm deep in phantoms and human ventral tongue. We demonstrate high-speed imaging of individual red blood cells, white blood cells with sub-cellular detail, and platelets flowing through capillaries and vessels in human tongue. A custom pneumatic cap placed over the objective lens stabilizes the field of view, enabling longitudinal imaging of a single capillary for up to seven minutes. We present high-quality images of blood cells in individuals with Fitzpatrick skin phototypes II, IV, and VI, showing that the technique is robust to high peripheral melanin concentration. The signal quality, speed, simplicity, and robustness of this approach underscores its potential for non-invasive blood cell counting.

19.
Biomed Opt Express ; 11(5): 2560-2569, 2020 May 01.
Article in English | MEDLINE | ID: mdl-32499943

ABSTRACT

Targeted vector control strategies aiming to prevent mosquito-borne disease are severely limited by the logistical burden of vector surveillance: the monitoring of an area to understand mosquito species composition, abundance, and spatial distribution. We describe the development of an imaging system within a mosquito trap to remotely identify caught mosquitoes, including selection of the image resolution requirement, a design to meet that specification, and evaluation of the system. The necessary trap image resolution was determined to be 16 lp/mm, or 31.25 µm. An optics system meeting these specifications was implemented in a BG-GAT mosquito trap. Its ability to provide images suitable for accurate specimen identification was evaluated by providing entomologists with images of individual specimens, taken either with a microscope or within the trap, asking them to provide a species identification, and then comparing these results. No difference in identification accuracy between the microscope and trap images was found; however, due to the limitations of human species classification from a single image, the system is only able to provide accurate genus-level mosquito classification. Further integration of this system with machine learning computer vision algorithms has the potential to provide near-real-time mosquito surveillance data at the species level.

20.
Biomed Opt Express ; 11(4): 2268-2276, 2020 Apr 01.
Article in English | MEDLINE | ID: mdl-32341882

ABSTRACT

Quantification of optical absorption gaps in nailfold capillaries has recently shown promise as a non-invasive technique for neutropenia screening. Here we demonstrate a low-cost, portable attachment to a mobile phone that can resolve optical absorption gaps in nailfold capillaries using a reverse lens technique and oblique 520 nm illumination. Resolution <4 µm within a 1 mm² on-axis region is demonstrated, and wide field of view (3.5 mm × 4.8 mm) imaging is achieved with resolution <6 µm in the periphery. Optical absorption gaps (OAGs) are visible in superficial capillary loops of a healthy human participant with an ∼8-fold difference in contrast-to-noise ratio with respect to red blood cell absorption contrast. High-speed video capillaroscopy up to 240 frames per second (fps) is possible, though 60 fps is sufficient to resolve an average frequency of 37 OAGs/minute passing through nailfold capillaries. The simplicity and portability of this technique may enable the development of an effective non-invasive tool for white blood cell screening in point-of-care and global health settings.
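The contrast-to-noise ratio comparison above can be illustrated with one common definition of CNR (the paper's exact formulation is not given in the abstract, so this is an assumed form):

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: absolute difference of the region-of-interest
    and background mean intensities, normalized by the background standard
    deviation (population std, ddof=0)."""
    s = np.asarray(signal_roi, dtype=float)
    b = np.asarray(background_roi, dtype=float)
    return float(abs(s.mean() - b.mean()) / b.std())
```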
