Results 1 - 9 of 9

1.
Front Digit Health; 5: 1301019, 2023.
Article in English | MEDLINE | ID: mdl-38075521

ABSTRACT

Smartphone camera photoplethysmography (cPPG) enables non-invasive pulse oximetry and hemoglobin concentration measurements. However, the aesthetic-driven non-linearity in default image capture and preprocessing pipelines poses challenges for the consistency and transferability of cPPG across devices. This work identifies two key parameters, tone mapping and sensor threshold, that significantly impact cPPG measurements, and proposes a calibration method that linearizes camera measurements to improve cross-device consistency and transferability. A benchtop calibration system is also presented, leveraging a microcontroller and LED setup to characterize these parameters for each phone model. Our validation studies demonstrate that, with appropriate calibration and camera settings, cPPG applications can achieve 74% higher accuracy than with default settings. Moreover, the calibration method proves effective across different smartphone models (N=4), and calibrations performed on one phone can be applied to other smartphones of the same model (N=6), improving the consistency and scalability of cPPG applications.
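The core of the calibration is linearization: invert the camera's tone-mapping curve and remove the sensor threshold before computing any pulsatile ratios. Below is a minimal Python sketch of that idea; the gamma-style tone map, the constant `sensor_threshold`, and the ratio-of-ratios step are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def linearize(pixel_mean, gamma=2.2, sensor_threshold=0.02):
    """Undo a gamma-like tone map and a constant sensor offset.

    pixel_mean: per-frame mean channel value scaled to [0, 1].
    gamma, sensor_threshold: placeholders; in practice they would come from
    the per-phone-model benchtop LED calibration described in the paper.
    """
    linear = np.clip(pixel_mean, 0.0, 1.0) ** gamma          # invert tone mapping
    return np.clip(linear - sensor_threshold, 0.0, None)     # remove sensor offset

def ac_dc_ratio(signal):
    """Pulsatile (AC) amplitude over baseline (DC) of a PPG channel."""
    return (np.max(signal) - np.min(signal)) / np.mean(signal)

# Example with made-up per-frame means for two color channels over one pulse.
red = linearize(np.array([0.41, 0.43, 0.40, 0.42, 0.44]))
grn = linearize(np.array([0.30, 0.33, 0.29, 0.31, 0.34]))

# The ratio-of-ratios would feed an empirical SpO2/hemoglobin regression;
# its coefficients are device- and study-specific and omitted here.
print(f"ratio of ratios: {ac_dc_ratio(red) / ac_dc_ratio(grn):.3f}")
```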

2.
Sci Rep; 13(1): 8105, 2023 May 29.
Article in English | MEDLINE | ID: mdl-37248245

ABSTRACT

We propose an ultra-low-cost at-home blood pressure monitor that leverages a plastic clip with a spring-loaded mechanism to enable a smartphone with a flash LED and camera to measure blood pressure. Our system, called BPClip, is based on the scientific premise of measuring oscillometric pulsations at the fingertip. To let the smartphone measure the pressure applied to the digital artery, a movable pinhole projection moves closer to the camera as the user presses down on the clip: the harder the press, the more the spring-loaded mechanism compresses, so the size of the pinhole projection encodes the pressure applied to the finger. In conjunction, the brightness fluctuation of the pinhole projection correlates with the arterial pulse amplitude. By capturing the size and brightness of the pinhole projection with the built-in camera, the smartphone can measure a user's blood pressure with only a low-cost plastic clip and an app. Unlike pulse transit time and pulse wave analysis based blood pressure monitors, this system does not require a cuff measurement for user-specific calibration, nor does it require specialized smartphone models with custom sensors. In an early feasibility validation study with N = 29 participants with systolic blood pressures ranging from 88 to 157 mmHg, the BPClip system achieved a mean absolute error of 8.72 mmHg for systolic and 5.49 mmHg for diastolic blood pressure. A cost projection study estimates a material cost of $0.80 per unit in small-batch manufacturing of 1000 units, suggesting that at full-scale production the BPClip concept can be produced at very low cost compared to existing cuff-based monitors for at-home blood pressure management.
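To make the measurement principle concrete, here is a rough sketch of the oscillometric step in Python, assuming the pinhole size has already been mapped to applied pressure (mmHg) by a per-model calibration and the pinhole brightness fluctuation gives the oscillation amplitude. The fixed-ratio estimator and the 0.5/0.7 ratios are common textbook choices, not necessarily BPClip's exact algorithm.

```python
import numpy as np

def oscillometric_bp(pressure_mmHg, osc_amplitude, sys_ratio=0.5, dia_ratio=0.7):
    """Estimate SBP/MAP/DBP from an oscillometric envelope (fixed-ratio method).

    pressure_mmHg: applied finger pressure decoded from the pinhole size.
    osc_amplitude: pulse amplitude decoded from the pinhole brightness.
    """
    i_map = int(np.argmax(osc_amplitude))       # MAP at the envelope peak
    map_ = pressure_mmHg[i_map]

    # Systolic: pressure above MAP where the amplitude falls to sys_ratio * max.
    above = slice(i_map, len(pressure_mmHg))
    sbp = np.interp(sys_ratio * osc_amplitude[i_map],
                    osc_amplitude[above][::-1], pressure_mmHg[above][::-1])

    # Diastolic: pressure below MAP where the amplitude falls to dia_ratio * max.
    below = slice(0, i_map + 1)
    dbp = np.interp(dia_ratio * osc_amplitude[i_map],
                    osc_amplitude[below], pressure_mmHg[below])
    return sbp, map_, dbp

# Synthetic example: pressure ramps up as the user squeezes the clip harder.
p = np.linspace(40, 180, 50)
amp = np.exp(-((p - 110) / 30) ** 2)            # bell-shaped envelope, MAP ~110
sbp, map_, dbp = oscillometric_bp(p, amp)
print(f"SBP {sbp:.0f}, MAP {map_:.0f}, DBP {dbp:.0f} mmHg")
```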


Subjects
Blood Pressure Determination, Smartphone, Humans, Blood Pressure/physiology, Blood Pressure Monitors, Calibration, Pulse Wave Analysis
4.
Res Sq; 2023 Mar 03.
Article in English | MEDLINE | ID: mdl-36909577

ABSTRACT

We propose BPClip, a blood pressure monitor costing less than US$1 that leverages a plastic clip with a spring-loaded mechanism to enable any smartphone with a flash LED and a camera to measure blood pressure. Unlike prior approaches, our system measures systolic, mean, and diastolic blood pressure from oscillometric measurements, avoiding cumbersome per-user calibration, and does not require specialized smartphone models with custom sensors.

5.
Opt Express; 30(26): 46324-46335, 2022 Dec 19.
Article in English | MEDLINE | ID: mdl-36558589

ABSTRACT

Conventional models for lensless imaging assume that each measurement results from convolving a given scene with a single experimentally measured point-spread function. These models fail to simulate lensless cameras truthfully, as they do not account for optical aberrations or scenes with depth variations. Our work shows that learning a supervised primal-dual reconstruction method yields image quality matching the state of the art in the literature without demanding a large network capacity. We show that embedding learnable forward and adjoint models improves the reconstruction quality of lensless images (+5 dB PSNR) compared to works that assume a fixed point-spread function.
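For context, a supervised (learned) primal-dual reconstruction alternates small trainable updates in measurement space and image space around learnable forward and adjoint operators. The iteration below is a generic sketch in our own notation; the symbols A_theta, Gamma, and Lambda are assumptions, not the paper's exact parameterization.

```latex
\begin{aligned}
d_{k+1} &= \Gamma_{\phi_k}\big(d_k,\; A_\theta(x_k),\; b\big)
  && \text{(dual / measurement-space network)} \\
x_{k+1} &= \Lambda_{\psi_k}\big(x_k,\; A_\theta^{\top}(d_{k+1})\big)
  && \text{(primal / image-space network)}
\end{aligned}
```

Here $b$ is the raw lensless measurement, $x_k$ the current image estimate, and $A_\theta$, $A_\theta^{\top}$ the learnable forward and adjoint models; all parameters $(\theta, \phi_k, \psi_k)$ are trained end-to-end against ground-truth images.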

6.
Light Sci Appl; 9(1): 183, 2020 Oct 29.
Article in English | MEDLINE | ID: mdl-33298828

ABSTRACT

Dynamic axial focusing functionality has recently experienced widespread incorporation in microscopy, augmented/virtual reality (AR/VR), adaptive optics and material processing. However, the limitations of existing varifocal tools continue to beset the performance capabilities and operating overhead of the optical systems that mobilize such functionality. The varifocal tools that are the least burdensome to operate (e.g. liquid crystal, elastomeric or optofluidic lenses) suffer from low (≈100 Hz) refresh rates. Conversely, the fastest devices sacrifice either critical capabilities such as their dwelling capacity (e.g. acoustic gradient lenses or monolithic micromechanical mirrors) or low operating overhead (e.g. deformable mirrors). Here, we present a general-purpose random-access axial focusing device that bridges these previously conflicting features of high speed, dwelling capacity and lightweight drive by employing low-rigidity micromirrors that exploit the robustness of defocusing phase profiles. Geometrically, the device consists of an 8.2 mm diameter array of piston-motion and 48-µm-pitch micromirror pixels that provide 2π phase shifting for wavelengths shorter than 1100 nm with 10-90% settling in 64.8 µs (i.e., 15.44 kHz refresh rate). The pixels are electrically partitioned into 32 rings for a driving scheme that enables phase-wrapped operation with circular symmetry and requires <30 V per channel. Optical experiments demonstrated the array's wide focusing range with a measured ability to target 29 distinct resolvable depth planes. Overall, the features of the proposed array offer the potential for compact, straightforward methods of tackling bottlenecked applications, including high-throughput single-cell targeting in neurobiology and the delivery of dense 3D visual information in AR/VR.
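As a point of reference for how piston-only pixels can emulate a lens, the wrapped defocus profile a reflective piston array must reproduce is, under a paraxial thin-lens assumption (our notation, not taken from the paper):

```latex
\varphi(r) \;=\; \left(-\,\frac{\pi r^{2}}{\lambda f}\right) \bmod 2\pi ,
\qquad
h(r) \;=\; \frac{\lambda}{4\pi}\,\varphi(r),
```

so each of the 32 electrically partitioned rings is driven to the wrapped phase value at its radius $r$, and a full $2\pi$ of optical phase costs only $\lambda/2$ of mechanical stroke because reflection doubles the path length, which keeps the required mirror travel, and hence the drive voltage, low.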

7.
Light Sci Appl; 9: 171, 2020.
Article in English | MEDLINE | ID: mdl-33082940

ABSTRACT

Miniature fluorescence microscopes are a standard tool in systems biology. However, widefield miniature microscopes capture only 2D information, and modifications that enable 3D capabilities increase size and weight and have poor resolution outside a narrow depth range. Here, we achieve 3D capability by replacing the tube lens of a conventional 2D Miniscope with an optimized multifocal phase mask at the objective's aperture stop. Placing the phase mask at the aperture stop significantly reduces the size of the device, and varying the focal lengths enables a uniform resolution across a wide depth range. The phase mask encodes the 3D fluorescence intensity into a single 2D measurement, and the 3D volume is recovered by solving a sparsity-constrained inverse problem. We provide methods for designing and fabricating the phase mask and an efficient forward model that accounts for the field-varying aberrations in miniature objectives. We demonstrate a prototype that is 17 mm tall and weighs 2.5 grams, achieving 2.76 µm lateral and 15 µm axial resolution across most of the 900 × 700 × 390 µm3 volume at 40 volumes per second. The performance is validated experimentally on resolution targets, dynamic biological samples, and mouse brain tissue. Compared with existing miniature single-shot volume-capture implementations, our system is smaller and lighter and achieves more than 2× better lateral and axial resolution throughout a 10× larger usable depth range. Our microscope design provides single-shot 3D imaging for applications where a compact platform matters, such as volumetric neural imaging in freely moving animals and 3D motion studies of dynamic samples in incubators and lab-on-a-chip devices.
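The "sparsity-constrained inverse problem" mentioned above is, in broad strokes, a non-negative sparse deconvolution across depth planes. A representative simplified formulation in our notation, ignoring the field-varying aberrations that the paper's forward model additionally accounts for, is:

```latex
\hat{v} \;=\; \operatorname*{arg\,min}_{v \,\ge\, 0}\;
\tfrac{1}{2}\Big\lVert\, b \;-\; \sum_{z} h_z \ast v_z \,\Big\rVert_2^{2}
\;+\; \tau\,\lVert v \rVert_1 ,
```

where $b$ is the single 2D measurement, $v_z$ the fluorescence at depth $z$, $h_z$ the depth-dependent point-spread function of the phase mask, and $\tau$ the sparsity weight; such problems are typically solved with proximal methods (e.g. FISTA or ADMM).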

8.
Opt Express; 28(20): 28969-28986, 2020 Sep 28.
Article in English | MEDLINE | ID: mdl-33114805

ABSTRACT

Light field microscopy (LFM) uses a microlens array (MLA) near the sensor plane of a microscope to achieve single-shot 3D imaging of a sample without any moving parts. Unfortunately, the 3D capability of LFM comes with a significant loss of lateral resolution at the focal plane. Placing the MLA near the pupil plane of the microscope, instead of the image plane, can mitigate this loss of resolution near the focal plane and provides an efficient forward model, at the expense of field-of-view (FOV). Here, we demonstrate improved resolution across a large volume with Fourier DiffuserScope, which uses a diffuser in the pupil plane to encode 3D information, then computationally reconstructs the volume by solving a sparsity-constrained inverse problem. Our diffuser consists of randomly placed microlenses with varying focal lengths; the random positions provide a larger FOV compared to a conventional MLA, and the diverse focal lengths improve the axial depth range. To predict system performance from the diffuser parameters, we establish, for the first time, a theoretical framework and design guidelines, verify them with numerical simulations, and then build an experimental system that achieves < 3 µm lateral and 4 µm axial resolution over a 1000 × 1000 × 280 µm3 volume. Our diffuser design outperforms the MLA used in LFM, providing more uniform resolution over a larger volume, both laterally and axially.
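A minimal sketch of the diffuser design idea, randomly placed microlenses with focal lengths drawn from a range, written as a phase-mask generator in Python. All parameter values and the nearest-lenslet assignment rule are illustrative assumptions, not the fabricated design.

```python
import numpy as np

def multifocal_diffuser_phase(n=256, pitch_um=2.0, n_lenslets=60,
                              f_range_um=(8_000, 20_000),
                              wavelength_um=0.51, seed=0):
    """Phase profile (radians) of randomly placed microlenses with varied
    focal lengths; each pixel gets the quadratic phase of its nearest
    lenslet center, so lenslet apertures are Voronoi cells of the centers."""
    rng = np.random.default_rng(seed)
    side_um = n * pitch_um
    centers = rng.uniform(0, side_um, size=(n_lenslets, 2))   # lenslet centers
    focals = rng.uniform(*f_range_um, size=n_lenslets)        # focal lengths

    y, x = np.mgrid[0:n, 0:n] * pitch_um
    # Squared distance from every pixel to every lenslet center: (n, n, L)
    d2 = (x[..., None] - centers[:, 0]) ** 2 + (y[..., None] - centers[:, 1]) ** 2
    nearest = np.argmin(d2, axis=-1)                          # owning lenslet
    r2 = np.take_along_axis(d2, nearest[..., None], axis=-1)[..., 0]

    phase = -np.pi * r2 / (wavelength_um * focals[nearest])   # paraxial lens phase
    return np.mod(phase, 2 * np.pi)

# Quick look at the mask; a real design would also enforce fill factor,
# lenslet size limits, and fabrication constraints.
mask = multifocal_diffuser_phase()
print(mask.shape, float(mask.min()), float(mask.max()))
```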

9.
Opt Express; 27(20): 28075-28090, 2019 Sep 30.
Article in English | MEDLINE | ID: mdl-31684566

ABSTRACT

Mask-based lensless imagers are smaller and lighter than traditional lensed cameras. In these imagers, the sensor does not directly record an image of the scene; rather, a computational algorithm reconstructs it. Typically, mask-based lensless imagers use a model-based reconstruction approach that suffers from long compute times and a heavy reliance on both system calibration and heuristically chosen denoisers. In this work, we address these limitations using a bounded-compute, trainable neural network to reconstruct the image. We leverage our knowledge of the physical system by unrolling a traditional model-based optimization algorithm, whose parameters we optimize using experimentally gathered ground-truth data. Optionally, images produced by the unrolled network are then fed into a jointly-trained denoiser. As compared to traditional methods, our architecture achieves better perceptual image quality and runs 20× faster, enabling interactive previewing of the scene. We explore a spectrum between model-based and deep learning methods, showing the benefits of using an intermediate approach. Finally, we test our network on images taken in the wild with a prototype mask-based camera, demonstrating that our network generalizes to natural images.
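To make "unrolling a traditional model-based optimization algorithm" concrete, here is a compact PyTorch sketch of a fixed-depth unrolled loop with learnable per-iteration step sizes and thresholds, followed by a jointly trained denoiser. It is a generic illustration, not the authors' exact unrolled architecture; the class name and the simplified single-convolution forward model are our assumptions.

```python
import torch
import torch.nn as nn
import torch.fft as fft

class UnrolledLensless(nn.Module):
    """A few unrolled ISTA-style iterations with learnable per-iteration step
    sizes and thresholds, followed by a small jointly trained denoiser."""

    def __init__(self, psf, iters=5):
        super().__init__()
        self.register_buffer("H", fft.fft2(psf))                # fixed PSF spectrum
        self.step = nn.Parameter(torch.full((iters,), 0.5))     # learnable step sizes
        self.thresh = nn.Parameter(torch.full((iters,), 1e-3))  # learnable thresholds
        self.denoiser = nn.Sequential(                          # tiny stand-in denoiser
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def A(self, x):   # forward model: circular convolution with the PSF
        return fft.ifft2(fft.fft2(x) * self.H).real

    def At(self, y):  # adjoint: correlation with the PSF
        return fft.ifft2(fft.fft2(y) * torch.conj(self.H)).real

    def forward(self, b):
        x = torch.zeros_like(b)
        for k in range(self.step.shape[0]):
            x = x - self.step[k] * self.At(self.A(x) - b)       # data-fidelity step
            x = torch.sign(x) * torch.clamp(x.abs() - self.thresh[k], min=0.0)
        return self.denoiser(x.unsqueeze(1)).squeeze(1)         # learned refinement

# Usage sketch with random placeholders; in practice the PSF is measured and the
# whole model is trained end-to-end against lensed ground-truth images.
psf = torch.rand(64, 64); psf = psf / psf.sum()
b = torch.rand(2, 64, 64)                                       # batch of measurements
print(UnrolledLensless(psf)(b).shape)                           # -> torch.Size([2, 64, 64])
```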
