Results 1 - 20 of 255
1.
Opt Express ; 32(4): 4916-4930, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38439231

ABSTRACT

In this paper, we assess the noise susceptibility of coherent macroscopic single random phase encoding (SRPE) lensless imaging by analyzing how much information is lost due to the presence of camera noise. We first used numerical simulation to obtain the noise-free point spread function (PSF) of a diffuser-based SRPE system. We then generated a noisy PSF by introducing the shot noise, read noise, and quantization noise seen in a real-world camera, and used various statistical measures to examine how the information shared between the noise-free and noisy PSFs degrades as the camera noise grows stronger. For comparison with lens-based imaging, we ran identical simulations with the diffuser in the lensless SRPE imaging system replaced by lenses. Our results show that SRPE lensless imaging systems are better at retaining information between corresponding noisy and noiseless PSFs under high camera noise than lens-based imaging systems. We also examined how physical parameters of diffusers, such as feature size and feature height variation, affect the noise robustness of an SRPE system. To the best of our knowledge, this is the first report to investigate the noise robustness of SRPE systems as a function of diffuser parameters, and it paves the way for using lensless SRPE systems to improve imaging in the presence of image sensor noise.
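The camera-noise model described above (shot, read, and quantization noise applied to a noise-free PSF) can be sketched in a few lines; the full-well capacity, read-noise level, and bit depth below are illustrative values, not the parameters used in the paper:

```python
import numpy as np

def add_camera_noise(psf, full_well=10000, read_noise_e=2.0, bits=8, rng=None):
    """Apply shot, read, and quantization noise to a noise-free PSF.

    Parameter values are illustrative placeholders; real sensors vary."""
    rng = np.random.default_rng(rng)
    # Scale normalized intensity to electron counts at the full-well capacity.
    electrons = psf / psf.max() * full_well
    shot = rng.poisson(electrons).astype(float)       # shot (Poisson) noise
    read = rng.normal(0.0, read_noise_e, psf.shape)   # read (Gaussian) noise
    noisy = np.clip(shot + read, 0, full_well) / full_well
    levels = 2 ** bits - 1
    return np.round(noisy * levels) / levels          # quantization noise
```

The noisy and noise-free PSFs can then be compared with the same statistical measures (e.g. correlation or mutual information) as in the paper.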

2.
Opt Express ; 32(4): 5943-5955, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38439309

ABSTRACT

In many areas ranging from medical imaging to visual entertainment, 3D information acquisition and display is a key task. In multifocus computational imaging, stacks of images of a 3D scene are acquired under different focus configurations and later combined by post-capture algorithms based on an image formation model to synthesize images of the scene from novel viewpoints. Stereoscopic augmented reality devices, through which it is possible to simultaneously view the three-dimensional real world along with an overlaid digital stereoscopic image pair, could benefit from the binocular content enabled by multifocus computational imaging. The spatial perception of the displayed stereo pairs can be controlled by synthesizing the desired point of view of each image of the stereo pair along with their parallax setting. The proposed method has the potential to alleviate the accommodation-convergence conflict and make augmented reality stereoscopic devices less prone to visual fatigue.

3.
Opt Express ; 32(5): 7495-7512, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38439428

ABSTRACT

Integral imaging has proven useful for three-dimensional (3D) object visualization in adverse environmental conditions such as partial occlusion and low light. This paper considers the problem of 3D object tracking. Two-dimensional (2D) object tracking within a scene is an active research area. Several recent algorithms use object detection methods to obtain 2D bounding boxes around objects of interest in each frame; one bounding box can then be selected out of many for each object of interest using motion prediction algorithms. Many of these algorithms rely on images obtained using traditional 2D imaging systems. A growing literature demonstrates the advantage of using 3D integral imaging instead of traditional 2D imaging for object detection and visualization in adverse environmental conditions, and integral imaging's depth sectioning ability has also proven beneficial for object detection and visualization. Integral imaging captures an object's depth in addition to its 2D spatial position in each frame. A recent study uses integral imaging for 3D reconstruction of the scene for object classification and utilizes the mutual information between the object's bounding box in this 3D reconstructed scene and the 2D central perspective to achieve passive depth estimation. We build on this method by using Bayesian optimization to track the object's depth in as few 3D reconstructions as possible. We study the performance of our approach on laboratory scenes with occluded objects moving in 3D and show that the proposed approach outperforms 2D object tracking. In our experimental setup, mutual information-based depth estimation with Bayesian optimization achieves depth tracking with as few as two 3D reconstructions per frame, which corresponds to the theoretical minimum number of 3D reconstructions required for depth estimation. To the best of our knowledge, this is the first report on 3D object tracking using the proposed approach.
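The mutual information between the bounding-box region in the 3D reconstruction and the corresponding region in the 2D central perspective can be estimated from a joint intensity histogram. A generic histogram-based sketch, not the paper's implementation, with an arbitrary bin count:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram-based mutual information between two equally sized patches."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint probability estimate
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0) terms
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

In the tracking loop, this score is evaluated at candidate depths, and the depth maximizing it is taken as the estimate.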

4.
Opt Express ; 32(2): 1489-1500, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38297699

ABSTRACT

We propose a diffuser-based lensless underwater optical signal detection system. The system consists of a lensless one-dimensional (1D) camera array equipped with random phase modulators for signal acquisition and a one-dimensional integral imaging convolutional neural network (1DInImCNN) for signal classification. During acquisition, the encoded signal transmitted by a light-emitting diode passes through a turbid medium as well as partial occlusion. The 1D diffuser-based lensless camera array captures the transmitted information, and the captured pseudorandom patterns are then classified through the 1DInImCNN to output the desired signal. We compared our proposed underwater lensless optical signal detection system with an equivalent lens-based system in terms of detection performance and computational cost; the results show that the former outperforms the latter. Moreover, we applied dimensionality reduction to the lensless patterns and studied their theoretical computational costs and detection performance. The results show that the detection performance of lensless systems does not suffer appreciably, making them strong candidates for low-cost compressive underwater optical imaging and signal detection.

5.
Opt Express ; 32(2): 1825-1835, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38297725

ABSTRACT

Image restoration and denoising have been challenging problems in optics and computer vision, and there has been active research in the optics and imaging communities toward robust, data-efficient systems for image restoration tasks. Recently, physics-informed deep learning has received wide interest in scientific problems. In this paper, we introduce a three-dimensional integral imaging-based, physics-informed, unsupervised CycleGAN (Generative Adversarial Network) algorithm for underwater image descattering and recovery. The system consists of a forward and a backward pass. The base architecture consists of an encoder and a decoder. The encoder takes the clean image along with the depth map and the degradation parameters and produces the degraded image. The decoder takes the degraded image generated by the encoder along with the depth map and produces the clean image along with the degradation parameters. To give the input degradation parameters physical significance with respect to a physical model of the degradation, we also incorporated the physical model into the loss function. The proposed model was assessed on a dataset curated through underwater experiments at various levels of turbidity. In addition to recovering the original image from the degraded image, the proposed algorithm also helps model the distribution from which the degraded images were sampled. Furthermore, the proposed three-dimensional integral imaging approach is compared with a traditional deep learning-based approach and a 2D imaging approach under turbid and partially occluded environments. The results suggest the proposed approach is promising, especially under the above experimental conditions.
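A common physics-based degradation model for scattering media, of the kind such a loss function can encode, mixes the transmission-attenuated clean image with ambient backscatter. This sketch uses the standard I = J·t + A·(1 − t) model with t = exp(−β·depth); the attenuation β and ambient term A are illustrative values, and the paper's exact model may differ:

```python
import numpy as np

def degrade(clean, depth, beta=0.8, ambient=0.7):
    """Physics-based degradation: I = J*t + A*(1 - t), with t = exp(-beta*depth).

    beta (attenuation coefficient) and ambient (veiling light A) are
    illustrative assumed values, playing the role of degradation parameters."""
    t = np.exp(-beta * depth)          # per-pixel transmission from the depth map
    return clean * t + ambient * (1.0 - t)
```

At zero depth the model returns the clean image; at large depth it saturates to the ambient light, which is the qualitative behavior a physics-informed loss can enforce.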

6.
Opt Express ; 32(2): 1789-1801, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38297723

ABSTRACT

Underwater scattering caused by suspended particles in the water severely degrades signal detection performance and poses significant challenges to object detection. This paper introduces an integrated dual-function deep learning-based algorithm for underwater object detection and classification and temporal signal detection using three-dimensional (3D) integral imaging (InIm) under degraded conditions. The proposed system performs efficient object classification and temporal signal detection in degraded environments such as turbidity and partial occlusion, and also provides the object range in the scene. A camera array captures the underwater objects in the scene and the temporally encoded binary signals transmitted for communication. The network is trained on a clear underwater scene without occlusion, whereas the test data are collected in turbid water with partial occlusion. Reconstructed 3D data is the input to a You Only Look Once (YOLOv4) neural network for object detection, and a convolutional neural network-based bidirectional long short-term memory network (CNN-BiLSTM) is used for temporal optical signal detection. Finally, the transmitted signal is decoded. In our experiments, 3D InIm provides better image reconstruction in a degraded environment than 2D sensing-based methods. Moreover, the reconstructed 3D images segment the object of interest out of occlusions and background, which improves the detection accuracy of the network with 3D InIm. To the best of our knowledge, this is the first report that combines deep learning with 3D InIm for simultaneous and integrated underwater object detection and optical signal detection in degraded environments.

7.
Opt Lett ; 48(15): 4009-4012, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37527105

ABSTRACT

The two-point-source resolution criterion is widely used to quantify the performance of imaging systems. The two main approaches for computing two-point-source resolution are detection-theoretic and visual analyses. The former assumes a shift-invariant system and cannot incorporate two different point spread functions (PSFs), which may be required in certain situations such as computing axial resolution. The latter, which includes the Rayleigh criterion, relies on the peak-to-valley ratio and does not properly account for the presence of noise. We present a heuristic generalization of the visual two-point-source resolution criterion using Gaussian processes (GP). This heuristic criterion is applicable to both shift-invariant and shift-variant imaging modalities, and it can incorporate different definitions of resolution expressed in terms of varying peak-to-valley ratios. Our approach implicitly incorporates information about noise statistics, such as the variance or signal-to-noise ratio, by making assumptions about the spatial correlation of PSFs in the form of kernel functions. Moreover, it does not rely on an analytic form of the PSF.
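The peak-to-valley ratio on which the visual criterion rests can be computed directly for two summed 1D Gaussian PSFs; this sketch assumes identical, noise-free PSFs, i.e. exactly the shift-invariant setting that the GP approach generalizes:

```python
import numpy as np

def peak_to_valley(separation, sigma=1.0, n=2001, span=10.0):
    """Peak-to-valley ratio of two summed 1D Gaussian PSFs.

    The Gaussian PSF shape and unit sigma are illustrative assumptions."""
    x = np.linspace(-span, span, n)
    psf = lambda center: np.exp(-(x - center) ** 2 / (2 * sigma ** 2))
    profile = psf(-separation / 2) + psf(separation / 2)
    valley = profile[n // 2]           # midpoint between the two sources
    return float(profile.max() / valley)
```

A chosen peak-to-valley threshold (e.g. the Rayleigh value) then defines the resolved separation; the GP-based criterion replaces this deterministic profile with a posterior over profiles.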

8.
Opt Express ; 31(14): 22863-22884, 2023 Jul 03.
Article in English | MEDLINE | ID: mdl-37475387

ABSTRACT

Integral imaging (InIm) is useful for passive ranging and 3D visualization of partially-occluded objects. We consider 3D object localization within a scene and in occlusions. 2D localization can be achieved using machine learning and non-machine learning-based techniques. These techniques aim to provide a 2D bounding box around each one of the objects of interest. A recent study uses InIm for the 3D reconstruction of the scene with occlusions and utilizes mutual information (MI) between the bounding box in this 3D reconstructed scene and the corresponding bounding box in the central elemental image to achieve passive depth estimation of partially occluded objects. Here, we improve upon this InIm method by using Bayesian optimization to minimize the number of required 3D scene reconstructions. We evaluate the performance of the proposed approach by analyzing different kernel functions, acquisition functions, and parameter estimation algorithms for Bayesian optimization-based inference for simultaneous depth estimation of objects and occlusion. In our optical experiments, mutual-information-based depth estimation with Bayesian optimization achieves depth estimation with a handful of 3D reconstructions. To the best of our knowledge, this is the first report to use Bayesian optimization for mutual information-based InIm depth estimation.
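A minimal GP-based Bayesian optimization loop, with a squared-exponential kernel and an upper-confidence-bound acquisition evaluated on a grid, conveys how the best depth can be localized with few objective evaluations. The kernel, acquisition function, and their parameters here are arbitrary choices, not necessarily those analyzed in the paper:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential (RBF) kernel matrix between 1D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(xs, ys, xq, ls=1.0, jitter=1e-6):
    """GP posterior mean and variance at query points xq given data (xs, ys)."""
    K = rbf(xs, xs, ls) + jitter * np.eye(len(xs))
    Ks = rbf(xs, xq, ls)
    mu = Ks.T @ np.linalg.solve(K, ys)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 0.0)

def bayes_opt(f, lo, hi, n_init=3, n_iter=8, seed=0):
    """Maximize f on [lo, hi] with few evaluations (UCB acquisition)."""
    rng = np.random.default_rng(seed)
    xs = list(rng.uniform(lo, hi, n_init))
    ys = [f(x) for x in xs]
    grid = np.linspace(lo, hi, 201)
    for _ in range(n_iter):
        mu, var = gp_posterior(np.array(xs), np.array(ys), grid)
        x_next = float(grid[np.argmax(mu + 2.0 * np.sqrt(var))])
        xs.append(x_next)
        ys.append(f(x_next))
    return xs[int(np.argmax(ys))]      # best depth found so far
```

In the paper's setting, `f` would be the mutual-information score of a 3D reconstruction at a candidate depth, so each evaluation costs one reconstruction.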

9.
Opt Express ; 31(7): 11213-11226, 2023 Mar 27.
Article in English | MEDLINE | ID: mdl-37155762

ABSTRACT

In this paper, we use the angular spectrum propagation method and numerical simulations of a single random phase encoding (SRPE) based lensless imaging system to quantify the spatial resolution of the system and assess its dependence on the system's physical parameters. Our compact SRPE imaging system consists of a laser diode that illuminates a sample placed on a microscope glass slide, a diffuser that spatially modulates the optical field transmitted through the input object, and an image sensor that captures the intensity of the modulated field. We considered two-point-source apertures as the input object and analyzed the propagated optical field captured by the image sensor. The output intensity patterns captured at each lateral separation of the input point sources were analyzed by correlating the pattern captured for overlapping point sources with the pattern captured for separated point sources. The lateral resolution of the system was calculated by finding the lateral separation of the point sources at which this correlation falls below a threshold of 35%, a value chosen in accordance with the Abbe diffraction limit of an equivalent lens-based system. A direct comparison between the SRPE lensless imaging system and an equivalent lens-based imaging system with similar system parameters shows that, despite being lensless, the SRPE system does not suffer in lateral resolution compared to lens-based imaging systems. We also investigated how this resolution is affected as the parameters of the lensless imaging system are varied. The results show that the SRPE lensless imaging system is robust to the object-to-diffuser and diffuser-to-sensor distances, the pixel size of the image sensor, and the number of pixels of the image sensor. To the best of our knowledge, this is the first work to investigate a lensless imaging system's lateral resolution, its robustness to multiple physical parameters of the system, and its comparison to lens-based imaging systems.
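The correlation-threshold procedure can be sketched as follows; the dictionary-of-patterns interface and the synthetic data in the usage are illustrations, not the authors' code:

```python
import numpy as np

def normalized_correlation(i1, i2):
    """Zero-mean normalized correlation between two captured intensity patterns."""
    a = i1 - i1.mean()
    b = i2 - i2.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def lateral_resolution(patterns_by_separation, threshold=0.35):
    """Smallest point-source separation whose captured pattern decorrelates
    below the threshold, relative to the overlapping-sources reference.

    `patterns_by_separation` maps separation -> captured intensity pattern,
    with key 0.0 holding the overlapping-point-source reference."""
    ref = patterns_by_separation[0.0]
    for sep in sorted(k for k in patterns_by_separation if k > 0):
        if normalized_correlation(ref, patterns_by_separation[sep]) < threshold:
            return sep
    return None  # never decorrelated within the measured range
```

The 35% default mirrors the threshold quoted above; in practice the patterns would come from the simulated sensor captures at each separation.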

10.
Opt Express ; 31(7): 11557-11560, 2023 Mar 27.
Article in English | MEDLINE | ID: mdl-37155788

ABSTRACT

This Feature Issue of Optics Express is organized in conjunction with the 2022 Optica conference on 3D Image Acquisition and Display: Technology, Perception and Applications, which was held in hybrid format from 11 to 15 July 2022 as part of the Imaging and Applied Optics Congress and Optical Sensors and Sensing Congress 2022 in Vancouver, Canada. This Feature Issue presents 31 articles covering the topics and scope of the 2022 3D Image Acquisition and Display conference. This Introduction provides a summary of the published articles that appear in this Feature Issue.

11.
Opt Express ; 31(2): 1367-1385, 2023 Jan 16.
Article in English | MEDLINE | ID: mdl-36785173

ABSTRACT

Underwater optical signal detection performance suffers from occlusion and turbidity in degraded environments. To tackle these challenges, three-dimensional (3D) integral imaging (InIm) with 4D correlation-based and deep-learning-based signal detection approaches has been proposed previously. Integral imaging is a 3D technique that utilizes multiple cameras to capture multiple perspectives of the scene and uses dedicated algorithms to reconstruct 3D images. However, these systems may have high computational requirements, multiple separate preprocessing steps, and the necessity of 3D image reconstruction and depth estimation of the illuminating modulated light source. In this paper, we propose an end-to-end integrated signal detection pipeline that uses the principle of one-dimensional (1D) InIm to capture the angular and intensity information of rays, but without the computational burden of full 3D reconstruction and depth estimation of the light source. The system is implemented with a 1D camera array instead of a 2D camera array and is trained with a convolutional neural network (CNN). The proposed approach addresses many of the aforementioned shortcomings to improve underwater optical signal detection speed and performance. In our experiment, temporally encoded signals transmitted by a light-emitting diode pass through a turbid and partially occluded environment and are captured by a 1D camera array. Captured video frames containing the spatiotemporal information of the optical signals are then fed into the CNN for signal detection without the need for depth estimation and 3D scene reconstruction; thus, the entire processing chain is integrated and optimized by deep learning. We compare the proposed approach with the previously reported depth-estimated 3D InIm with 3D scene reconstruction and deep learning in terms of computational cost at the receiver's end and detection performance, and we also include a comparison with conventional 2D imaging. The experimental results show that the proposed approach performs well in terms of both detection performance and computational cost. To the best of our knowledge, this is the first report on signal detection in degraded environments with a computationally efficient, end-to-end integrated 1D InIm capture stage and integrated deep learning for classification.

12.
Opt Express ; 31(1): 479-491, 2023 Jan 02.
Article in English | MEDLINE | ID: mdl-36606982

ABSTRACT

In this paper, we address the problem of object recognition in degraded environments including fog and partial occlusion. Both long wave infrared (LWIR) imaging systems and LiDAR (time-of-flight) imaging systems using Azure Kinect, which combine conventional visible and lidar sensing information, have previously been demonstrated for object recognition in ideal conditions. However, the object detection performance of Azure Kinect depth imaging systems may decrease significantly in adverse weather conditions such as fog, rain, and snow. Fog degrades the depth images of the Azure Kinect camera and the overall visibility of RGBD images (fused RGB and depth images), which can make object recognition tasks challenging. LWIR imaging may avoid these issues of lidar-based imaging systems; however, due to the poor spatial resolution of LWIR cameras, thermal imaging provides limited textural information within a scene and hence may fail to provide adequate discriminatory information to distinguish between objects of similar texture, shape, and size. To improve object detection in fog and occlusion, we use a three-dimensional (3D) integral imaging (InIm) system with a visible-range camera. 3D InIm provides depth information, mitigates the occlusion and fog in front of the object, and improves object recognition capabilities. For object recognition, the YOLOv3 neural network is used for each of the tested imaging systems. Since fog concentration affects the images from different sensors (visible, LWIR, and Azure Kinect depth cameras) in different ways, we compared the performance of the network on these images in terms of average precision and average miss rate. For the experiments we conducted, the results indicate that in degraded environments 3D InIm using visible-range cameras can provide better image reconstruction than the LWIR camera and the Azure Kinect RGBD camera, and it may therefore improve the detection accuracy of the network. To the best of our knowledge, this is the first report comparing object detection performance between a passive integral imaging system and active (LiDAR) sensing in degraded environments such as fog and partial occlusion.

13.
Opt Express ; 30(24): 43157-43171, 2022 Nov 21.
Article in English | MEDLINE | ID: mdl-36523020

ABSTRACT

Integral imaging (InIm) has proved useful for three-dimensional (3D) object sensing, visualization, and classification of partially occluded objects. This paper presents an information-theoretic approach for simulating and evaluating the integral imaging capture and reconstruction process. We utilize mutual information (MI) as a metric for evaluating the fidelity of the reconstructed 3D scene, and we also consider passive depth estimation using mutual information. We apply this formulation to optimal pitch estimation for integral imaging capture and reconstruction to maximize the longitudinal resolution. The effect of partial occlusion on integral imaging 3D reconstruction is evaluated using mutual information. Computer simulation tests and experiments are presented.

14.
Biomed Opt Express ; 13(10): 5377-5389, 2022 Oct 01.
Article in English | MEDLINE | ID: mdl-36425632

ABSTRACT

We present an automated method for COVID-19 screening using the intra-patient population distributions of bio-optical attributes extracted from digital holographic microscopy reconstructed red blood cells. Whereas previous approaches have aimed to identify infection by classifying individual cells, here we propose an approach that incorporates the attribute distribution information from the population of a given human subject's cells into our classification scheme and directly classifies subjects at the patient level. To capture the intra-patient distribution information in a generalized way, we propose an approach based on the Bag-of-Features (BoF) methodology to transform histograms of bio-optical attribute distributions into feature vectors for classification via a linear support vector machine. We compare our approach with simpler classifiers that directly use summary statistics such as the mean, standard deviation, skewness, and kurtosis of the distributions. We also compare it to a k-nearest neighbor classifier using the Kolmogorov-Smirnov distance as a distance metric between the attribute distributions of each subject. Lastly, we compare our approach to previously published methods for classification of individual red blood cells. In each case, the methodology proposed in this paper provides the highest patient classification performance, correctly classifying 22 out of 24 individuals and achieving 91.67% classification accuracy with 90.00% sensitivity and 92.86% specificity. The incorporation of distribution information for classification additionally led to the identification of a single temporal bio-optical attribute capable of highly accurate patient classification. To the best of our knowledge, this is the first report of a machine learning approach using the intra-patient probability distribution information of bio-optical attributes obtained from digital holographic microscopy for disease screening.
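The Bag-of-Features step — learning a codebook over per-cell attribute vectors, then encoding each subject as a codeword-frequency histogram that feeds the linear SVM — can be sketched with a tiny k-means; the codebook size and attribute dimensionality below are placeholders, not the paper's settings:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means to learn a Bag-of-Features codebook over attribute vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest codeword, then update the centers.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def bof_vector(cell_attributes, centers):
    """Encode one subject's per-cell attribute vectors as a normalized
    codeword-frequency histogram (the feature vector for the SVM)."""
    labels = np.argmin(
        ((cell_attributes[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

Each subject's histogram would then be classified by a linear SVM, as described above.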

15.
Opt Express ; 30(20): 35965-35977, 2022 Sep 26.
Article in English | MEDLINE | ID: mdl-36258535

ABSTRACT

We present a compact, field-portable, lensless, single random phase encoding biosensor for automated classification between healthy and sickle cell disease human red blood cells. Microscope slides containing 3 µl wet mounts of whole blood samples from healthy and sickle cell disease afflicted human donors are input into a lensless single random phase encoding (SRPE) system for disease identification. A partially coherent laser source (laser diode) illuminates the cells under inspection, wherein the object complex amplitude propagates to and is pseudorandomly encoded by a diffuser; the intensity of the diffracted complex waveform is then captured by a CMOS image sensor. The recorded opto-biological signatures are transformed using local binary pattern map generation during preprocessing, then input into a pretrained convolutional neural network for classification between healthy and disease states. We further provide analysis comparing the performance of several neural network architectures to optimize our classification strategy. Additionally, we assess the performance and computational savings of classifying on subsets of the opto-biological signatures with substantially reduced dimensionality, including one-dimensional cropping of the recorded signatures. To the best of our knowledge, this is the first report of a lensless SRPE biosensor for human disease identification. As such, the presented approach and results can be significant for low-cost disease identification both in the field and for healthcare systems in developing countries that suffer from constrained resources.
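Local binary pattern map generation, used here as preprocessing, assigns each pixel an 8-bit code from comparisons with its eight neighbors. A minimal dense-map sketch; the exact LBP variant used in the paper (neighbor ordering, radius) is an assumption here:

```python
import numpy as np

def lbp_map(img):
    """8-neighbor local binary pattern map of a 2D image; border pixels dropped."""
    center = img[1:-1, 1:-1]
    # Neighbor offsets in a fixed clockwise order; each sets one bit of the code.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    out = np.zeros(center.shape, dtype=np.int32)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out += (neighbor >= center).astype(np.int32) << bit
    return out
```

The resulting code map (values 0-255) replaces the raw opto-biological signature before it is fed to the CNN.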


Subjects
Anemia, Sickle Cell; Biosensing Techniques; Humans; Neural Networks, Computer; Erythrocytes; Anemia, Sickle Cell/diagnosis
16.
Opt Express ; 30(16): 29234-29245, 2022 Aug 01.
Article in English | MEDLINE | ID: mdl-36299102

ABSTRACT

In this manuscript, we describe the development of a single-shot, self-referencing, wavefront-division, multiplexing digital holographic microscope employing LED sources for large field-of-view quantitative phase imaging of biological samples. To address the difficulties of performing interferometry with sources of low temporal coherence, an optical arrangement utilizing multiple Fresnel biprisms is used for hologram multiplexing, enhancing the field of view and increasing the signal-to-noise ratio. The biprisms make interference patterns easy to obtain by automatically matching the path length between the two off-axis beams. The use of sources with low temporal coherence reduces speckle noise as well as the cost and form factor of the setup. The developed technique was implemented using both visible and UV LEDs and tested on polystyrene microspheres and human erythrocytes.


Subjects
Holography; Polystyrenes; Humans; Microscopy, Phase-Contrast; Holography/methods; Interferometry/methods; Erythrocytes
17.
Opt Express ; 30(2): 1205-1218, 2022 Jan 17.
Article in English | MEDLINE | ID: mdl-35209285

ABSTRACT

Traditionally, long wave infrared (LWIR) imaging has been used in photon-starved conditions for object detection and classification. We investigate passive three-dimensional (3D) integral imaging (InIm) in the visible spectrum for object classification using deep neural networks in photon-starved conditions and under partial occlusion. We compare the proposed passive 3D InIm operating in the visible domain with long wave infrared sensing in both the 2D and 3D imaging cases for object classification in degraded conditions. This comparison is based on average precision, recall, and miss rate. Our experimental results demonstrate that cold and hot object classification using 3D InIm in the visible spectrum may outperform both 2D and 3D imaging implemented in the long wave infrared spectrum for photon-starved and partially occluded scenes. While these experiments are not comprehensive, they demonstrate the potential of 3D InIm in the visible spectrum for low-light applications. Imaging in the visible spectrum provides higher spatial resolution, more compact optics, and lower-cost hardware compared with long wave infrared imaging. In addition, the higher spatial resolution obtained in the visible spectrum can improve object classification accuracy. Our experimental results provide a proof of concept for implementing visible spectrum imaging in place of traditional LWIR spectrum imaging for certain object recognition tasks.

18.
Opt Express ; 30(2): 1723-1736, 2022 Jan 17.
Article in English | MEDLINE | ID: mdl-35209327

ABSTRACT

We present an automated method for COVID-19 screening based on reconstructed phase profiles of red blood cells (RBCs) and a highly comparative time-series analysis (HCTSA). Video digital holographic data was obtained using a compact, field-portable shearing microscope to capture the temporal fluctuations and spatio-temporal dynamics of live RBCs. After numerical reconstruction of the digital holographic data, the optical volume is calculated at each timeframe of the reconstructed data to produce a time-series signal for each cell in our dataset. Over 6000 features are extracted from the time-varying optical volume sequences using the HCTSA to quantify the spatio-temporal behavior of the RBCs; a linear support vector machine is then used for classification of individual RBCs. Human subjects are then classified for COVID-19 based on the consensus of their cells' classifications. The proposed method is tested on a dataset of 1472 RBCs from 24 human subjects (10 COVID-19 positive, 14 healthy) collected at UConn Health Center. Following a cross-validation procedure, our system achieves 82.13% accuracy, with 92.72% sensitivity and 73.21% specificity (area under the receiver operating characteristic curve: 0.8357). Furthermore, the proposed system correctly labeled 21 out of 24 human subjects. To the best of our knowledge, this is the first report of a highly comparative time-series analysis using digital holographic microscopy data.
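Two steps of this pipeline can be sketched directly: the optical volume of a cell from one reconstructed phase frame (height = φλ/(2πΔn), integrated over the cell area) and the patient-level consensus over per-cell labels. The wavelength, refractive-index difference, pixel area, and 0.5 consensus threshold below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def optical_volume(phase_map, wavelength_um=0.532, delta_n=0.06, pixel_area_um2=1.0):
    """Optical volume from a reconstructed phase map: height = phi*lambda/(2*pi*dn).

    Wavelength, index difference, and pixel area are assumed values."""
    height = phase_map * wavelength_um / (2.0 * np.pi * delta_n)
    return float(height.sum() * pixel_area_um2)

def classify_patient(cell_labels, threshold=0.5):
    """Patient-level label by consensus of per-cell binary classifications."""
    return int(np.mean(cell_labels) > threshold)
```

Evaluating `optical_volume` on every frame of a cell's video yields the time series from which the HCTSA features are computed.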


Subjects
COVID-19/diagnostic imaging; Erythrocytes/classification; Holography/methods; Intravital Microscopy/methods; COVID-19/blood; Case-Control Studies; Equipment Design; Holography/instrumentation; Humans; Intravital Microscopy/instrumentation; Preliminary Data; ROC Curve; Sensitivity and Specificity
19.
Opt Express ; 30(3): 4655-4658, 2022 Jan 31.
Article in English | MEDLINE | ID: mdl-35209697

ABSTRACT

This Feature Issue of Optics Express is organized in conjunction with the 2021 Optica (OSA) conference on 3D Image Acquisition and Display: Technology, Perception and Applications, which was held virtually from 19 to 23 July 2021 as part of the Imaging and Sensing Congress 2021. This Feature Issue presents 29 articles covering the topics and scope of the 2021 3D conference. This Introduction provides a summary of these articles.

20.
IEEE J Biomed Health Inform ; 26(3): 1318-1328, 2022 03.
Article in English | MEDLINE | ID: mdl-34388103

ABSTRACT

This study presents a novel approach to automatically perform instant phenotypic assessment of red blood cell (RBC) storage lesion in phase images obtained by digital holographic microscopy. The proposed model combines a generative adversarial network (GAN) with a marker-controlled watershed segmentation scheme. The GAN model performed RBC segmentation and classification to develop aging markers, and the watershed segmentation was used to completely separate overlapping RBCs. Our approach achieved good segmentation and classification accuracy, with a Dice coefficient of 0.94 at a high throughput of about 152 cells per second. These results were compared with other deep neural network architectures. Moreover, our image-based deep learning models recognized the morphological changes that occur in RBCs during storage. Our deep learning-based classification results were in good agreement with previous findings on the changes in RBC markers (dominant shapes) affected by storage duration. We believe that our image-based deep learning models can be useful for automated assessment of RBC quality, storage lesions for safe transfusions, and diagnosis of RBC-related diseases.


Subjects
Deep Learning; Holography; Aging; Erythrocytes; Holography/methods; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer