ABSTRACT
High-quality, large-volume imaging of biological tissue with optical coherence tomography (OCT) requires a large number of densely spaced scans, which results in a large volume of acquired data. This may lead to corruption of the data with motion artifacts caused by the natural motion of biological tissue, and could potentially conflict with the maximum permissible exposure of biological tissue to optical radiation. Therefore, OCT can benefit greatly from approaches to sparse or compressive sampling of the data in which the signal is recovered from sub-Nyquist measurements. In this paper, a new energy-guided compressive sensing approach is proposed for improving the quality of images acquired with Fourier domain OCT (FD-OCT) and reconstructed from sparse data sets. The proposed algorithm learns an optimized sampling probability density function based on the energy distribution of the training data set, which is then used for sparse sampling instead of the commonly used uniformly random sampling. It was demonstrated that the proposed energy-guided learning approach to compressive FD-OCT of retina images requires 45% fewer samples than the conventional uniform compressive sensing (CS) approach while achieving similar reconstruction performance. This novel approach to sparse sampling has the potential to significantly reduce data acquisition while maintaining image quality.
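As a rough illustration of the sampling step described above, the sketch below learns a sampling probability density from the energy distribution of a synthetic training set and draws a sparse sampling mask from it instead of a uniformly random one. It is a minimal Python sketch; the array sizes, the 35% sampling rate, and all variable names are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch, assuming 1-D A-scan spectra stacked row-wise in `training_data`;
# all sizes and names are illustrative, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set of spectral measurements (rows are A-scans).
training_data = rng.standard_normal((200, 1024)) * np.hanning(1024)

# 1. Learn a sampling probability density from the training energy distribution.
energy = np.mean(training_data ** 2, axis=0)        # per-location energy
pdf = energy / energy.sum()                         # normalise to a PDF

# 2. Draw a sparse sampling mask that favours high-energy locations,
#    instead of the conventional uniformly random mask.
n_keep = int(0.35 * pdf.size)                       # e.g. keep 35% of the samples
keep_idx = rng.choice(pdf.size, size=n_keep, replace=False, p=pdf)
mask = np.zeros(pdf.size, dtype=bool)
mask[keep_idx] = True

# 3. New scans are then measured (or retained) only where the mask is True;
#    the full signal is subsequently recovered with any standard CS solver.
new_scan = rng.standard_normal(1024)
sparse_measurement = np.where(mask, new_scan, 0.0)
```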
Subject(s)
Computer-Assisted Image Processing/methods, Optical Coherence Tomography/instrumentation, Optical Coherence Tomography/methods, Algorithms, Artifacts, Biosensing Techniques/methods, Cornea/pathology, Diagnostic Imaging/methods, Fingers, Fourier Analysis, Humans, Statistical Models, Probability, Retina/pathology, Retinal Vessels/pathology, Signal-to-Noise Ratio
ABSTRACT
Compressive fluorescence microscopy has been proposed as a promising approach for fast acquisition at sub-Nyquist sampling rates. Given that signal-to-noise ratio (SNR) is very important in the design of fluorescence microscopy systems, a new saliency-guided sparse reconstruction ensemble fusion system is proposed for improving SNR in compressive fluorescence microscopy. This system produces an ensemble of sparse reconstructions using adaptively optimized probability density functions derived from the underlying saliency, rather than the common uniform random sampling approach. The ensemble of sparse reconstructions is then fused via ensemble expectation merging. Experimental results using real fluorescence microscopy data sets show that significantly improved SNR can be achieved compared to existing compressive fluorescence microscopy approaches, with SNR gains of 16 dB to 9 dB over a noise range of 1.5% to 10% standard deviation at the same compression rate.
ABSTRACT
An important image post-processing step for optical coherence tomography (OCT) images is speckle noise reduction. Noise in OCT images is multiplicative in nature and is difficult to suppress because, in addition to the noise component, OCT speckle also carries structural information about the imaged object. To address this issue, a novel speckle noise reduction algorithm was developed. The algorithm projects the imaging data into the logarithmic space, and a general Bayesian least squares estimate of the noise-free data is found using a conditional posterior sampling approach. The proposed algorithm was tested on a number of rodent (rat) retina images acquired in vivo with an ultrahigh resolution OCT system. The performance of the algorithm was compared to that of state-of-the-art algorithms currently available for speckle denoising, such as adaptive median, maximum a posteriori (MAP) estimation, linear least squares estimation, anisotropic diffusion and wavelet-domain filtering methods. Experimental results show that the proposed approach achieves state-of-the-art performance relative to the other tested methods in terms of signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), edge preservation, and equivalent number of looks (ENL) measures. Visual comparisons also show that the proposed approach provides effective speckle noise suppression while preserving sharpness and improving the visibility of morphological details, such as tiny capillaries and thin layers, in the rat retina OCT images.
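To make the general idea concrete, here is a minimal Python sketch of a posterior-mean (Bayesian least squares) estimate computed in the logarithmic domain, with candidate noise-free values sampled from each pixel's neighbourhood and weighted by their likelihood. The window size, noise standard deviation, and sampling scheme are illustrative assumptions and are much simpler than the conditional posterior sampling used in the paper.

```python
# Minimal sketch of a posterior-mean estimate in the log domain; window size,
# noise sigma and the neighbourhood-based sampling are illustrative assumptions.
import numpy as np

def bls_log_despeckle(img, win=5, noise_sigma=0.35, eps=1e-8):
    g = np.log(img + eps)                      # multiplicative -> additive noise
    pad = win // 2
    gp = np.pad(g, pad, mode="reflect")
    out = np.empty_like(g)
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            # Candidate noise-free values sampled from the local neighbourhood.
            cand = gp[i:i + win, j:j + win].ravel()
            # Importance weights ~ likelihood of the observation given each candidate.
            w = np.exp(-0.5 * ((g[i, j] - cand) / noise_sigma) ** 2)
            out[i, j] = np.sum(w * cand) / (np.sum(w) + eps)   # posterior mean
    return np.exp(out)                          # back to linear intensity space

# Usage on a small synthetic speckled image:
rng = np.random.default_rng(1)
clean = np.ones((64, 64))
clean[20:40, 20:40] = 3.0
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
despeckled = bls_log_despeckle(speckled)
```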
Subject(s)
Artifacts, Retina/anatomy & histology, Optical Coherence Tomography/methods, Algorithms, Animals, Bayes Theorem, Choroid/blood supply, Rats
ABSTRACT
In this paper, we address the hyperspectral image (HSI) classification task with a generative adversarial network and conditional random field (GAN-CRF)-based framework, which integrates semisupervised deep learning with a probabilistic graphical model, and we make three contributions. First, we design four types of convolutional and transposed convolutional layers that consider the characteristics of HSIs to help extract discriminative features from limited numbers of labeled HSI samples. Second, we construct semisupervised generative adversarial networks (GANs) to alleviate the shortage of training samples by assigning labels to unlabeled samples and by implicitly reconstructing the real HSI data distribution through adversarial training. Third, we build dense conditional random fields (CRFs) on top of random variables that are initialized to the softmax predictions of the trained GANs and conditioned on the HSIs to refine the classification maps. This semisupervised framework leverages the merits of discriminative and generative models through a game-theoretical approach. Moreover, even though we used very small numbers of labeled training HSI samples from the two most challenging and extensively studied datasets, the experimental results demonstrate that spectral-spatial GAN-CRF (SS-GAN-CRF) models achieved top-ranking accuracy for semisupervised HSI classification.
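The semisupervised GAN component can be pictured with the standard K+1-class discriminator formulation, sketched below in PyTorch: the discriminator scores K land-cover classes plus one extra "fake" class, so labeled, unlabeled, and generated samples all contribute to its loss. The plain linear layers, band count, class count, and batch sizes are placeholders for brevity; the paper's models use HSI-specific convolutional layers.

```python
# Minimal PyTorch sketch of a K+1-class semisupervised discriminator loss;
# plain linear layers and all sizes below are placeholders, not the paper's
# HSI-specific convolutional architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 9                                    # assumed number of land-cover classes
D = nn.Sequential(nn.Linear(200, 128), nn.ReLU(), nn.Linear(128, K + 1))
G = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 200))

x_lab = torch.randn(16, 200)             # few labeled spectra (200 bands assumed)
y_lab = torch.randint(0, K, (16,))
x_unl = torch.randn(64, 200)             # unlabeled spectra
x_fake = G(torch.randn(64, 32)).detach() # generated spectra

logits_lab, logits_unl, logits_fake = D(x_lab), D(x_unl), D(x_fake)

# Supervised term: predict the correct class for the few labeled samples.
loss_sup = F.cross_entropy(logits_lab, y_lab)
# Unsupervised terms: unlabeled samples should not fall into the extra
# "fake" class K, while generated samples should.
p_fake_unl = F.softmax(logits_unl, dim=1)[:, K]
loss_unl = -torch.log(1.0 - p_fake_unl + 1e-8).mean()
loss_fake = F.cross_entropy(logits_fake, torch.full((64,), K, dtype=torch.long))

loss_D = loss_sup + loss_unl + loss_fake   # discriminator objective to minimize
```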
ABSTRACT
Retinal layer thickness, evaluated as a function of spatial position from optical coherence tomography (OCT) images, is an important diagnostic marker for many retinal diseases. However, accurate segmentation of individual retinal layers is difficult due to factors such as speckle noise, low image contrast, and irregularly shaped morphological features such as retinal detachments, macular holes, and drusen. To address this issue, a computer method for retinal layer segmentation from OCT images is presented. An efficient two-step kernel-based optimization scheme is employed to first identify the approximate locations of the individual layers, which are then refined to obtain accurate segmentation results. The performance of the algorithm was tested on a set of retinal images acquired in vivo from healthy and diseased rodent models with a high speed, high resolution OCT system. Experimental results show that the proposed approach provides accurate segmentation for OCT images affected by speckle noise, even under sub-optimal conditions of low image contrast and in the presence of irregularly shaped structural features.
Subject(s)
Algorithms, Computer-Assisted Image Interpretation/methods, Automated Pattern Recognition/methods, Retina/pathology, Retinal Degeneration/pathology, Subtraction Technique, Optical Coherence Tomography/methods, Animals, Artificial Intelligence, Image Enhancement/methods, Rats
ABSTRACT
Although computer simulations indicate that mitosis may be important to the mechanics of morphogenetic movements, algorithms to identify mitoses in bright-field images of embryonic epithelia have not previously been available. Here, the authors present an algorithm that identifies mitoses and their orientations based on the motion field between successive images. Within this motion field, the algorithm seeks 'mitosis motion field prototypes' characterised by convergent motion in one direction and divergent motion in the orthogonal direction, the local motions produced by the division process. The algorithm uses image processing, vector field analysis and pattern recognition to identify occurrences of this prototype and to determine its orientation. When applied to time-lapse images of gastrulation- and neurulation-stage amphibian (Ambystoma mexicanum) embryos, the algorithm achieves identification accuracies of 68 and 67%, respectively, and angular accuracies on the order of 30 degrees, values sufficient to assess the role of mitosis in these developmental processes.
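A minimal way to look for this convergent/divergent prototype is to examine the local strain of the motion field: at a division site, one eigenvalue of the symmetric strain tensor is strongly positive (divergence) and the other strongly negative (convergence), and the divergent eigenvector gives the orientation. The Python sketch below assumes a dense motion field (u, v) is already available and uses an illustrative threshold; it is not the paper's full image-processing and pattern-recognition pipeline.

```python
# Minimal sketch: saddle-like (convergent/divergent) strain detection in a
# dense motion field (u, v); the threshold is an illustrative assumption.
import numpy as np

def mitosis_candidates(u, v, thresh=0.05):
    # Velocity-gradient (strain) components via finite differences.
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    exx, eyy = du_dx, dv_dy
    exy = 0.5 * (du_dy + dv_dx)
    # Eigenvalues of the symmetric 2x2 strain tensor at every pixel.
    mean = 0.5 * (exx + eyy)
    rad = np.sqrt((0.5 * (exx - eyy)) ** 2 + exy ** 2)
    lam1, lam2 = mean + rad, mean - rad
    # Prototype: strong divergence along one axis, strong convergence along
    # the orthogonal axis -> eigenvalues of opposite sign.
    mask = (lam1 > thresh) & (lam2 < -thresh)
    # Division orientation ~ direction of the divergent principal axis.
    theta = 0.5 * np.arctan2(2.0 * exy, exx - eyy)
    return mask, np.degrees(theta)

# Usage on a synthetic saddle flow (divergent along x, convergent along y):
yy, xx = np.mgrid[-32:32, -32:32] / 32.0
mask, theta = mitosis_candidates(2.0 * xx, -2.0 * yy)
print(mask.any(), theta[32, 32])   # True, ~0 degrees
```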
Subject(s)
Epithelium/embryology, Mitosis, Algorithms, Ambystoma mexicanum/embryology, Animals, Biomedical Engineering, Computer Simulation, Embryonic Development, Gastrulation, Computer-Assisted Image Processing, Biological Models, Movement, Neurulation
ABSTRACT
This paper proposes an image segmentation method named iterative region growing using semantics (IRGS), which is characterized by two aspects. First, it uses graduated increased edge penalty (GIEP) functions within the traditional Markov random field (MRF) context model in formulating the objective functions. Second, IRGS uses a region growing technique in searching for the solutions to these objective functions. The proposed IRGS is an improvement over traditional MRF-based approaches in that the edge strength information is utilized and a more stable estimation of model parameters is achieved. Moreover, the IRGS method provides the possibility of building a hierarchical representation of the image content, and allows various region features and even domain knowledge to be incorporated into the segmentation process. The algorithm has been successfully tested on several artificial images and synthetic aperture radar (SAR) images.
Subject(s)
Algorithms, Artificial Intelligence, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Automated Pattern Recognition/methods, Subtraction Technique, Reproducibility of Results, Sensitivity and Specificity
ABSTRACT
Veredas (palm swamps) are wetland complexes associated with the Brazilian savanna (cerrado) that often represent the only available source of water for the ecosystem during the dry months. Their extent and condition are largely unknown, and their cartography is an essential issue for their protection. This research article evaluates fine resolution satellite data in both the radar (Radarsat-1) and optical (ASTER) domains for the delineation and characterization of veredas. Two separate approaches are evaluated. First, given the known potential of Radarsat-1 images for wetland inventories, the automatic delineation of veredas is tested using only Radarsat-1 data and a Markov random field region-based segmentation. In this case, to increase performance, processing is limited to a buffer zone around the river network. Then, characterization of their type is attempted using traditional classification methods applied to ASTER optical data combined with Radarsat-1 data. The automatic classification of Radarsat data yielded results with an overall accuracy between 62% and 69%, which proved reliable enough for delineating wide and very humid veredas. Scenes from the wet season and with a smaller angle of incidence systematically yielded better results. For the classification of the main vegetation types, better results (overall accuracy of 78.8%) were obtained by using only the visible and near infrared (VNIR) bands of the ASTER image. Radarsat data did not bring any improvement to these classification results. In fact, when using solely the Radarsat data from two different angles of incidence and two different dates, the classification accuracy was low (50.8%), but the data remained powerful for delineating the permanently moist riparian forest portion of the veredas with an accuracy better than 75% in most cases. These results are considered good given that some vereda types are often less than 50 m wide compared with the resolution of the images (12.5-15 m). Comparing the classification results with the Radarsat-generated delineation allows an understanding of the relation between synthetic aperture radar (SAR) backscattering and the vegetation types of the veredas.
ABSTRACT
Recent computational and analytical studies have shown that cellular fabric, as embodied by average cell size, aspect ratio and orientation, is a key indicator of the stresses acting in an embryonic epithelium. Cellular fabric in real embryonic tissues could not previously be measured automatically because the cell boundaries tend to be poorly defined, significant lighting and cell pigmentation differences occur, and tissues contain a variety of cell geometries. To overcome these difficulties, four algorithms were developed: least squares ellipse fitting (LSEF), area moments (AM), correlation and axes search (CAS) and Gabor filters (GF). The AM method was found to be the most reliable of these methods, giving typical cell size, aspect ratio and orientation errors of 18%, 0.10 and 7.4 degrees, respectively, when evaluated against manually segmented images. The power of the AM algorithm to provide new insights into the mechanics of morphogenesis is demonstrated through a brief investigation of gastrulation, where fabric data suggest that key gastrulation movements are driven by epidermal tensions circumferential to the blastopore.
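For readers unfamiliar with area-moment fabric measures, the sketch below computes cell size, aspect ratio, and orientation for a single segmented cell from the second central moments of its binary mask. It is a minimal Python illustration of the general moment-based idea, not the paper's AM implementation; the synthetic ellipse and its dimensions are arbitrary.

```python
# Minimal sketch of moment-based fabric measures for one segmented cell;
# the synthetic ellipse is arbitrary and the code is not the paper's AM method.
import numpy as np

def cell_fabric(mask):
    ys, xs = np.nonzero(mask)
    area = xs.size
    xc, yc = xs.mean(), ys.mean()
    mxx = np.mean((xs - xc) ** 2)               # second central moments
    myy = np.mean((ys - yc) ** 2)
    mxy = np.mean((xs - xc) * (ys - yc))
    # Principal axes of the moment (covariance) matrix.
    rad = np.sqrt((0.5 * (mxx - myy)) ** 2 + mxy ** 2)
    lam_major = 0.5 * (mxx + myy) + rad
    lam_minor = 0.5 * (mxx + myy) - rad
    aspect_ratio = np.sqrt(lam_major / max(lam_minor, 1e-12))
    orientation_deg = 0.5 * np.degrees(np.arctan2(2.0 * mxy, mxx - myy))
    return area, aspect_ratio, orientation_deg

# Usage on a synthetic elongated cell (semi-axes 20 and 8 pixels):
yy, xx = np.mgrid[0:64, 0:64]
ellipse = ((xx - 32) / 20.0) ** 2 + ((yy - 32) / 8.0) ** 2 <= 1.0
print(cell_fabric(ellipse))    # area, aspect ratio ~2.5, orientation ~0 degrees
```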
Subject(s)
Embryonic Development/physiology, Epithelial Cells/physiology, Epithelium/embryology, Biological Models, Computer Simulation, Humans
ABSTRACT
Cardiovascular monitoring is important to prevent diseases from progressing. The jugular venous pulse (JVP) waveform offers important clinical information about cardiac health, but is not routinely examined due to its invasive catheterisation procedure. Here, we demonstrate for the first time that the JVP can be consistently observed in a non-contact manner using a photoplethysmographic imaging system. The observed jugular waveform was strongly negatively correlated to the arterial waveform (r = -0.73 ± 0.17), consistent with ultrasound findings. Pulsatile venous flow was observed over a spatially cohesive region of the neck. Critical inflection points (c, x, v, y waves) of the JVP were observed across all participants. The anatomical locations of the strongest pulsatile venous flow were consistent with major venous pathways identified through ultrasound.
Subject(s)
Blood Pressure Determination/methods, Hemodynamics, Jugular Veins/diagnostic imaging, Adolescent, Adult, Child, Female, Humans, Male, Middle Aged, Neck/blood supply, Neck/diagnostic imaging, Pulse Wave Analysis, Young Adult
ABSTRACT
Research was conducted to develop a methodology to model the emotional content of music as a function of time and musical features. Emotion is quantified using the dimensions valence and arousal, and system-identification techniques are used to create the models. Results demonstrate that system identification provides a means to generalize the emotional content for a genre of music. The average R² statistic of a valid linear model structure is 21.9% for valence and 78.4% for arousal. The proposed method of constructing models of emotional content generalizes previous time-series models and removes ambiguity from classifiers of emotion.
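As a toy illustration of the system-identification setup, the Python sketch below fits a linear model from time-varying feature tracks to an arousal time series by least squares and reports the R² statistic. The synthetic features, weights, and noise level are placeholders; they are not the musical features or model structures used in the study.

```python
# Minimal sketch: least-squares identification of a linear feature-to-arousal
# model and its R^2; all synthetic data and weights are placeholders.
import numpy as np

rng = np.random.default_rng(2)
T = 300
features = rng.standard_normal((T, 4))            # e.g. loudness, tempo, ... (assumed)
true_w = np.array([0.8, 0.3, -0.5, 0.1])
arousal = features @ true_w + 0.2 * rng.standard_normal(T)

# Static linear model for brevity; adding lagged feature columns would turn
# this into an ARX-style dynamic model.
X = np.column_stack([features, np.ones(T)])
w, *_ = np.linalg.lstsq(X, arousal, rcond=None)
pred = X @ w

ss_res = np.sum((arousal - pred) ** 2)
ss_tot = np.sum((arousal - arousal.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 = {100 * r2:.1f}%")
```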
Subject(s)
Artificial Intelligence, Emotions/physiology, Psychological Models, Music/psychology, Automated Pattern Recognition/methods, Psychometrics/methods, Computer Simulation, Humans
ABSTRACT
Photoplethysmographic imaging is an optical solution for non-contact cardiovascular monitoring from a distance. This camera-based technology enables physiological monitoring in situations where contact-based devices may be problematic or infeasible, such as ambulatory, sleep, and multi-individual monitoring. However, automatically extracting the blood pulse waveform signal is challenging due to the unknown mixture of relevant (pulsatile) and irrelevant pixels in the scene. Here, we propose a signal fusion framework, FusionPPG, for extracting a blood pulse waveform signal with strong temporal fidelity from a scene without requiring anatomical priors. The extraction problem is posed as a Bayesian least squares fusion problem, and solved using a novel probabilistic pulsatility model that incorporates both physiologically derived spectral and spatial waveform priors to identify pulsatility characteristics in the scene. Evaluation was performed on a 24-participant sample with various ages (9-60 years) and body compositions (fat% 30.0 ± 7.9, muscle% 40.4 ± 5.3, BMI 25.5 ± 5.2 kg·m⁻²). Experimental results show stronger matching to the ground-truth blood pulse waveform signal compared to the FaceMeanPPG (p < 0.001) and DistancePPG (p < 0.001) methods. Heart rates predicted using FusionPPG correlated strongly with ground truth measurements (r² = 0.9952). A cardiac arrhythmia was visually identified in FusionPPG's waveform via temporal analysis.
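The following Python sketch conveys the flavour of prior-weighted signal fusion: each pixel's temporal signal receives an importance weight from a simple spectral pulsatility prior (the fraction of its power inside a plausible heart-rate band), and the waveform estimate is the weighted combination. The band limits, frame rate, and synthetic video are assumptions for illustration; FusionPPG's actual Bayesian least squares formulation and probabilistic pulsatility model are considerably richer.

```python
# Minimal sketch of pulsatility-prior-weighted fusion; the heart-rate band,
# frame rate and synthetic video are assumptions, and the weighting is far
# simpler than FusionPPG's Bayesian least squares formulation.
import numpy as np

def fuse_pulse(frames, fps=30.0, band=(0.8, 3.0)):
    T, H, W = frames.shape
    signals = frames.reshape(T, H * W).astype(float)
    signals = signals - signals.mean(axis=0)
    spectrum = np.abs(np.fft.rfft(signals, axis=0)) ** 2
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    # Per-pixel importance weight: fraction of power in the heart-rate band.
    weights = spectrum[in_band].sum(axis=0) / (spectrum.sum(axis=0) + 1e-12)
    weights = weights / weights.sum()
    return signals @ weights               # fused blood pulse waveform estimate

# Usage on synthetic frames containing a 1.2 Hz pulsatile patch:
rng = np.random.default_rng(3)
t = np.arange(300) / 30.0
vid = 0.5 * rng.standard_normal((300, 20, 20))
vid[:, 5:10, 5:10] += np.sin(2 * np.pi * 1.2 * t)[:, None, None]
pulse = fuse_pulse(vid)
```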
ABSTRACT
Photoplethysmographic imaging (PPGI) is a widefield noncontact biophotonic technology able to remotely monitor cardiovascular function over anatomical areas. Although spatial context can provide insight into physiologically relevant sampling locations, existing PPGI systems rely on coarse spatial averaging with no anatomical priors for assessing arterial pulsatility. Here, we developed a continuous probabilistic pulsatility model for importance-weighted blood pulse waveform extraction. Using a data-driven approach, the model was constructed using a 23-participant sample with a large demographic variability (11/12 female/male, age 11 to 60 years, BMI 16.4 to 35.1 kg·m⁻²). Using time-synchronized ground-truth blood pulse waveforms, spatial correlation priors were computed and projected into a co-aligned importance-weighted Cartesian space. A modified Parzen-Rosenblatt kernel density estimation method was used to compute the continuous resolution-agnostic probabilistic pulsatility model. The model identified locations that consistently exhibited pulsatility across the sample. Blood pulse waveform signals extracted with the model exhibited significantly stronger temporal correlation (W = 35, p < 0.01) and spectral SNR (W = 31, p < 0.01) compared to uniform spatial averaging. Heart rate estimation was in strong agreement with true heart rate [r² = 0.9619, error (μ, σ) = (0.52, 1.69) bpm].
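The density-estimation step can be pictured with the short Python sketch below: spatial locations from a training sample, weighted by their correlation with a ground-truth pulse waveform, are turned into a continuous pulsatility density with a Gaussian Parzen-Rosenblatt kernel. The bandwidth, grid, and randomly generated points are illustrative assumptions rather than the model actually fitted in the paper.

```python
# Minimal sketch of a correlation-weighted Parzen-Rosenblatt density over
# normalized coordinates; bandwidth, grid and random data are illustrative.
import numpy as np

def pulsatility_density(points, weights, grid_xy, bandwidth=0.05):
    # points: (N, 2) normalized (x, y) locations from a training sample;
    # weights: (N,) correlation priors against the ground-truth pulse waveform.
    w = np.clip(weights, 0.0, None)
    w = w / w.sum()
    diff = grid_xy[:, None, :] - points[None, :, :]           # (G, N, 2)
    k = np.exp(-0.5 * np.sum(diff ** 2, axis=2) / bandwidth ** 2)
    return (k * w[None, :]).sum(axis=1) / (2.0 * np.pi * bandwidth ** 2)

# Usage: evaluate the density on a 50x50 grid of normalized coordinates.
rng = np.random.default_rng(4)
pts = rng.uniform(0.0, 1.0, size=(200, 2))
corr = np.abs(rng.normal(0.3, 0.2, size=200))
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
density = pulsatility_density(pts, corr, grid).reshape(50, 50)
```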
Subject(s)
Heart Rate/physiology, Statistical Models, Optical Imaging/methods, Photoplethysmography/methods, Computer-Assisted Signal Processing, Adolescent, Adult, Algorithms, Child, Female, Humans, Male, Middle Aged, Signal-to-Noise Ratio, Young Adult
ABSTRACT
In medical image analysis, registration of multimodal images has been challenging due to the complex intensity relationship between the images. Classical multi-modal registration approaches evaluate the degree of alignment by measuring the statistical dependency of the intensity values between the images to be aligned. Employing statistical similarity measures, such as mutual information, is not promising in cases with complex and spatially dependent intensity relations. A new similarity measure is proposed based on assessing the similarity of pixels within an image, following the idea that similar structures in an image are more likely to undergo similar intensity transformations. The most significant pixel similarity values are considered to transmit the most significant self-similarity information. The proposed measure is employed in a framework to register different modalities of real brain scans, and its performance is compared to that of the conventional multi-modal registration approach. Quantitative evaluation demonstrates improved registration accuracy for both rigid and non-rigid deformations.
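A rough Python sketch of the self-similarity idea follows: sample pixel pairs, measure how alike each pair is within each image, and score an alignment by how well the most significant self-similarity values in one image are mirrored in the other. The pair sampling, Gaussian similarity kernel, and 90th-percentile selection are illustrative assumptions, not the measure defined in the paper.

```python
# Minimal sketch of a self-similarity score between two aligned images;
# pair sampling, the Gaussian kernel and the percentile cut are assumptions.
import numpy as np

def self_similarity_score(img_a, img_b, n_pairs=5000, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    H, W = img_a.shape
    p = rng.integers(0, [H, W], size=(n_pairs, 2))
    q = rng.integers(0, [H, W], size=(n_pairs, 2))
    sim_a = np.exp(-((img_a[p[:, 0], p[:, 1]] - img_a[q[:, 0], q[:, 1]]) ** 2) / sigma ** 2)
    sim_b = np.exp(-((img_b[p[:, 0], p[:, 1]] - img_b[q[:, 0], q[:, 1]]) ** 2) / sigma ** 2)
    # Keep only the most significant self-similarity values of the reference image.
    keep = sim_a > np.quantile(sim_a, 0.9)
    return float(np.corrcoef(sim_a[keep], sim_b[keep])[0, 1])

# Usage: two "modalities" of the same structures related by a nonlinear mapping.
rng = np.random.default_rng(9)
t1 = rng.random((64, 64))
t2 = np.sqrt(t1) + 0.05 * rng.random((64, 64))
print(self_similarity_score(t1, t2))
```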
Subject(s)
Brain/diagnostic imaging, Computer-Assisted Image Interpretation, Multimodal Imaging, Humans
ABSTRACT
The simultaneous capture of imaging data at multiple wavelengths across the electromagnetic spectrum is highly challenging, requiring complex and costly multispectral image devices. In this study, we investigate the feasibility of simultaneous multispectral imaging using conventional image sensors with color filter arrays via a novel comprehensive framework for numerical demultiplexing of the color image sensor measurements. A numerical forward model characterizing the formation of sensor measurements from light spectra hitting the sensor is constructed based on a comprehensive spectral characterization of the sensor. A numerical demultiplexer is then learned via non-linear random forest modeling based on the forward model. Given the learned numerical demultiplexer, one can then demultiplex simultaneously-acquired measurements made by the color image sensor into reflectance intensities at discrete selectable wavelengths, resulting in a higher resolution reflectance spectrum. Experimental results demonstrate the feasibility of such a method for the purpose of simultaneous multispectral imaging.
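The forward-model-then-learn idea can be sketched in a few lines of Python with scikit-learn: simulate sensor measurements from random spectra through an assumed spectral sensitivity matrix, then train a random forest to map measurements back to per-wavelength reflectances. The sensitivity matrix, band count, and noise level here are placeholders; the paper builds its forward model from a comprehensive spectral characterization of a real sensor.

```python
# Minimal sketch with an assumed sensitivity matrix; not the paper's calibrated
# forward model or its trained demultiplexer.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n_bands = 16                                    # discrete selectable wavelengths
S = np.abs(rng.standard_normal((3, n_bands)))   # placeholder RGB sensitivities

# 1. Forward model: simulate training pairs (sensor measurement, spectrum).
spectra = rng.uniform(0.0, 1.0, size=(5000, n_bands))
rgb = spectra @ S.T + 0.01 * rng.standard_normal((5000, 3))   # with sensor noise

# 2. Learn the numerical demultiplexer (nonlinear multi-output regression).
demux = RandomForestRegressor(n_estimators=50, random_state=0)
demux.fit(rgb, spectra)

# 3. Demultiplex a new measurement into per-wavelength reflectance estimates.
test_spectrum = rng.uniform(0.0, 1.0, size=(1, n_bands))
estimated = demux.predict(test_spectrum @ S.T)
```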
Subject(s)
Color, Computer-Assisted Image Processing, Theoretical Models
ABSTRACT
A design-based method to fuse Gabor filter and grey level co-occurrence probability (GLCP) features for improved texture recognition is presented. The fused feature set exploits both the Gabor filter's ability to accurately capture lower and mid-frequency texture information and the GLCP's ability to capture texture information relevant to higher frequency components. Evaluation methods include comparing feature space separability and comparing image segmentation classification rates. The fused feature sets are demonstrated to produce higher feature space separations, as well as higher segmentation accuracies, relative to the individual feature sets. Fused feature sets also outperform individual feature sets for noisy images, across different noise magnitudes. The curse of dimensionality is demonstrated not to affect segmentation using the proposed 48-dimensional fused feature set. Gabor magnitude responses produce higher segmentation accuracies than linearly normalized Gabor magnitude responses. Feature reduction using principal component analysis is acceptable for maintaining segmentation performance, but feature reduction using the feature contrast method dramatically reduces segmentation accuracy. Overall, the designed fused feature set is advocated as a means of improving texture segmentation performance.
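A compact illustration of such feature fusion is given below using scikit-image (graycomatrix/graycoprops and the gabor filter, available in skimage 0.19+): Gabor magnitude statistics and GLCP statistics are computed for one patch and concatenated into a single feature vector. The filter bank frequencies and orientations, the 32-level quantization, and the chosen co-occurrence statistics are illustrative choices, not the 48-dimensional set designed in the paper.

```python
# Minimal sketch of Gabor + GLCP feature fusion for one patch (scikit-image
# 0.19+ assumed); filter bank, quantization and statistics are illustrative.
import numpy as np
from skimage.filters import gabor
from skimage.feature import graycomatrix, graycoprops

def fused_texture_features(patch):
    feats = []
    # Gabor magnitude responses over a small filter bank.
    for freq in (0.1, 0.2, 0.3):
        for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, imag = gabor(patch, frequency=freq, theta=theta)
            feats.append(np.sqrt(real ** 2 + imag ** 2).mean())
    # GLCP statistics from a 32-level quantization of the same patch.
    q = np.uint8(np.round((patch - patch.min()) / (np.ptp(patch) + 1e-12) * 31))
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=32, symmetric=True, normed=True)
    for prop in ("contrast", "correlation", "energy", "homogeneity"):
        feats.extend(graycoprops(glcm, prop).ravel())
    return np.asarray(feats)

# Usage on a synthetic textured patch:
rng = np.random.default_rng(6)
patch = rng.random((64, 64))
print(fused_texture_features(patch).shape)   # 12 Gabor + 8 GLCP statistics
```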
Subject(s)
Algorithms, Artificial Intelligence, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Information Storage and Retrieval/methods, Automated Pattern Recognition/methods, Subtraction Technique, Computer Simulation, Statistical Models
ABSTRACT
We propose a simple yet effective structure-guided statistical textural distinctiveness approach to salient region detection. Our method uses a multilayer approach to analyze the structural and textural characteristics of natural images as important features for salient region detection from a scale point of view. To represent the structural characteristics, we abstract the image using structured image elements and extract rotational-invariant neighborhood-based textural representations to characterize each element by an individual texture pattern. We then learn a set of representative texture atoms for sparse texture modeling and construct a statistical textural distinctiveness matrix to determine the distinctiveness between all representative texture atom pairs in each layer. Finally, we determine saliency maps for each layer based on the occurrence probability of the texture atoms and their respective statistical textural distinctiveness and fuse them to compute a final saliency map. Experimental results using four public data sets and a variety of performance evaluation metrics show that our approach provides promising results when compared with existing salient region detection approaches.
ABSTRACT
A set of high-level intuitive features (HLIFs) is proposed to quantitatively describe melanoma in standard camera images. Melanoma is the deadliest form of skin cancer. With rising incidence rates and subjectivity in current clinical detection methods, there is a need for melanoma decision support systems. Feature extraction is a critical step in such systems. Existing feature sets for analyzing standard camera images are composed of low-level features, which exist in high-dimensional feature spaces and limit the system's ability to convey intuitive diagnostic rationale. The proposed HLIFs were designed to model the ABCD criteria commonly used by dermatologists, such that each HLIF represents a human-observable characteristic and intuitive diagnostic rationale can be conveyed to the user. Experimental results show that concatenating the proposed HLIFs with a full low-level feature set increased classification accuracy, and that HLIFs were able to separate the data better than low-level features with statistical significance. An example of a graphical interface for providing intuitive rationale is given.
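In the spirit of the ABCD criteria, the Python sketch below computes two intuitive, human-interpretable lesion features from a binary lesion mask: a reflection-based asymmetry score and a compactness-style border irregularity score. Both definitions are simplified illustrations chosen here for brevity, not the HLIFs proposed in the paper.

```python
# Minimal sketch of two illustrative ABCD-style lesion features from a binary
# mask; the definitions are simplified and are not the paper's HLIFs.
import numpy as np

def asymmetry_score(mask):
    _, xs = np.nonzero(mask)
    xc = int(round(xs.mean()))
    # Reflect about the vertical axis through the centroid and measure the
    # non-overlapping area fraction.
    centred = np.roll(mask, mask.shape[1] // 2 - xc, axis=1)
    mirrored = centred[:, ::-1]
    return np.logical_xor(centred, mirrored).sum() / (2.0 * mask.sum())

def border_irregularity(mask):
    # Compactness-style measure, perimeter^2 / (4*pi*area); grows as the
    # boundary becomes more irregular.
    m = mask.astype(bool)
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    perim = sum(np.count_nonzero(m & ~np.roll(m, s, axis=(0, 1))) for s in shifts)
    return perim ** 2 / (4.0 * np.pi * m.sum())

# Usage on a synthetic, mildly asymmetric lesion:
yy, xx = np.mgrid[0:128, 0:128]
lesion = ((xx - 60) ** 2 + (yy - 64) ** 2 <= 30 ** 2) | \
         ((xx - 85) ** 2 + (yy - 64) ** 2 <= 15 ** 2)
print(asymmetry_score(lesion), border_irregularity(lesion))
```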
Subject(s)
Dermoscopy/methods, Computer-Assisted Image Interpretation/methods, Melanoma/diagnosis, Skin Neoplasms/diagnosis, Humans
ABSTRACT
Photoplethysmography (PPG) devices are widely used for monitoring cardiovascular function. However, these devices require skin contact, which restricts their use to at-rest, short-term monitoring. Photoplethysmographic imaging (PPGI) has recently been proposed as a non-contact monitoring alternative that measures blood pulse signals across a spatial region of interest. Existing systems operate in reflectance mode, and many are limited to short-distance monitoring and are prone to temporal changes in ambient illumination. This paper is the first study to investigate the feasibility of long-distance non-contact cardiovascular monitoring at the supermeter level using transmittance PPGI. For this purpose, a novel PPGI system was designed at the hardware and software levels. Temporally coded illumination (TCI) is proposed for ambient correction, and a signal processing pipeline is proposed for PPGI signal extraction. Experimental results show that the processing steps yielded a substantially more pulsatile PPGI signal than the raw acquired signal, resulting in statistically significant increases in correlation to ground-truth PPG in both short- and long-distance monitoring. The results support the hypothesis that long-distance heart rate monitoring is feasible using transmittance PPGI, allowing for new possibilities of monitoring cardiovascular function in a non-contact manner.
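The essence of ambient correction with coded illumination can be sketched in a few lines of Python: if frames are captured with the active source alternately on and off, subtracting each ambient-only frame from its neighbouring illuminated frame removes the slowly varying ambient component. The alternating on/off code, frame count, and synthetic video below are assumptions for illustration; the paper's TCI scheme and processing pipeline are more involved.

```python
# Minimal sketch of ambient correction with an assumed alternating on/off
# illumination code; frame counts and the synthetic video are placeholders.
import numpy as np

def tci_correct(frames):
    # frames: (T, H, W); even indices illuminated, odd indices ambient-only (assumed code).
    lit = frames[0::2].astype(float)
    ambient = frames[1::2].astype(float)
    n = min(len(lit), len(ambient))
    return lit[:n] - ambient[:n]            # ambient-corrected frames

# The spatially averaged corrected frames then form the PPGI signal.
rng = np.random.default_rng(7)
video = rng.integers(0, 255, size=(600, 32, 32))
ppgi_signal = tci_correct(video).mean(axis=(1, 2))
```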
Subject(s)
Diagnostic Imaging, Heart Rate/physiology, Physiological Monitoring, Photoplethysmography/methods, Adult, Feasibility Studies, Female, Humans, Lighting, Male, Computer-Assisted Signal Processing
ABSTRACT
Features based on Markov random field (MRF) models are sensitive to texture rotation. This paper develops an anisotropic circular Gaussian MRF (ACGMRF) model for retrieving rotation-invariant texture features. To overcome the singularity problem of the least squares estimation method, an approximate least squares estimation method is designed and implemented. Rotation-invariant features are obtained from the ACGMRF model parameters using the discrete Fourier transform. The ACGMRF model is demonstrated to be a statistical improvement over three published methods: the Laplacian pyramid, the isotropic circular GMRF (ICGMRF), and gray level co-occurrence probability features.
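The rotation-invariance step can be illustrated with the Python sketch below: one parameter is estimated per neighbourhood direction with an ordinary least squares fit, and the DFT magnitude of that parameter vector is unchanged by circular shifts of the directions, i.e. by texture rotations that permute them. The crude parameter estimation, neighbourhood radius, and eight directions are simplifying assumptions and do not reproduce the ACGMRF estimator in the paper.

```python
# Minimal sketch: one parameter per neighbourhood direction via ordinary least
# squares, then DFT magnitudes as rotation-insensitive features; the estimator
# here is a crude simplification of the ACGMRF approach.
import numpy as np

def directional_params(img, radius=2, n_dirs=8):
    H, W = img.shape
    angles = np.arange(n_dirs) * (2.0 * np.pi / n_dirs)
    offs = np.round(np.column_stack([radius * np.sin(angles),
                                     radius * np.cos(angles)])).astype(int)
    r = radius
    center = img[r:H - r, r:W - r].ravel()
    # Design matrix: each column is the image shifted along one direction.
    A = np.column_stack([img[r + dy:H - r + dy, r + dx:W - r + dx].ravel()
                         for dy, dx in offs])
    theta, *_ = np.linalg.lstsq(A, center, rcond=None)
    return theta

def rotation_invariant_features(img):
    theta = directional_params(img - img.mean())
    # DFT magnitude is unchanged by circular shifts of the direction index.
    return np.abs(np.fft.fft(theta))

rng = np.random.default_rng(8)
print(rotation_invariant_features(rng.random((64, 64))))
```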