ABSTRACT
Deep artificial neural network learning is an emerging tool in image analysis. We demonstrate its potential in the field of digital holographic microscopy by addressing the challenging problem of determining the in-focus reconstruction depth of Madin-Darby canine kidney cell clusters encoded in digital holograms. A deep convolutional neural network learns the in-focus depths from half a million hologram amplitude images. The trained network correctly determines the in-focus depth of new holograms with high probability, without performing numerical propagation. This paper reports on extensions to preliminary work published earlier as one of the first applications of deep learning in the field of digital holographic microscopy.
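The abstract does not specify the network architecture, so the following is only a minimal sketch of the kind of model such a task might use: a small convolutional classifier that maps a hologram amplitude image to one of a discrete set of reconstruction depths. The layer sizes, number of depth classes, and input resolution are illustrative assumptions, written here in PyTorch.

```python
# Minimal sketch (not the authors' architecture): a CNN that classifies a
# hologram amplitude image into one of N discrete reconstruction depths.
# All layer sizes, the number of depth classes, and the input resolution
# are illustrative assumptions.
import torch
import torch.nn as nn

class DepthClassifier(nn.Module):
    def __init__(self, num_depth_classes=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(64 * 4 * 4, num_depth_classes)

    def forward(self, x):          # x: (batch, 1, H, W) amplitude images
        f = self.features(x)
        return self.classifier(f.flatten(1))   # logits over depth classes

model = DepthClassifier()
amplitude = torch.rand(8, 1, 128, 128)         # dummy batch of amplitude images
logits = model(amplitude)
predicted_depth_class = logits.argmax(dim=1)   # in-focus depth class, no propagation
```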
ABSTRACT
Water-related diseases affect societies in all parts of the world. Online sensors are considered a solution to the problems associated with laboratory testing in potable water. One of the most active research areas of such online sensors has been within optics. Digital holographic microscopy (DHM) has the potential to rival state-of-the-art techniques such as advanced turbidity measurement. However, its use as an online sensor is limited by the large data requirements typical for digital holographic video. In this paper, we provide a solution that permits DHM to be applied to a whole class of online remote sensor networks, of which potable water analysis is one example. The designed sensors incorporate a novel space-variant quantization algorithm to preprocess each frame of a video sequence before transmission over a network. The system satisfies the generally accepted requirements of an online system: automated, near real-time, and operating in a real environment. To verify the effectiveness of the design, we implemented and evaluated it in an active potable water facility.
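The space-variant quantization algorithm itself is not described in the abstract. Purely to illustrate the general idea of allocating fewer bits to less informative regions of a frame before transmission, here is a hypothetical block-wise quantizer; the block size, the per-block activity measure, and the bit-allocation rule are assumptions, not the published method.

```python
# Hypothetical illustration of space-variant quantization: each block of a
# hologram frame is quantized with a bit depth chosen from its local activity
# (here, per-block standard deviation). This is NOT the published algorithm;
# block size, activity measure, and bit-allocation rule are assumptions.
import numpy as np

def space_variant_quantize(frame, block=32, low_bits=2, high_bits=6, thresh=None):
    out = np.empty_like(frame, dtype=np.float64)
    h, w = frame.shape
    stds = []
    # First pass: measure local activity.
    for i in range(0, h, block):
        for j in range(0, w, block):
            stds.append(frame[i:i+block, j:j+block].std())
    if thresh is None:
        thresh = np.median(stds)               # split blocks at the median activity
    # Second pass: quantize each block with few or many levels.
    for i in range(0, h, block):
        for j in range(0, w, block):
            blk = frame[i:i+block, j:j+block]
            bits = high_bits if blk.std() >= thresh else low_bits
            levels = 2 ** bits
            lo, hi = blk.min(), blk.max()
            scale = (hi - lo) / (levels - 1) if hi > lo else 1.0
            out[i:i+block, j:j+block] = np.round((blk - lo) / scale) * scale + lo
    return out

frame = np.random.rand(256, 256)               # stand-in for one video frame
compressed_frame = space_variant_quantize(frame)
```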
Subjects
Drinking Water/chemistry, Holography/methods, Microscopy/methods, Algorithms, Data Compression, Equipment Design, Computer-Assisted Signal Processing/instrumentation
ABSTRACT
We investigated how the perception of three-dimensional information, reconstructed numerically from digital holograms of real-world objects and presented on conventional displays, depends on motion and stereoscopic presentation. Perceived depth in an adjustable random pattern stereogram was matched to the depth in hologram reconstructions. The objects in the holograms were a microscopic biological cell and a macroscopic metal coil. For control, we used real physical objects in addition to hologram reconstructions of real objects. Stereoscopic presentation increased perceived depth substantially in comparison to non-stereoscopic presentation. When stereoscopic cues were weak or absent, e.g., because of blur, motion increased perceived depth considerably. However, when stereoscopic cues were strong, the effect of motion was small. In conclusion, to maximize the perceived three-dimensional information of holograms on conventional displays, it appears highly beneficial to combine motion with stereoscopic presentation.
Subjects
Depth Perception/physiology, Holography/instrumentation, Holography/methods, Image Enhancement/methods, Three-Dimensional Imaging/methods, Motion Perception/physiology, Fourier Analysis, Humans, K562 Cells
ABSTRACT
Depth extraction is an important aspect of three-dimensional (3D) image processing with digital holograms and an essential step in extended focus imaging and metrology. All available depth extraction techniques for macroscopic objects are based on variance, but the effectiveness of this approach is object dependent. We propose instead to determine depth from the disparity between corresponding points in intensity reconstructions. Our method requires a single hologram of a scene, from which we reconstruct two different perspectives. The phase information is not needed in the reconstructions, which makes the method useful for in-line digital holography. To our knowledge, disparity-based 3D image processing has not previously been proposed for digital holography.
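As a hedged sketch of the disparity step only (the two perspective reconstructions are assumed to be given, and the window size, search range, and disparity-to-depth conversion are illustrative assumptions), a simple block-matching estimator could look like this:

```python
# Minimal sketch of disparity estimation between two intensity reconstructions
# of the same hologram viewed from two perspectives. Block matching by sum of
# squared differences; window size and search range are illustrative choices.
import numpy as np

def disparity_map(left, right, window=9, max_disp=16):
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1]
            errors = [
                np.sum((patch - right[y-half:y+half+1, x-d-half:x-d+half+1]) ** 2)
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(errors))
    return disp

left = np.random.rand(64, 64)
right = np.roll(left, -3, axis=1)        # synthetic 3-pixel horizontal disparity
estimated = disparity_map(left, right)
# Depth is then inferred from disparity; the exact mapping depends on the
# geometry of the two reconstructed perspectives (baseline and scaling).
```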
ABSTRACT
A 3D scene is synthesized by combining multiple optically recorded digital holograms of different objects. The novel idea is to composite moving 3D objects into a dynamic 3D scene using a process analogous to stop-motion video. In this case, however, the movie has the exciting attribute that it can be displayed and observed in 3D. We show that 3D dynamic scenes can be projected as an alternative to the complicated and computationally heavy generation of realistic-looking computer-generated holograms. The key tool for creating the dynamic action is a spatially adaptive transformation of digital holograms of real-world objects that allows full control over the object's position and size in a 3D volume with very high depth of focus. A pilot experiment was performed to evaluate how viewers perceive depth in a conventional single-view display of the dynamic 3D scene.
ABSTRACT
A method to measure the size, orientation, and location of opaque micro-fibers using digital holography is presented. The method involves recording a digital hologram followed by reconstruction at different depths. A novel combination of automated image analysis and statistical techniques, applied to the intensity of the reconstructed digital holograms, is used to accurately determine the characteristics of the micro-fibers. The performance of the proposed method is verified with a single fiber of known length and orientation. The potential of the method for measuring fiber length is further demonstrated through its application to a suspension of fibers in a liquid medium.
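As a simplified illustration of how orientation and length might be extracted from a single reconstructed intensity image, the sketch below thresholds the image and fits the principal axis of the segmented pixels; the threshold rule, the single straight-fiber assumption, and the pixel_pitch parameter are illustrative and do not reproduce the published combination of image analysis and statistical techniques.

```python
# Illustrative sketch only: estimate the in-plane orientation and length of a
# single dark (opaque) fiber from one reconstructed intensity image, by
# thresholding and fitting the principal axis of the segmented pixels.
# The threshold rule and the single straight-fiber assumption are
# simplifications, not the published pipeline.
import numpy as np

def fiber_orientation_and_length(intensity, pixel_pitch):
    # pixel_pitch: lateral sampling interval in the reconstruction plane.
    mask = intensity < intensity.mean() - 2 * intensity.std()   # opaque fiber is dark
    ys, xs = np.nonzero(mask)
    coords = np.column_stack([xs, ys]).astype(float)
    coords -= coords.mean(axis=0)
    # Principal axis of the segmented pixel cloud gives the fiber orientation.
    eigvals, eigvecs = np.linalg.eigh(np.cov(coords.T))
    axis = eigvecs[:, np.argmax(eigvals)]
    angle = np.degrees(np.arctan2(axis[1], axis[0]))
    # Length: extent of the projection onto the principal axis.
    proj = coords @ axis
    length = (proj.max() - proj.min()) * pixel_pitch
    return angle, length
```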
Subjects
Algorithms, Holography/methods, Computer-Assisted Image Interpretation/methods, Three-Dimensional Imaging/methods, Automated Pattern Recognition/methods, Image Enhancement/methods, Microspheres, Reproducibility of Results, Sensitivity and Specificity
ABSTRACT
With the number of fully sequenced genomes increasing steadily, there is greater interest in performing large-scale phylogenomic analyses from large numbers of individual gene families. Maximum likelihood (ML) has repeatedly been shown to be one of the most accurate methods for phylogenetic reconstruction. Recently, there have been a number of algorithmic improvements in maximum-likelihood-based tree search methods. However, it can still take a long time to analyse the evolutionary history of many gene families using a single computer. Distributed computing refers to a method of combining the computing power of multiple computers in order to perform some larger overall calculation. In this article, we present the first high-throughput implementation of a distributed phylogenetics platform, MultiPhyl, capable of using the idle computational resources of many heterogeneous non-dedicated machines to form a phylogenetics supercomputer. MultiPhyl allows a user to upload hundreds or thousands of amino acid or nucleotide alignments simultaneously and to perform computationally intensive tasks such as model selection, tree searching, and bootstrapping for each of the alignments using many desktop machines. The program implements a set of 88 amino acid and 56 nucleotide maximum likelihood models and a variety of statistical methods for choosing between alternative models. A MultiPhyl webserver is available for public use at: http://www.cs.nuim.ie/distributed/multiphyl.php.
Subjects
Computational Biology/methods, Genomics/methods, Phylogeny, Algorithms, Animals, Computer Simulation, Computers, Computing Methodologies, Genetic Databases, Humans, Internet, Likelihood Functions, Sequence Alignment, Software, User-Computer Interface
ABSTRACT
We present a parallel implementation of the Fresnel transform suitable for reconstructing large digital holograms. Our method has a small memory footprint and utilizes the spare resources of a distributed set of desktop PCs connected by a network. We show how we parallelize the Fresnel transform and discuss how it is constrained by computer and communication resources. Finally, we demonstrate how a 4.3 gigapixel digital hologram can be reconstructed and how the efficiency of the method changes for different memory and processor configurations.
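For reference, a conventional single-FFT discrete Fresnel reconstruction, of the kind being parallelized here, can be sketched as follows; the wavelength, pixel pitch, and distance are illustrative values, and the distributed, low-memory tiling described in the paper is not reproduced.

```python
# Sketch of a standard single-FFT discrete Fresnel reconstruction of a digital
# hologram. Wavelength, pixel pitch and distance are illustrative values; the
# distributed, low-memory evaluation described in the paper is not shown.
import numpy as np

def fresnel_reconstruct(hologram, wavelength, pitch, z):
    ny, nx = hologram.shape
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    chirp = np.exp(1j * np.pi / (wavelength * z) * (X**2 + Y**2))
    # Constant prefactor and output-plane quadratic phase factor are omitted;
    # they do not affect the reconstructed intensity.
    field = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(hologram * chirp)))
    return field                      # complex field in the reconstruction plane

hologram = np.random.rand(1024, 1024)                 # stand-in for hologram data
recon = fresnel_reconstruct(hologram, 632.8e-9, 7.4e-6, 0.35)
intensity = np.abs(recon) ** 2
```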
Subjects
Algorithms, Holography/methods, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Three-Dimensional Imaging/methods, Information Storage and Retrieval/methods, Computer-Assisted Signal Processing, Reproducibility of Results, Sensitivity and Specificity
ABSTRACT
We perform a numerical analysis of the double random phase encryption-decryption technique to determine how, in the case of both amplitude and phase encoding, the two decryption keys (the image- and Fourier-plane keys) affect the output gray-scale image when they are in error. We perform perfect encryption and imperfect decryption. We introduce errors into the decrypting keys that correspond to the use of random distributions of incorrect pixel values. We quantify the effects that increasing amounts of error in the image-plane key, the Fourier-plane key, and both keys simultaneously have on the decrypted image. Quantization effects are also examined.
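To make the setting concrete, the double random phase encoding operations and the kind of key corruption examined here can be sketched in a few lines; the image size, the fraction of corrupted key pixels, and the particular error metric at the end are illustrative assumptions.

```python
# Sketch of double random phase encoding (DRPE) with a deliberately corrupted
# Fourier-plane key, to illustrate the kind of key-error experiment described.
# Image size and the fraction of corrupted key pixels are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
N = 64
image = rng.random((N, N))                              # amplitude-encoded input

image_key = np.exp(2j * np.pi * rng.random((N, N)))     # image-plane phase key
fourier_key = np.exp(2j * np.pi * rng.random((N, N)))   # Fourier-plane phase key

# Encryption: multiply by the image-plane key, go to the Fourier plane,
# multiply by the Fourier-plane key, return to the image plane.
cipher = np.fft.ifft2(np.fft.fft2(image * image_key) * fourier_key)

# Corrupt a fraction of the Fourier-plane key pixels with random phases.
bad_key = fourier_key.copy()
mask = rng.random((N, N)) < 0.25                        # 25% of key pixels in error
bad_key[mask] = np.exp(2j * np.pi * rng.random(mask.sum()))

# Decryption with the (partly wrong) key; for amplitude encoding, taking the
# modulus removes the image-plane key.
decrypted = np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(bad_key)))

# Intensity-based NRMS error (one common definition).
nrms = np.sqrt(np.sum((decrypted**2 - image**2) ** 2) / np.sum(image**4))
```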
ABSTRACT
Several attacks are proposed against the double random phase encryption scheme. These attacks are demonstrated on computer-generated ciphered images. The scheme is shown to be resistant against brute force attacks but susceptible to chosen and known plaintext attacks. In particular, we describe a technique to recover the exact keys with only two known plain images. We compare this technique to other attacks proposed in the literature.
ABSTRACT
We present a novel nonuniform quantization compression technique, histogram quantization, for digital holograms of 3-D real-world objects. We exploit a priori knowledge of the distribution of the values in our data. We compare this technique to another histogram-based approach: a modified version of Max's algorithm that has been adapted in a straightforward manner to complex-valued 2-D signals. We conclude the compression procedure by applying lossless techniques to the quantized data. We demonstrate improvements over previous results obtained by applying uniform and nonuniform quantization techniques to the hologram data.
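The histogram quantization algorithm is not specified in the abstract, so the following is only a loose stand-in for a nonuniform quantizer that exploits the distribution of the data: cell boundaries are placed at quantiles so that each cell holds roughly the same number of samples, and the real and imaginary parts are treated independently. Both choices are assumptions, not the published method.

```python
# Loose illustration (NOT the published algorithm) of a histogram-driven
# nonuniform quantizer: cell boundaries are placed at quantiles of the data so
# that each cell holds roughly the same number of samples, and each sample is
# replaced by the mean of its cell. Real and imaginary parts of the hologram
# are treated independently, which is itself a simplifying assumption.
import numpy as np

def equal_population_quantize(values, n_levels=16):
    edges = np.quantile(values, np.linspace(0.0, 1.0, n_levels + 1))
    idx = np.clip(np.searchsorted(edges, values, side='right') - 1, 0, n_levels - 1)
    centroids = np.array([values[idx == k].mean() for k in range(n_levels)])
    return centroids[idx]

def quantize_hologram(hologram, n_levels=16):
    re = equal_population_quantize(hologram.real.ravel(), n_levels)
    im = equal_population_quantize(hologram.imag.ravel(), n_levels)
    return (re + 1j * im).reshape(hologram.shape)

hologram = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
quantized = quantize_hologram(hologram)      # ready for a lossless coding stage
```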
Subjects
Algorithms, Artifacts, Data Compression/methods, Holography/methods, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Computer-Assisted Signal Processing, Computer Graphics, Statistical Data Interpretation, Computer-Assisted Numerical Analysis
ABSTRACT
BACKGROUND: In recent years, model-based approaches such as maximum likelihood have become the methods of choice for constructing phylogenies. A number of authors have shown the importance of using adequate substitution models in order to produce accurate phylogenies. In the past, many empirical models of amino acid substitution have been derived using a variety of different methods and protein datasets. These matrices are normally used as surrogates, rather than deriving the maximum likelihood model from the dataset being examined. With few exceptions, selection between alternative matrices has been carried out in an ad hoc manner. RESULTS: We start by highlighting the potential dangers of arbitrarily choosing protein models by demonstrating an empirical example in which a single alignment can produce two topologically different and strongly supported phylogenies using two different arbitrarily chosen amino acid substitution models. We demonstrate that in simple simulations, statistical methods of model selection are indeed robust and likely to be useful for protein model selection. We have investigated patterns of amino acid substitution among homologous sequences from the three domains of life, and our results show that no single amino acid matrix is optimal for any of the datasets. Perhaps most interestingly, we demonstrate that for two large datasets derived from the Proteobacteria and Archaea, one of the most favored models in both datasets is a model that was originally derived from retroviral Pol proteins. CONCLUSION: This demonstrates that choosing protein models based on their source or method of construction may not be appropriate.
Subjects
Amino Acid Substitution/genetics, Computational Biology/methods, Genetic Databases, Molecular Evolution, Phylogeny, Animals, Archaea/chemistry, Archaea/genetics, Likelihood Functions, Markov Chains, Genetic Models, Proteins/chemistry, Proteins/genetics, Proteobacteria/chemistry, Proteobacteria/genetics, Reproducibility of Results, Sequence Alignment, Vertebrates/genetics
ABSTRACT
We present the results of what we believe is the first application of wavelet analysis to the compression of complex-valued digital holograms of three-dimensional real-world objects. We achieve compression through thresholding and quantization of the wavelet coefficients, followed by lossless encoding of the quantized data.
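A minimal sketch of the threshold-and-quantize stage is given below using PyWavelets; the wavelet family, decomposition level, threshold, bit depth, and the choice to process the real and imaginary parts separately are illustrative assumptions, and the final lossless encoding stage is omitted.

```python
# Minimal sketch of wavelet-domain compression of a complex-valued hologram:
# decompose, hard-threshold small detail coefficients, uniformly quantize what
# remains, and reconstruct. Wavelet family, level, threshold and bit depth are
# illustrative assumptions; the lossless coding stage is omitted.
import numpy as np
import pywt

def compress_channel(channel, wavelet='db4', level=3, keep_fraction=0.05, bits=8):
    coeffs = pywt.wavedec2(channel, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    # Hard threshold: keep only the largest-magnitude coefficients.
    thresh = np.quantile(np.abs(arr), 1.0 - keep_fraction)
    arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
    # Uniform quantization of the surviving coefficients.
    span = arr.max() - arr.min()
    step = span / (2**bits - 1) if span > 0 else 1.0
    arr = np.round(arr / step) * step
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format='wavedec2'),
                        wavelet)
    return rec[:channel.shape[0], :channel.shape[1]]

hologram = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
approximation = compress_channel(hologram.real) + 1j * compress_channel(hologram.imag)
```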
ABSTRACT
We apply two novel nonuniform quantization techniques to digital holograms of three-dimensional real-world objects. Our companding approach combines the efficiency of uniform quantization with the improved performance of nonuniform quantization. We show that the performance of companding techniques can be comparable with k-means clustering and a competitive neural network, while requiring only a single-pass processing step. The quantized holographic pixels are coded using lossless techniques for the calculation of compression ratio.
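To illustrate the single-pass companding idea (compress the dynamic range with a nonlinear map, quantize uniformly, then expand), a μ-law style compander applied independently to the real and imaginary parts is sketched below; the μ value, bit depth, and per-component treatment are assumptions rather than the published technique.

```python
# Sketch of companding quantization: nonlinearly compress the dynamic range
# (mu-law style), quantize uniformly, then expand. Single pass, no iterative
# training. The mu value, bit depth, and the decision to treat the real and
# imaginary parts independently are illustrative assumptions.
import numpy as np

def mu_law_compand_quantize(x, mu=255.0, bits=6):
    peak = max(np.max(np.abs(x)), 1e-12)
    xn = x / peak
    compressed = np.sign(xn) * np.log1p(mu * np.abs(xn)) / np.log1p(mu)
    levels = 2**bits - 1
    quantized = np.round((compressed + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0
    expanded = np.sign(quantized) * np.expm1(np.abs(quantized) * np.log1p(mu)) / mu
    return expanded * peak

hologram = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
quantized = (mu_law_compand_quantize(hologram.real)
             + 1j * mu_law_compand_quantize(hologram.imag))
```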
ABSTRACT
The Fourier plane encryption algorithm is subjected to a known-plaintext attack. The simulated annealing heuristic algorithm is used to estimate the key from a known plaintext-ciphertext pair, such that the estimated key decrypts the ciphertext with arbitrarily low error. The strength of the algorithm is tested by using this estimated key to decrypt a different ciphertext that was also encrypted with the same original key. We assume that the plaintext is an amplitude-encoded real-valued image, and we analyze only the mathematical algorithm rather than a real optical system, which can be more secure. The Fourier plane encryption algorithm is found to be susceptible to a known-plaintext heuristic attack.
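A compact, assumption-laden sketch of such a heuristic key search (not the exact procedure of the paper) is given below: the Fourier-plane phase key is represented pixel by pixel, single pixels are perturbed at random, and changes are accepted with a temperature-dependent probability based on the decryption error for the known plaintext-ciphertext pair. The image size, cooling schedule, and iteration count are illustrative.

```python
# Compact illustration (not the paper's exact procedure) of estimating the
# Fourier-plane phase key of DRPE by simulated annealing, given one known
# plaintext-ciphertext pair. Image size, cooling schedule and iteration count
# are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
N = 32
plain = rng.random((N, N))                                   # known plaintext
k1 = np.exp(2j * np.pi * rng.random((N, N)))                 # image-plane key
k2 = np.exp(2j * np.pi * rng.random((N, N)))                 # true Fourier key (unknown to attacker)
cipher = np.fft.ifft2(np.fft.fft2(plain * k1) * k2)          # known ciphertext

def decrypt_error(key_phase):
    key = np.exp(1j * key_phase)
    trial = np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(key)))
    return np.mean((trial - plain) ** 2)

phase = 2 * np.pi * rng.random((N, N))                       # initial key guess
err = decrypt_error(phase)
T = 1.0
for step in range(20000):
    i, j = rng.integers(N, size=2)
    candidate = phase.copy()
    candidate[i, j] = 2 * np.pi * rng.random()               # perturb one key pixel
    new_err = decrypt_error(candidate)
    if new_err < err or rng.random() < np.exp((err - new_err) / T):
        phase, err = candidate, new_err
    T *= 0.9997                                              # geometric cooling
# 'phase' now approximates the Fourier-plane key; it can be tested on a
# different ciphertext encrypted with the same original keys.
```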
ABSTRACT
We present a multi-heuristic evolutionary task allocation algorithm to dynamically map tasks to processors in a heterogeneous distributed system. It utilizes a genetic algorithm, combined with eight common heuristics, in an effort to minimize the total execution time. It operates on batches of unmapped tasks and can preemptively remap tasks to processors. The algorithm has been implemented on a Java distributed system and evaluated with a set of six problems from the areas of bioinformatics, biomedical engineering, computer science and cryptography. Experiments using up to 150 heterogeneous processors show that the algorithm achieves better efficiency than other state-of-the-art heuristic algorithms.
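The genetic operators and the eight heuristics are not detailed in the abstract, so the following is only a schematic: a small genetic algorithm that evolves task-to-processor mappings to minimize the makespan over a matrix of estimated execution times, with one individual seeded by a greedy minimum-completion-time heuristic. The population size, rates, and ETC matrix are illustrative.

```python
# Schematic only (not the published algorithm): a small genetic algorithm that
# evolves task-to-processor mappings to minimize makespan, given a matrix of
# estimated execution times (ETC). One individual is seeded with a greedy
# minimum-completion-time heuristic. Sizes and rates are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_tasks, n_procs = 60, 8
etc = rng.uniform(1.0, 20.0, size=(n_tasks, n_procs))    # ETC[t, p]

def makespan(mapping):
    loads = np.zeros(n_procs)
    for t, p in enumerate(mapping):
        loads[p] += etc[t, p]
    return loads.max()

def greedy_mct():
    loads = np.zeros(n_procs)
    mapping = np.empty(n_tasks, dtype=int)
    for t in range(n_tasks):
        p = int(np.argmin(loads + etc[t]))                # minimum completion time
        mapping[t] = p
        loads[p] += etc[t, p]
    return mapping

pop = [rng.integers(n_procs, size=n_tasks) for _ in range(29)] + [greedy_mct()]
for gen in range(200):
    pop.sort(key=makespan)
    survivors = pop[:10]                                  # elitist selection
    children = []
    while len(children) < 20:
        a, b = rng.choice(10, size=2, replace=False)
        cut = rng.integers(1, n_tasks)                    # one-point crossover
        child = np.concatenate([survivors[a][:cut], survivors[b][cut:]])
        mut = rng.random(n_tasks) < 0.02                  # per-gene mutation
        child[mut] = rng.integers(n_procs, size=mut.sum())
        children.append(child)
    pop = survivors + children
best = min(pop, key=makespan)
```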
ABSTRACT
The amplitude-encoding case of the double random phase encoding technique is examined by defining a cost function as a metric to compare an attempted decryption against the corresponding original input image. For the case when a cipher-text pair has been obtained and the correct decryption key is unknown, an iterative attack technique can be employed to ascertain the key. During such an attack the noise in the output field for an attempted decryption can be used as a measure of a possible decryption key's correctness. For relatively small systems, i.e., systems involving fewer than 5x5 pixels, the output decryption of every possible key can be examined to evaluate the distribution of the keys in key space in relation to their relative performance when carrying out decryption. However, in order to do this for large systems, checking every single key is currently impractical. One metric used to quantify the correctness of a decryption key is the normalized root mean squared (NRMS) error. The NRMS is a measure of the cumulative intensity difference between the input and decrypted images. We identify a core term in the NRMS, which we refer to as the difference parameter, d. Expressions for the expected value (or mean) and variance of d are derived in terms of the mean and variance of the output field noise, which is shown to be circular Gaussian. These expressions assume a large sample set (number of pixels and keys). We show that as we increase the number of samples used, the decryption error obeys the statistically predicted characteristic values. Finally, we corroborate previously reported simulations in the literature by using the statistically derived expressions.
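For reference, the intensity-based NRMS error referred to above is commonly written in a form such as

\[
\mathrm{NRMS} \;=\; \left( \frac{\sum_{x,y} \bigl( |\psi_d(x,y)|^2 - |\psi_0(x,y)|^2 \bigr)^2}{\sum_{x,y} \bigl( |\psi_0(x,y)|^2 \bigr)^2} \right)^{1/2},
\]

where \(\psi_0\) is the original input image field and \(\psi_d\) the attempted decryption. The exact normalization used in the paper may differ slightly; the per-pixel intensity difference appearing in the numerator is consistent with the description of the difference parameter d as a core term of the NRMS.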
ABSTRACT
We analyze optical encryption systems using the techniques of conventional cryptography. All conventional block encryption algorithms are vulnerable to attack, and often they employ secure modes of operation as one way to increase security. We introduce the concept of conventional secure modes to optical encryption and analyze the results in the context of known conventional and optical attacks. We consider only the optical system "double random phase encoding," which forms the basis for a large number of optical encryption, watermarking, and multiplexing systems. We consider all attacks proposed to date in one particular scenario. We analyze only the mathematical algorithms themselves and do not consider the additional security that arises from employing these algorithms in physical optical systems.
ABSTRACT
The signal extraction method based on intensity measurements in two close fractional Fourier domains is examined by using the phase space formalism. The fractional order separation has a lower bound and an upper bound that depend on the signal at hand and the noise in the optical system used for measurement. On the basis of a theoretical analysis, it is shown that for a given optical system a judicious choice of fractional order separation requires some a priori knowledge of the signal bandwidth. We also present some experimental results in support of the analysis.
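For reference, the fractional Fourier transform of order a (rotation angle α = aπ/2) that defines the measurement domains can be written, in one common normalization and for α not a multiple of π, as

\[
\mathcal{F}^{a}[f](u) \;=\; \sqrt{1 - i\cot\alpha}\, \int_{-\infty}^{\infty} f(x)\, \exp\!\bigl[\, i\pi \bigl( x^2\cot\alpha - 2xu\csc\alpha + u^2\cot\alpha \bigr) \bigr]\, dx,
\qquad \alpha = \frac{a\pi}{2},
\]

so two close orders a and a + Δa correspond to intensity measurements separated by a small rotation Δα = Δa·π/2 in phase space; Δa is the fractional order separation whose lower and upper bounds are discussed above.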
ABSTRACT
When a digital hologram is reconstructed, only points located at the reconstruction distance are in focus. We have developed a novel technique for creating an in-focus image of the macroscopic objects encoded in a digital hologram. This extended focused image is created by combining numerical reconstructions with depth information extracted by using our depth-from-focus algorithm. To our knowledge, this is the first technique that creates extended focused images of digital holograms encoding macroscopic objects. We present results for digital holograms containing low- and high-contrast macroscopic objects.
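A simplified sketch of the general extended-focus procedure (reconstruct at several candidate depths, score local focus, and assemble each region from its best-focused reconstruction) is given below; the local-variance focus measure, block size, and depth sampling are illustrative assumptions, and reconstruct(hologram, z) is a placeholder for a numerical propagation routine such as the Fresnel transform sketched earlier.

```python
# Simplified sketch of extended focused imaging from a digital hologram:
# reconstruct the hologram at a set of candidate depths, score each block of
# each reconstruction with a focus measure (here, local variance), and build
# the output from the best-focused block at each position. The focus measure,
# block size and depth range are illustrative; `reconstruct(hologram, z)` is a
# placeholder for a numerical propagation routine (e.g., a Fresnel transform).
import numpy as np

def extended_focus(hologram, reconstruct, depths, block=16):
    stack = [np.abs(reconstruct(hologram, z)) for z in depths]   # amplitude stack
    h, w = stack[0].shape
    efi = np.zeros((h, w))           # extended focused image
    depth_map = np.zeros((h, w))     # per-block in-focus depth
    for i in range(0, h, block):
        for j in range(0, w, block):
            # Variance of each reconstruction in this block as a focus score.
            scores = [img[i:i+block, j:j+block].var() for img in stack]
            best = int(np.argmax(scores))
            efi[i:i+block, j:j+block] = stack[best][i:i+block, j:j+block]
            depth_map[i:i+block, j:j+block] = depths[best]
    return efi, depth_map
```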