ABSTRACT
Non-destructive testing (NDT) by x-ray imaging is commonly used for finding manufacturing defects, inspecting cargo, and screening for security threats. These tasks can be regarded as examples of a detection problem in which a target is either present or absent. Task-specific information (TSI) bounds [J. Opt. Soc. Am. A 24, B25 (2007); Appl. Opt. 47, 4457 (2008)], an information-theoretic metric, are presented for a threat detection task. A system using polychromatic x-ray pencil-beam object illumination and energy-resolving detectors for both absorption and diffraction measurements is employed for this task. Water and diesel are the two liquids chosen as the non-threat and threat materials, respectively, for this study. Three threat-class configurations are examined: a homogeneous object with fixed thickness, a homogeneous object with stochastic thickness, and a dual-material object (i.e., representing a target and clutter) with stochastic thickness, where the threat material has a fixed thickness. For the threat class composed of a dual-material object, we find that a minimum threat thickness of 4.5 cm is needed to achieve a desired TSI ≥ 0.7 using a joint absorption and diffraction measurement.
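To make the TSI metric concrete, the following is a minimal sketch, assuming a scalar energy-integrated detector response with Gaussian noise and illustrative (not the paper's) signal levels for the two liquids: TSI is computed as the mutual information I(T; Y) between the binary threat variable and the measurement.

```python
# Hypothetical sketch of a task-specific information (TSI) calculation for a
# binary threat-detection task. Signal levels and noise strength below are
# illustrative assumptions, not values from the paper.
import numpy as np

def tsi_binary_gaussian(s0, s1, sigma, p1=0.5, num=4001):
    """Mutual information (bits) between T in {0,1} and Y = s_T + N(0, sigma^2)."""
    p0 = 1.0 - p1
    y = np.linspace(min(s0, s1) - 6 * sigma, max(s0, s1) + 6 * sigma, num)
    dy = y[1] - y[0]
    norm = sigma * np.sqrt(2.0 * np.pi)
    py0 = np.exp(-0.5 * ((y - s0) / sigma) ** 2) / norm   # p(y | T=0)
    py1 = np.exp(-0.5 * ((y - s1) / sigma) ** 2) / norm   # p(y | T=1)
    py = p0 * py0 + p1 * py1                              # marginal p(y)
    integrand = (p0 * py0 * np.log2(np.maximum(py0 / py, 1e-300)) +
                 p1 * py1 * np.log2(np.maximum(py1 / py, 1e-300)))
    return float(np.sum(integrand) * dy)

# Hypothetical mean detector responses for water (T=0) and diesel (T=1):
# TSI grows from 0 toward 1 bit as the responses separate relative to the noise.
print(tsi_binary_gaussian(s0=1.0, s1=1.6, sigma=0.3))
```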
ABSTRACT
Here, we present engineering trade studies of a free-space optical communication system operating over a 30 km maritime channel during the months of January and July. The system under study follows the BB84 protocol with the following assumptions: a weak coherent source is used, Eve performs the intercept-resend and photon-number-splitting attacks, Eve's location is known a priori, and Eve is allowed to know a small percentage of the final key. In this system, we examine the effect of varying several parameters in the following areas: the implementation of the BB84 protocol over the public channel, the technology in the receiver, and our assumptions about Eve. For each parameter, we examine how different values impact the secure key rate at a constant brightness. Additionally, we optimize the brightness of the source for each parameter to study the improvement in the secure key rate.
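For orientation, here is a minimal sketch of a textbook-style asymptotic BB84 secure-key-rate estimate (the Shor-Preskill bound); the trade study's full weak-coherent-pulse model with intercept-resend and photon-number-splitting attacks is considerably more detailed, and all numbers below are illustrative assumptions.

```python
# Simplified, textbook-style BB84 secure-key-rate sketch, NOT the paper's
# full attack model. All numeric parameters are illustrative.
import math

def h2(x):
    """Binary entropy in bits."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def bb84_secure_rate(pulse_rate, mu, transmittance, det_eff, qber):
    """Secure bits/s: basis-sift factor 1/2 times the detection rate of a
    Poissonian source, times the asymptotic key fraction 1 - 2*h2(QBER)."""
    detection_prob = 1.0 - math.exp(-mu * transmittance * det_eff)
    sifted_rate = 0.5 * pulse_rate * detection_prob
    return sifted_rate * max(0.0, 1.0 - 2.0 * h2(qber))

# Illustrative numbers: 1 GHz source, mean photon number 0.5, 3% end-to-end
# transmittance over the 30 km maritime link, 20% detectors, 3% QBER.
print(bb84_secure_rate(1e9, 0.5, 0.03, 0.2, 0.03))
```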
ABSTRACT
We investigate a multiple-spatial-mode quantum key distribution (QKD) scheme that employs multiple independent parallel beams through a marine free-space optical channel over the open ocean. This approach offers the potential to increase the secret key rate (SKR) linearly with the number of channels. To improve the SKR performance, we describe a back-propagation mode (BPM) method to mitigate atmospheric turbulence effects. Our simulation results indicate that the SKR can be improved significantly by employing the proposed BPM-based multi-channel QKD scheme.
ABSTRACT
We theoretically investigate and experimentally demonstrate an RF-assisted four-state continuous-variable quantum key distribution (CV-QKD) system. Classical coherent detection is implemented with a simple digital phase-noise-cancellation scheme. In the proposed system, there is no need for frequency and phase locking between the quantum signals and the local-oscillator laser. Moreover, in principle there is no residual phase noise, and a mean excess noise of 0.0115 (in shot-noise units) is obtained experimentally. In addition, a minimum transmittance of 0.45 is reached experimentally for secure transmission with commercial photodetectors, and a maximum secret key rate (SKR) of >12 Mbit/s can be obtained. The proposed RF-assisted CV-QKD system opens the door to incorporating microwave photonics into CV-QKD systems and improving the SKR significantly.
ABSTRACT
We experimentally demonstrate and characterize the performance of a 400-Gbit/s orbital angular momentum (OAM) multiplexed free-space optical link over 120 m on the roof of a building. Four OAM beams, each carrying a 100-Gbit/s quadrature-phase-shift-keyed channel, are multiplexed and transmitted. We investigate the influence of channel impairments on the received power, intermodal crosstalk among channels, and system power penalties. Without laser tracking and compensation systems, the measured received power and crosstalk among the OAM channels fluctuate by 4.5 dB and 5 dB, respectively, over 180 s. For a beam displacement of 2 mm, which corresponds to a pointing error of less than 16.7 µrad, the link bit error rates are below the forward error correction threshold of 3.8×10⁻³ for all channels. Both experimental and simulation results show that power penalties increase rapidly as the displacement increases.
ABSTRACT
Adaptive compressive measurements can offer significant system-performance advantages over non-adaptive or static compressive measurements, owing to online learning, for a variety of applications such as image formation and target identification. However, such adaptive measurements tend to be sub-optimal due to their greedy design. Here, we propose a non-greedy adaptive compressive measurement design framework and analyze its performance for a face recognition task. While a greedy adaptive design aims to optimize the system performance on the next immediate measurement, a non-greedy adaptive design goes beyond that by strategically maximizing the system performance over all future measurements. Our non-greedy adaptive design pursues a joint optimization of measurement design and photon allocation within a rigorous information-theoretic framework. For a face recognition task, simulation studies demonstrate that the proposed non-greedy adaptive design achieves a nearly two- to threefold lower probability of misclassification relative to the greedy adaptive and static designs. The simulation results are validated experimentally on a compressive optical imager testbed.
ABSTRACT
We present capacity bounds of an optical system that communicates using electromagnetic waves between a transmitter and a receiver. The bounds are investigated in conjunction with a rigorous theory of degrees of freedom (DOF) in the presence of noise. By taking into account the different signal-to-noise ratio (SNR) levels, an optimal number of DOF that provides the maximum capacity is defined. We find that for moderate noise levels, the DOF estimate of the number of active modes is approximately equal to the optimum number of channels obtained by a more rigorous water-filling procedure. On the other hand, in the very low- or high-SNR regimes, the maximum capacity can be obtained using fewer or more channels than the number of communicating modes given by the DOF theory. In general, the capacity is shown to increase with increasing size of the transmitting and receiving volumes, whereas it decreases with increasing separation between the volumes. Under the practical channel constraints of noise and finite available power, the capacity upper bound can be estimated by the well-known iterative water-filling solution, which determines the optimal power allocation across the subchannels corresponding to the set of singular values when channel state information is known at the transmitter.
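The water-filling procedure referenced above can be sketched as follows; the variable names, noise normalization, and example singular values are assumptions for illustration.

```python
# A hedged sketch of water-filling power allocation: given channel singular
# values and a total power budget, pour power into the strongest subchannels
# until the budget is spent.
import numpy as np

def water_filling(singular_values, total_power, noise_power=1.0):
    """Return per-subchannel powers maximizing sum log2(1 + p_i * g_i),
    where g_i = s_i^2 / noise_power, subject to sum(p_i) = total_power."""
    g = np.asarray(singular_values, dtype=float) ** 2 / noise_power
    order = np.argsort(g)[::-1]              # strongest subchannels first
    g_sorted = g[order]
    powers = np.zeros_like(g_sorted)
    for k in range(len(g_sorted), 0, -1):
        # Candidate water level using the k strongest subchannels.
        mu = (total_power + np.sum(1.0 / g_sorted[:k])) / k
        p = mu - 1.0 / g_sorted[:k]
        if np.all(p >= 0):                   # feasible: all k powers nonnegative
            powers[:k] = p
            break
    result = np.zeros_like(powers)
    result[order] = powers                   # undo the sort
    capacity = np.sum(np.log2(1.0 + result * g))
    return result, capacity

# At this (low) SNR only the two strongest modes receive power, echoing the
# point that fewer channels than the DOF count can maximize capacity.
p, c = water_filling([2.0, 1.0, 0.3, 0.05], total_power=4.0)
print(p, c)
```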
ABSTRACT
We investigate the sensing of a data-carrying Gaussian beacon on a separate wavelength as a means to provide the information necessary to compensate for the effects of atmospheric turbulence on orbital angular momentum (OAM) and polarization-multiplexed beams in a free-space optical link. The influence of the Gaussian beacon's wavelength on the compensation of the OAM beams at 1560 nm is experimentally studied. We find that the compensation performance degrades slowly as the beacon's wavelength offset from the OAM beams increases within the 1520-1590 nm band. Using this scheme, we experimentally demonstrate a 1 Tbit/s OAM- and polarization-multiplexed link through emulated dynamic turbulence with a data-carrying beacon at 1550 nm. The experimental results show that the turbulence effects on all 10 data channels, each carrying a 100 Gbit/s signal, are mitigated efficiently, and the power penalties after compensation are below 5.9 dB for all channels. Our results might be helpful for the future implementation of a high-capacity OAM-, polarization-, and wavelength-multiplexed free-space optical link affected by atmospheric turbulence.
ABSTRACT
We demonstrate crosstalk mitigation using 4×4 multiple-input-multiple-output (MIMO) equalization on an orbital angular momentum (OAM) multiplexed free-space data link with heterodyne detection. Four multiplexed OAM beams, each carrying a 20 Gbit/s quadrature phase-shift keying signal, propagate through weak turbulence. The turbulence induces inter-channel crosstalk among the beams and degrades signal performance. Experimental results demonstrate that, with the assistance of MIMO processing, the signal quality and bit-error-rate (BER) performance can be improved. The power penalty can be reduced by >4 dB at a BER of 3.8×10⁻³.
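As a rough illustration of the MIMO idea (not the paper's receiver DSP), the sketch below estimates a static 4×4 mixing matrix from pilot symbols and applies a zero-forcing inverse; a practical heterodyne receiver would use adaptive time-domain equalizers.

```python
# Minimal sketch: estimate the 4x4 OAM crosstalk matrix from known pilots,
# then invert it (zero-forcing) to separate the four channels. The channel
# and noise models here are illustrative stand-ins for real turbulence.
import numpy as np

rng = np.random.default_rng(0)
n_sym = 1000
# Four QPSK pilot streams (one per OAM channel).
qpsk = (rng.integers(0, 2, (4, n_sym)) * 2 - 1
        + 1j * (rng.integers(0, 2, (4, n_sym)) * 2 - 1)) / np.sqrt(2)
# Turbulence-induced crosstalk: near-diagonal mixing plus receiver noise.
H = np.eye(4) + 0.2 * (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
received = H @ qpsk + 0.05 * (rng.standard_normal((4, n_sym))
                              + 1j * rng.standard_normal((4, n_sym)))

# Least-squares channel estimate from pilots, then zero-forcing separation.
H_est = received @ qpsk.conj().T @ np.linalg.inv(qpsk @ qpsk.conj().T)
equalized = np.linalg.inv(H_est) @ received

print(np.mean(np.abs(equalized - qpsk) ** 2))  # residual error after equalization
```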
ABSTRACT
We propose an adaptive-optics compensation scheme to simultaneously compensate multiple orbital angular momentum (OAM) beams propagating through atmospheric turbulence. A Gaussian beam on one polarization is used to probe the turbulence-induced wavefront distortions and derive the correction pattern for compensating the OAM beams on the orthogonal polarization. Using this scheme, we experimentally demonstrate simultaneous compensation of multiple OAM beams, each carrying a 100 Gbit/s data channel, through emulated atmospheric turbulence. The experimental results indicate that the correction pattern obtained from the Gaussian probe beam can be used to simultaneously compensate multiple turbulence-distorted OAM beams of different orders. The turbulence-induced crosstalk onto neighboring modes is effectively reduced by 12.5 dB, and the system power penalty is improved by 11 dB after compensation.
ABSTRACT
While the theory of compressive sensing has been thoroughly investigated in the literature, comparatively little attention has been given to the issues that arise when compressive measurements are made in hardware. For instance, compressive measurements are always corrupted by detector noise. Further, the number of photons available is the same whether a conventional image is sensed or multiple coded measurements are made in the same interval of time. Thus the effects of noise and the constraint on the number of photons must be taken into account in the analysis, design, and implementation of a compressive imager. In this paper, we present a methodology for designing a set of measurement kernels (or masks) that satisfy the photon constraint and are optimum for making measurements that minimize the reconstruction error in the presence of noise. Our approach finds the masks one at a time, by determining the vector that yields the best possible measurement for reducing the reconstruction error. The subspace represented by the optimized mask is removed from the signal space, and the process is repeated to find the next best measurement. Simulation results show that the optimum masks always outperform reconstructions based on traditional feature measurements (such as principal components), and are also better than conventional images under high-noise conditions.
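A simplified sketch of the sequential design loop is given below; it keeps the "best next vector, then deflate" structure but, for brevity, omits the photon (nonnegativity/flux) constraint that the paper's masks must satisfy, so each step reduces to a principal-eigenvector computation.

```python
# Toy version of the "one mask at a time" design: each step picks the
# direction of largest remaining prior variance, then deflates that subspace.
# The paper's design additionally enforces a photon constraint on the masks.
import numpy as np

def greedy_masks(signal_cov, num_masks):
    """Return masks as rows; each is the principal eigenvector of the
    covariance remaining after removing previously measured subspaces."""
    cov = signal_cov.copy()
    masks = []
    for _ in range(num_masks):
        vals, vecs = np.linalg.eigh(cov)
        m = vecs[:, -1]                         # direction of largest variance
        masks.append(m)
        cov = cov - vals[-1] * np.outer(m, m)   # deflate the measured subspace
    return np.array(masks)

# Toy prior: smooth 1-D signals with exponentially decaying correlations.
n = 64
idx = np.arange(n)
cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 8.0)
M = greedy_masks(cov, num_masks=4)
print(M.shape)  # (4, 64): four sequential measurement kernels
```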
ABSTRACT
We experimentally investigate the performance of an orbital angular momentum (OAM) multiplexed free-space optical (FSO) communication link through emulated atmospheric turbulence. The turbulence effects on the crosstalk and system power penalty of the FSO link are characterized. The experimental results show that the power of a transmitted OAM mode tends to spread uniformly onto neighboring modes in medium-to-strong turbulence, resulting in severe crosstalk at the receiver. The power penalty is found to exceed 10 dB in a weak-to-medium turbulence condition due to turbulence-induced crosstalk and power fluctuation of the received signal.
ABSTRACT
The compressive sensing paradigm exploits the inherent sparsity/compressibility of signals to reduce the number of measurements required for reliable reconstruction/recovery. In many applications additional prior information beyond signal sparsity, such as structure in the sparsity, is available, and current efforts are mainly limited to exploiting that information exclusively in the signal reconstruction problem. In this work, we describe an information-theoretic framework that incorporates the additional prior information, as well as appropriate measurement constraints, in the design of compressive measurements. Using a Gaussian binomial mixture prior, we design and analyze the performance of optimized projections relative to random projections under two specific design constraints and different operating measurement signal-to-noise ratio (SNR) regimes. We find that the information-optimized designs yield significant, in some cases nearly an order of magnitude, improvements in reconstruction performance with respect to the random projections. These improvements are especially notable in the low measurement-SNR regime, where the energy-efficient design of optimized projections is most advantageous. In such cases, the optimized projection design departs significantly from random projections in terms of its incoherence with the representation basis. In fact, we find that maximizing the incoherence of the projections with the representation basis is not necessarily optimal in the presence of additional prior information and finite measurement noise/error. We also apply the information-optimized projections to the compressive image formation problem for natural scenes, and the improved visual quality of the reconstructed images relative to random projections and other compressive measurement designs affirms the overall effectiveness of the information-theoretic design framework.
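The sketch below illustrates how a measurement design can be scored against a prior; it uses a plain Gaussian prior and the closed-form linear-MMSE error rather than the paper's Gaussian binomial mixture prior and information-theoretic objective, so it is only a stand-in for the actual design procedure.

```python
# Scoring projections against a Gaussian prior: for x ~ N(0, C) and
# y = A x + n with n ~ N(0, s^2 I), the linear-MMSE reconstruction error has
# the closed form below. Prior-matched rows typically beat random rows.
import numpy as np

def mmse_error(A, C, noise_var):
    """tr(C - C A^T (A C A^T + s^2 I)^(-1) A C): expected squared error."""
    S = A @ C @ A.T + noise_var * np.eye(A.shape[0])
    return np.trace(C - C @ A.T @ np.linalg.solve(S, A @ C))

rng = np.random.default_rng(1)
n, m = 64, 8
U = np.linalg.qr(rng.standard_normal((n, n)))[0]
C = U @ np.diag(1.0 / np.arange(1, n + 1) ** 2) @ U.T   # compressible prior

A_rand = rng.standard_normal((m, n)) / np.sqrt(n)        # random projections
A_opt = U[:, :m].T                                       # prior-matched rows
print(mmse_error(A_rand, C, 0.01), mmse_error(A_opt, C, 0.01))
```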
ABSTRACT
Compressive imaging systems typically exploit the spatial correlation of the scene to facilitate a lower dimensional measurement relative to a conventional imaging system. In natural time-varying scenes there is a high degree of temporal correlation that may also be exploited to further reduce the number of measurements. In this work we analyze space-time compressive imaging using Karhunen-Loève (KL) projections for the read-noise-limited measurement case. Based on a comprehensive simulation study, we show that a KL-based space-time compressive imager offers higher compression relative to space-only compressive imaging. For a relative noise strength of 10% and reconstruction error of 10%, we find that space-time compressive imaging with 8×8×16 spatiotemporal blocks yields about 292× compression compared to a conventional imager, while space-only compressive imaging provides only 32× compression. Additionally, under high read-noise conditions, a space-time compressive imaging system yields lower reconstruction error than a conventional imaging system due to the multiplexing advantage. We also discuss three electro-optic space-time compressive imaging architecture classes, including charge-domain processing by a smart focal plane array (FPA). Space-time compressive imaging using a smart FPA provides an alternative method to capture the nonredundant portions of time-varying scenes.
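As a back-of-the-envelope check of the numbers above, an 8×8×16 block contains 1024 samples, so 292× compression corresponds to roughly 3.5 KL measurements per block. The sketch below derives KL projections from synthetic correlated training blocks; the data is a stand-in, not the study's scenes.

```python
# KL (principal-component) projections for space-time blocks, on synthetic
# stand-in data with strong spatiotemporal correlation.
import numpy as np

block = 8 * 8 * 16
print(block / 292)                # ~3.5 measurements per space-time block

rng = np.random.default_rng(2)
t = np.linspace(0, 1, block)
# Synthetic "video" blocks: slowly varying sinusoids with random phases.
training = np.array([np.sin(2 * np.pi * (f * t + rng.uniform()))
                     for f in rng.uniform(0.5, 3, 500)])
cov = training.T @ training / len(training)
vals, vecs = np.linalg.eigh(cov)
kl_basis = vecs[:, ::-1][:, :4]   # top-4 KL projections (measurement kernels)
print(kl_basis.shape)             # (1024, 4)
```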
ABSTRACT
We demonstrate a 5-GHz-broadband tunable slow-light device based on stimulated Brillouin scattering in a standard highly nonlinear optical fiber pumped by a noise-current-modulated laser beam. The noise-modulation waveform uses an optimized pseudo-random distribution of the laser drive voltage to obtain an optimal flat-topped gain profile, which minimizes pulse distortion and maximizes pulse delay for a given pump power. In comparison with a previous slow-modulation method, eye-diagram and signal-to-noise ratio (SNR) analyses show that this broadband slow-light technique significantly increases the fidelity of a delayed data sequence while maintaining the delay performance. A fractional delay of 0.81 with an SNR of 5.2 is achieved at a pump power of 350 mW using a 2-km-long highly nonlinear fiber with the fast noise-modulation method, demonstrating a 50% increase in eye-opening and a 36% increase in SNR relative to the previous method.
ABSTRACT
The inherent redundancy in natural scenes forms the basis of compressive imaging, where the number of measurements is less than the dimensionality of the scene. Compressed sensing theory has shown that a purely random measurement basis can yield good reconstructions of sparse objects with relatively few measurements. However, additional prior knowledge about object statistics that is typically available is not exploited in the design of the random basis. In this work, we describe a hybrid measurement-basis design that exploits the power spectral density statistics of natural scenes to minimize the reconstruction error by employing an optimal combination of a nonrandom basis and a purely random basis. Using simulation studies, we quantify the reconstruction-error improvement achievable with the hybrid basis for a diverse set of natural images. We find that the hybrid basis can reduce the reconstruction error by up to 77%, or equivalently requires fewer measurements to achieve a desired reconstruction error, compared to the purely random basis. It is also robust to varying levels of object sparsity and yields as much as 40% lower reconstruction error than the random basis in the presence of measurement noise.
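A hedged sketch of the hybrid-basis idea follows: a few nonrandom rows matched to a 1/f²-type power spectral density model of natural scenes (low spatial frequencies) are combined with purely random rows. The split and the PSD model are assumptions here; the paper optimizes this combination.

```python
# Hybrid measurement basis: deterministic low-frequency rows (where 1/f^2
# scenes hold most of their power) stacked with random rows. The row split
# and PSD model are illustrative assumptions.
import numpy as np

def hybrid_basis(n, m, num_psd_rows):
    """m x n measurement matrix: low-frequency cosine/sine rows + random rows."""
    rng = np.random.default_rng(3)
    x = np.arange(n)
    rows = []
    for k in range(num_psd_rows):
        f = (k + 1) // 2                  # frequencies 0, 1, 1, 2, 2, ...
        rows.append(np.cos(2 * np.pi * f * x / n) if k % 2 == 0
                    else np.sin(2 * np.pi * f * x / n))
    rand = rng.standard_normal((m - num_psd_rows, n)) / np.sqrt(n)
    A = np.vstack([np.array(rows), rand])
    return A / np.linalg.norm(A, axis=1, keepdims=True)

A = hybrid_basis(n=256, m=32, num_psd_rows=12)
print(A.shape)  # (32, 256)
```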
ABSTRACT
We use an information-theoretic method developed by Neifeld and Lee [J. Opt. Soc. Am. A 25, C31 (2008)] to analyze the performance of a slow-light system. Slow light is realized in this system via stimulated Brillouin scattering in a 2-km-long, room-temperature, highly nonlinear fiber pumped by a laser whose spectrum is tailored and broadened to 5 GHz. We compute the information throughput (IT), which quantifies the fraction of information transferred from the source to the receiver, and the information delay (ID), which quantifies the delay of a data stream at which the information transfer is largest, for a range of experimental parameters. We also measure the eye-opening (EO) and signal-to-noise ratio (SNR) of the transmitted data stream and find that they scale in a similar fashion to the information-theoretic metrics. Our experimental findings are compared to a model of the slow-light system that accounts for all pertinent noise sources as well as data-pulse distortion due to the filtering effect of the SBS process, and the agreement between our observations and the model's predictions is very good. Furthermore, we compare measurements of the IT for an optimal flat-topped gain profile and for a Gaussian-shaped gain profile. For a given pump-beam power, we find that the optimal profile gives a 36% larger ID and a somewhat higher IT than the Gaussian profile. Specifically, the optimal (Gaussian) profile produces a fractional slow-light ID of 0.94 (0.69) and an IT of 0.86 (0.86) at a pump-beam power of 450 mW and a data rate of 2.5 Gbps. Thus, the optimal profile better utilizes the available pump-beam power, which is often a valuable resource in a system design.
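The IT/ID metrics can be illustrated with a toy channel, as sketched below: mutual information between a transmitted binary stream and the received stream is estimated as a function of the assumed delay, with ID the maximizing delay and IT the value there. The shift-plus-bit-flip channel is a stand-in for the actual SBS system model.

```python
# Toy information-throughput / information-delay estimate over a stand-in
# channel (pure delay plus independent bit flips), not the SBS model itself.
import numpy as np

def mutual_info_binary(x, y):
    """Plug-in estimate of I(X;Y) in bits for binary sequences."""
    info = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                info += pxy * np.log2(pxy / (px * py))
    return info

rng = np.random.default_rng(4)
bits = rng.integers(0, 2, 20000)
true_delay = 7
received = np.roll(bits, true_delay) ^ (rng.random(bits.size) < 0.05)  # 5% flips

delays = np.arange(0, 16)
it = [mutual_info_binary(bits, np.roll(received, -d)) for d in delays]
print(delays[int(np.argmax(it))], max(it))  # ID ~ 7; IT ~ 1 - h2(0.05)
```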
ABSTRACT
Traditional approaches to wide field-of-view (FoV) imager design usually lead to overly complex optics with high optical mass and/or pan-tilt mechanisms that incur significant mechanical/weight penalties, limiting their applications, especially on mobile platforms such as unmanned aerial vehicles (UAVs). We describe a compact wide-FoV imager design based on superposition imaging that employs thin-film shutters and multiple beamsplitters to reduce system weight and eliminate mechanical pointing. The performance of the superposition wide-FoV imager is quantified in a simulation study and demonstrated experimentally. Here, a threefold increase in FoV is realized relative to the narrow-FoV imaging optics employed in the design. We also analyze the performance of the superposition wide-FoV imager relative to a traditional wide-FoV imager and find that it can offer comparable performance.
ABSTRACT
One consequence of the special theory of relativity is that no signal can cause an effect outside the source light cone, the space-time surface on which light rays emanate from the source. Violation of this principle of relativistic causality leads to paradoxes, such as that of an effect preceding its cause. Recent experiments on optical pulse propagation in so-called 'fast-light' media, which are characterized by a wave group velocity v_g exceeding the vacuum speed of light c or taking on negative values, have led to renewed debate about the definition of the information velocity v_i. One view is that v_i = v_g (ref. 4), which would violate causality, while another is that v_i = c in all situations, which would preserve causality. Here we find that the time to detect information propagating through a fast-light medium is slightly longer than the time required to detect the same information travelling through a vacuum, even though v_g in the medium vastly exceeds c. Our observations are therefore consistent with relativistic causality and help to resolve the controversies surrounding superluminal pulse propagation.
ABSTRACT
Undersampling in the detector array degrades the performance of iris-recognition imaging systems. We find that an undersampling of 8×8 reduces the iris-recognition performance by nearly a factor of 4 (on the CASIA iris database), as measured by the false rejection ratio (FRR) metric. We employ optical point spread function (PSF) engineering via a Zernike phase mask, in conjunction with multiple subpixel-shifted image measurements (frames), to mitigate the effect of undersampling. A task-specific optimization framework is used to engineer the optical PSF and optimize the postprocessing parameters to minimize the FRR. The optimized Zernike phase enhanced lens (ZPEL) imager design with one frame yields an improvement of nearly 33% relative to a thin observation module by bounded optics (TOMBO) imager with one frame. With four frames, the optimized ZPEL imager achieves an FRR equal to that of the conventional imager without undersampling. Further, the ZPEL imager design using 16 frames yields an FRR that is actually 15% lower than that obtained with the conventional imager without undersampling.
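For readers unfamiliar with PSF engineering, the sketch below places a single Zernike mode (spherical aberration, Z_4^0) in the pupil and computes the resulting PSF with a Fourier transform; the mode choice and strength are illustrative, whereas the paper optimizes the Zernike coefficients jointly with the postprocessing for the recognition task.

```python
# Minimal PSF-engineering sketch: a Zernike phase mask in the pupil, PSF via
# FFT. Mode and strength are illustrative assumptions.
import numpy as np

n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r = np.hypot(x, y)
pupil = (r <= 1.0).astype(float)           # unit-radius circular aperture

# Zernike Z_4^0 (spherical aberration), unnormalized: 6r^4 - 6r^2 + 1.
strength = 3.0                             # waves of aberration (assumed)
phase = 2 * np.pi * strength * (6 * r**4 - 6 * r**2 + 1)
pupil_field = pupil * np.exp(1j * phase)

psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil_field))) ** 2
psf /= psf.sum()
print(psf.shape, psf.max())  # engineered PSF spreads energy vs. the unaberrated case
```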