ABSTRACT
Although considered promising candidates for lithium-ion secondary batteries, spinel LiMn2O4 cathodes suffer from significant capacity decay owing to the Jahn-Teller effect, Mn dissolution, and lattice oxygen loss during the charge/discharge process, preventing their wider use. In this work, we show, using an atomistic model, that F-doping at small concentrations can improve the battery voltage and reduce capacity decay. In terms of voltage, F-doping raises the voltage to about 4.4 V under deep delithiation. In terms of capacity, F-doping retards decay because it reduces lattice oxygen loss: the larger Gibbs free energy of oxygen release after F-doping indicates that lattice oxygen is more difficult to remove. In addition, although F-doping lowers the average valence of Mn, the presence of Mn4+ during delithiation has a positive effect by suppressing the Jahn-Teller effect. However, since Mn3+ ions in the spinel structure can induce Jahn-Teller distortion, the overall effect of F-doping on Jahn-Teller distortion is determined by the competition between Mn4+ and Mn3+. The atomistic mechanism by which F-doping affects the performance of LiMn2O4 offers new insight into developing spinel lithium manganese oxide cathode materials with superior performance.
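The voltage figure quoted above can be related to computed total energies via the standard average-intercalation-voltage relation used in atomistic studies of cathodes. This is a textbook DFT formula, not spelled out in the abstract; the F-doped composition written below and the metallic-Li reference are assumptions for illustration:

```latex
\bar{V} = -\frac{E\!\left(\mathrm{Li}_{x_2}\mathrm{Mn_2O_{4-y}F_y}\right)
               - E\!\left(\mathrm{Li}_{x_1}\mathrm{Mn_2O_{4-y}F_y}\right)
               - (x_2 - x_1)\,E\!\left(\mathrm{Li}\right)}
              {(x_2 - x_1)\,e}
```

Here \(E(\cdot)\) are total energies of the lithiated phases and of bulk metallic Li, \(x_1 < x_2\) are the Li contents bounding the (de)lithiation window, and \(e\) is the elementary charge, so \(\bar{V}\) comes out in volts.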
ABSTRACT
Nowadays, binocular stereo vision (BSV) is extensively used in real-time 3D reconstruction, which requires cameras to quickly perform self-calibration. At present, camera parameters are typically estimated through iterative optimization; the calibration accuracy is high, but the process is time-consuming. Hence, a BSV system with rotating, non-zooming cameras is established in this study, in which the cameras can rotate horizontally and vertically. The cameras' intrinsic parameters and initial positions are estimated in advance using Zhang's calibration method, so only the yaw angle in the horizontal direction and the pitch angle in the vertical direction of each camera need to be determined during rotation. We therefore present a novel self-calibration method that uses a single feature point and transforms the imaging model of the pitch and yaw into a quadratic equation in the tangent of the pitch. Closed-form solutions for the pitch and yaw can then be obtained using known approximate values, which avoids the convergence problems of iterative methods. Computer simulations and physical experiments demonstrate the feasibility of the proposed method. Additionally, we compare the proposed method with Zhang's method. Our experimental data indicate that the average absolute errors of the Euler angles and translation vectors relative to the reference values are less than 0.21° and 6.6 mm, respectively, and that the average relative errors of the 3D reconstruction coordinates do not exceed 4.2%.
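The root-selection step can be sketched as follows. The abstract reduces the imaging model to a quadratic in tan(pitch) and disambiguates the two roots with a known approximate value; the coefficients `a`, `b`, `c` here are hypothetical placeholders for whatever the full imaging model produces:

```python
import math

def solve_pitch(a, b, c, pitch_approx):
    """Solve a*t**2 + b*t + c = 0 for t = tan(pitch), then pick the
    root closest to the known approximate pitch.  The coefficients
    a, b, c are assumed to come from the paper's imaging model; this
    sketch only shows the closed-form root selection."""
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        raise ValueError("no real solution for tan(pitch)")
    sq = math.sqrt(disc)
    roots = [(-b + sq) / (2.0 * a), (-b - sq) / (2.0 * a)]
    # The approximate pitch resolves the two-fold ambiguity of the quadratic.
    t_approx = math.tan(pitch_approx)
    t = min(roots, key=lambda r: abs(r - t_approx))
    return math.atan(t)
```

With the pitch fixed, the yaw would follow in closed form from the same imaging model; no iteration is needed, which is the point of the transformation.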
ABSTRACT
This study addresses the modeling and calibration of a precise optical positioning system for tracking the 3D positions of remote targets in a large space. The system is made up of four linear cameras equipped with cylindrical lenses. The four cameras are paired into two identical groups, each composed of two linear cameras packaged together with their imaging orientations normal to each other. This specially designed structure makes the system superior to existing three-linear-CCD-camera position-tracking systems in its ability to eliminate the distortion of cylindrical lenses, a long-standing problem in the precise calibration of linear cameras with cylindrical lenses. During modeling and calibration, each camera group is treated as an integrated 2D image sensor. A complete imaging model is established for each camera group, and the object-space error is used in calibration to obtain optimal camera parameters. Simulated and real experiments have verified that, when the two cameras in each group have good distortion consistency, the proposed calibration approach can effectively fit the linear-camera model and correct the distortion of the cylindrical lenses, leading to a significant improvement in positioning accuracy.
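The object-space error mentioned above can be sketched for the simplified case in which each orthogonal camera pair is already fused into an ideal 2D pinhole sensor. This is an assumption for illustration (the paper's complete imaging model for cylindrical-lens linear cameras is richer): each measured pixel is back-projected to a viewing ray, and the residual is the 3D distance from the known control point to that ray, which a calibration routine would minimize over the camera parameters:

```python
import numpy as np

def object_space_error(R, t, K, pts3d, pts2d):
    """Object-space residuals for one camera group, modeled as an
    integrated 2D pinhole sensor (a hypothetical simplification).
    R, t: world-to-camera rotation and translation; K: 3x3 intrinsics.
    Returns the 3D point-to-ray distance for each control point."""
    errs = []
    o = -R.T @ t                      # camera centre in world coordinates
    for X, uv in zip(pts3d, pts2d):
        # Back-project the measured pixel to a world-frame ray direction.
        d = R.T @ np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
        d /= np.linalg.norm(d)
        v = X - o
        # Distance from the control point to the viewing ray.
        errs.append(np.linalg.norm(v - (v @ d) * d))
    return np.array(errs)
```

Minimizing the sum of squared residuals in object space (rather than image space) weights errors in metric units, which matches the positioning-accuracy goal stated in the abstract.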
ABSTRACT
In this paper, a novel 3D gaze estimation method for a wearable gaze tracking device is proposed. The method is based on the pupillary accommodation reflex of human vision. Firstly, a 3D gaze measurement model is built. By combining the line-of-sight convergence point and the size of the pupil, this model can measure the 3D Point-of-Regard in free space. Secondly, a gaze tracking device is described. Using four cameras and semi-transparent mirrors, the device can accurately extract the spatial coordinates of the pupil and eye corner from images. Thirdly, a simple calibration process for the measuring system is proposed, which can be sketched as follows: (1) each eye is imaged by a pair of binocular stereo cameras, and the arrangement of the semi-transparent mirrors provides a wider field of view; (2) the spatial coordinates of the pupil center and the inner eye corner are extracted from the stereo camera images, and the pupil size is calculated from the features used by the gaze estimation method; (3) the pupil size and the line-of-sight convergence point are computed while the user watches a calibration target at different distances, and the parameters of the gaze estimation model are determined from them. Fourthly, an algorithm for searching for the line-of-sight convergence point is proposed, and the 3D Point-of-Regard is estimated using the obtained line-of-sight measurement model. Three groups of experiments were conducted to verify the effectiveness of the proposed method. This approach enables people to obtain the spatial coordinates of the Point-of-Regard in free space, which has great potential for wearable-device applications.
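One standard way to realize the convergence-point step is to treat the two gaze directions as 3D rays and take the midpoint of their common perpendicular, since measured rays rarely intersect exactly. This is a conventional geometric construction offered as a plausible sketch, not necessarily the paper's exact search algorithm:

```python
import numpy as np

def convergence_point(o1, d1, o2, d2):
    """Approximate the line-of-sight convergence point of two
    (generally skew) gaze rays as the midpoint of their common
    perpendicular.  o1, o2: eye positions; d1, d2: gaze directions."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("gaze rays are (nearly) parallel")
    # Closest-point parameters on each ray (standard skew-line formula).
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = o1 + s * d1
    p2 = o2 + t * d2
    return 0.5 * (p1 + p2)
```

The residual distance between `p1` and `p2` also gives a natural confidence measure for the estimated Point-of-Regard.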