ABSTRACT
Automatic vehicle detection and counting are considered vital for improving traffic control and management. This work presents an effective algorithm for vehicle detection and counting in complex traffic scenes by combining a convolutional neural network (CNN) with optical-flow feature-tracking methods. In this algorithm, the detection and tracking procedures are linked so that robust feature points are obtained and updated at a fixed frame interval. The proposed algorithm detects moving vehicles with a CNN-based background subtraction method. Each vehicle's robust features are then refined and clustered through motion feature-point analysis, combining a KLT tracker with K-means clustering. Finally, an efficient strategy uses the detected and tracked point information to associate each vehicle label with its corresponding trajectory so that each vehicle is counted exactly once. The proposed method is evaluated on videos representing challenging environments, and the experimental results show average detection and counting precisions of 96.3% and 96.8%, respectively, outperforming other existing approaches.
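The clustering stage described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the KLT tracker has already produced 2-D feature-point positions, and it uses a plain NumPy K-means with a simple deterministic initialization in place of a full library routine.

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Plain K-means: group 2-D feature points (e.g., KLT track positions)
    into k clusters, one cluster per hypothesized vehicle."""
    # Deterministic initialization: k points spread across the array.
    idx = np.linspace(0, len(points) - 1, k).astype(int)
    centers = points[idx].astype(float).copy()
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned points.
        new = np.array([points[labels == i].mean(axis=0) for i in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Two synthetic "vehicles": tight point clouds around (10, 10) and (50, 40).
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal((10, 10), 1.0, (20, 2)),
                 rng.normal((50, 40), 1.0, (20, 2))])
labels, centers = kmeans(pts, k=2)
```

In the full pipeline, the number of clusters would come from the CNN detections rather than being fixed in advance, and cluster membership would be combined with the tracked trajectories to keep vehicle labels consistent across frames.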
ABSTRACT
Reconstruction-based change detection methods are robust to camera motion. These methods learn to reconstruct input images from background images, and foreground regions are detected from the magnitude of the difference between an input image and its reconstruction. Because only background images are used for learning, foreground regions yield larger differences than background regions. Traditional reconstruction-based methods have two problems: over-reconstruction of foreground regions, and a change-detection decision that depends only on the magnitude of the differences. When foreground regions in patch images are reconstructed completely, their difference magnitudes become hard to distinguish from those of the background. We propose a framework for reconstruction-based change detection with a free-moving camera using patch images. To avoid over-reconstruction of foreground regions, our method reconstructs a masked central region of a patch image from the region surrounding it. Differences in foreground regions are enhanced because the masking procedure removes foreground content from the patch center. Change detection is learned automatically from a patch image and its reconstruction, and the decision procedure uses the patch images directly rather than only the differences between them. Our method achieves higher accuracy than traditional reconstruction-based methods that do not mask patch images.
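The masking idea can be illustrated with a toy sketch. This is not the paper's model: the learned network that predicts the masked center from its surroundings is replaced here by a trivial surround-mean predictor, and the patches are flat grayscale arrays, just to show why a masked central region keeps foreground differences large.

```python
import numpy as np

PATCH, CENTER = 8, 4  # 8x8 patch with a 4x4 masked central region

def center_mask(patch_size=PATCH, center_size=CENTER):
    """Boolean mask that is True on the central region of a patch."""
    m = np.zeros((patch_size, patch_size), dtype=bool)
    s = (patch_size - center_size) // 2
    m[s:s + center_size, s:s + center_size] = True
    return m

def reconstruct_center(patch, mask):
    """Stand-in for the learned model: predict the masked central region
    from the surrounding pixels (here, simply their mean intensity)."""
    out = patch.astype(float).copy()
    out[mask] = patch[~mask].mean()
    return out

def change_score(patch, mask):
    """Mean absolute difference over the central region. Foreground content
    confined to the masked center cannot leak into the prediction, so its
    difference stays large instead of being reconstructed away."""
    recon = reconstruct_center(patch, mask)
    return float(np.abs(patch[mask].astype(float) - recon[mask]).mean())

mask = center_mask()
background = np.full((PATCH, PATCH), 100.0)  # flat background patch
foreground = background.copy()
foreground[mask] = 200.0                     # bright object in the center

bg_score = change_score(background, mask)    # background reconstructs well
fg_score = change_score(foreground, mask)    # masked foreground does not
```

An autoencoder trained only on background patches would over-reconstruct the foreground object and shrink `fg_score`; masking the center removes that failure mode by construction, which is the key point of the proposed framework.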