Abstract (English) |
Recently, video surveillance and monitoring (VSAM) has gradually become a popular research topic due to its importance. In such systems, the main tasks can be summarized as follows: image sequences are processed, moving targets are detected and tracked, video data are stored and queried, and alarms are raised when illegal events occur. These tasks draw on techniques from image processing, pattern recognition, and artificial intelligence.
In this thesis, an immersive environment is developed by combining virtual reality and video monitoring methodologies. Image sequences are captured by a programmable pan-tilt-zoom (PTZ) camera and stitched into panoramic images using mosaicking techniques. In addition, moving targets are detected, tracked, and segmented from the video sequences. Moreover, the PTZ camera is driven by predicted target positions so that it can continuously capture images of moving targets even when they leave the current field of view (FOV). Two experimental environments, an indoor scene and an outdoor scene, are constructed to demonstrate the validity and effectiveness of the proposed approach. |
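The detection and prediction steps summarized above can be illustrated with a minimal sketch: frame differencing to locate a moving target, and a constant-velocity extrapolation of its center that could drive a PTZ camera when the target is about to leave the FOV. This is a generic illustration in Python/NumPy under simplifying assumptions (static background, single target), not the thesis's actual algorithm; all function names are hypothetical.

```python
import numpy as np

def detect_moving_regions(prev_frame, curr_frame, threshold=25):
    """Binary mask of pixels whose intensity changed between two grayscale frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def bounding_box(mask):
    """Axis-aligned bounding box (top, left, bottom, right) of the mask, or None."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())

def predict_next_center(prev_box, curr_box):
    """Constant-velocity extrapolation of the target center, one frame ahead."""
    def center(box):
        t, l, b, r = box
        return ((t + b) / 2, (l + r) / 2)
    (py, px), (cy, cx) = center(prev_box), center(curr_box)
    return (2 * cy - py, 2 * cx - px)

# Synthetic example: a 5x5 bright "target" moves across a static background.
background = np.full((64, 64), 10, dtype=np.uint8)
frame_a = background.copy()
frame_a[10:15, 10:15] = 200
frame_b = background.copy()
frame_b[12:17, 20:25] = 200

box_a = bounding_box(detect_moving_regions(background, frame_a))  # (10, 10, 14, 14)
box_b = bounding_box(detect_moving_regions(background, frame_b))  # (12, 20, 16, 24)
print(predict_next_center(box_a, box_b))  # (16.0, 32.0)
```

If the predicted center falls outside the current FOV, the camera controller would issue a pan/tilt command toward it, which is the role the predicted data play in the system described above.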
References |
[1] B. Hu, C. Brown, and A. Choi, “Acquiring an environment map through image mosaicking,” Tech. Rep., Computer Science Department, University of Rochester, 2001.
[2] S. Coorg and S. Teller, “Spherical mosaics with quaternions and dense correlation,” International Journal of Computer Vision, vol. 37, pp. 259–273, 2000.
[3] E. Noirfalise, J. Lapreste, F. Jurie, and M. Dhome, “Real-time registration for image mosaicing,” in Proc. of British Machine Vision Conference, 2002.
[4] F. Jurie and M. Dhome, “Real time template matching,” in Proc. IEEE International Conference on Computer Vision, 2001.
[5] G. L. Foresti and C. Micheloni, “Real-time video-surveillance by an active camera,” in Proc. of Workshop sulla Percezione e Visione nelle Macchine, Universita di Siena, 2002.
[6] R. Gupta, M. D. Theys, and H. J. Siegel, “Background compensation and active-camera motion tracking algorithm,” in Proc. of International Conference on Parallel Processing, pp. 11–15, 1997.
[7] N. Paragios and R. Deriche, “Geodesic active contours and level sets for the detection and tracking of moving objects,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 22, pp. 266–280, 2000.
[8] L. Vincent and P. Soille, “Watersheds in digital spaces: An efficient algorithm based on immersion simulations,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 13, pp. 583–598, 1991.
[9] S. Beucher, “The watershed transformation applied to image segmentation,” in Proc. of Pfefferkorn Conference on Signal and Image Processing in Microscopy and Microanalysis, pp. 299–314, 1991.
[10] P. D. Smet, R. Luis, and V. P. M. Pires, “Implementation and analysis of an optimized rainfalling watershed algorithm,” in Proc. of Science and Technology Conference: Image and Video Communications and Processing, 2000.
[11] S. E. Hernandez and K. E. Barner, “Joint region merging criteria for watershed-based image segmentation,” in IEEE International Conference on Image Processing, pp. 108–111, 2000.
[12] T. Geraud, P. Y. Strub, and J. Darbon, “Color image segmentation based on automatic morphological clustering,” in Proc. of IEEE International Conference on Image Processing, pp. 70–73, 2001.
[13] R. Hartley, “In defense of the eight-point algorithm,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, pp. 580–593, 1997.
[14] E. W. Weisstein, “Mercator projection,” From MathWorld–A Wolfram Web Resource, http://mathworld.wolfram.com/MercatorProjection.html, 2004.
[15] N. S. Wu, “High altitude flying objects detection and tracking,” Master’s thesis, National Central University, 2003 (in Chinese). |