References
[1] SIW Editorial Staff, Integrator Unveils Security System for Wynn Casino. [Online]. Available: http://www.securityinfowatch.com/article/article.jsp?siteSection=344&id=9566
[2] T. Datz, What Happens in Vegas Stays on Tape. [Online]. Available: http://www.csoonline.com/read/090105/hiddencamera_vegas_3834.html
[3] “Automated analysis of nursing home observations,” IEEE Pervasive Comput., vol. 3, no. 2, pp. 15–21, 2004.
[4] Philips Lifeline. [Online]. Available: http://www.lifelinesys.com/
[5] CNN, Smart Cameras Spot Shady Behavior. [Online]. Available: http://edition.cnn.com/2007/TECH/science/03/26/fs.behaviorcameras/index.html
[6] M. Valera and S. Velastin, “Intelligent distributed surveillance systems: A review,” IEE Proc. Vis. Image Signal Process., vol. 152, no. 2, pp. 192–204, Apr. 2005.
[7] R. J. Radke, S. Andra, O. Al-Kofahi, and B. Roysam, “Image change detection algorithms: a systematic survey,” IEEE Trans. Image Process., vol. 14, no. 3, pp. 294–307, Mar. 2005.
[8] A. Yilmaz, O. Javed, and M. Shah, “Object tracking: A survey,” ACM Comput. Surv., vol. 38, no. 4, 2006.
[9] G. L. Foresti, “Object recognition and tracking for remote video surveillance,” IEEE Trans. Circuits Syst. Video Technol., vol. 9, no. 7, Oct. 1999.
[10] D.-Y. Chen, K. Cannons, H.-R. Tyan, S.-W. Shih, and H.-Y. M. Liao, “Video-based human movement analysis and its application to surveillance systems,” IEEE Trans. Multimedia, vol. 10, no. 3, pp. 372–384, Apr. 2008.
[11] W. Hu, T. Tan, L. Wang, and S. Maybank, “A survey on visual surveillance of object motion and behaviors,” IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 34, no. 3, pp. 334–352, Aug. 2004.
[12] M. M. Trivedi, T. L. Gandhi, and K. S. Huang, “Distributed interactive video arrays for event capture and enhanced situational awareness,” IEEE Intell. Syst., Oct. 2005.
[13] M. Quaritsch, M. Kreuzthaler, B. Rinner, H. Bischof, and B. Strobl, “Autonomous multicamera tracking on embedded smart cameras,” EURASIP J. Embed. Syst., 2007.
[14] S. Fleck and W. Straßer, “Smart camera based monitoring system and its application to assisted living,” Proc. IEEE, vol. 96, no. 10, pp. 1698–1714, Oct. 2008.
[15] D. A. Migliore, M. Matteucci, and M. Naccari, “View-based detection and analysis of periodic motion,” in Proc. 4th ACM Int. Workshop Video Surveill. Sensor Netw., Oct. 2006, pp. 215–218.
[16] S.-Y. Chien, S.-Y. Ma, and L.-G. Chen, “Efficient moving object segmentation algorithm using background registration technique,” IEEE Trans. Circuits Syst. Video Technol., vol. 12, no. 7, pp. 577–586, Jul. 2002.
[17] W.-K. Chan and S.-Y. Chien, “Real-time memory-efficient video object segmentation in dynamic background with multi-background registration technique,” in Proc. IEEE Multimedia Signal Process. Workshop, Crete, Greece, Oct. 2007, pp. 219–222.
[18] P. Rosin and E. Ioannidis, “Evaluation of global image thresholding for change detection,” Pattern Recognit. Lett., vol. 24, no. 14, pp. 2345–2356, Oct. 2003.
[19] L. di Stefano, S. Mattoccia, and M. Mola, “A change-detection algorithm based on structure and color,” in Proc. IEEE Conf. Advanced Video and Signal-Based Surveillance, 2003, pp. 252–259.
[20] T. Aach and A. Kaup, “Bayesian algorithms for change detection in image sequences using Markov random fields,” Signal Process.: Image Commun., vol. 7, no. 2, pp. 56–61, 1995.
[21] L. Bruzzone and D. F. Prieto, “Automatic analysis of the difference image for unsupervised change detection,” IEEE Trans. Geosci. Remote Sens., vol. 38, no. 3, pp. 1171–1182, May 2000.
[22] L. Wixson, “Detecting salient motion by accumulating directionally-consistent flow,” IEEE Trans. Pattern Anal. Machine Intell., vol. 22, pp. 774–780, Aug. 2000.
[23] K. E. Matthews and N. M. Namazi, “A Bayes decision test for detecting uncovered-background and moving pixels in image sequences,” IEEE Trans. Image Process., vol. 7, no. 5, pp. 720–728, May 1998.
[24] S. C. Liu, C. W. Fu, and S. Chang, “Statistical change detection with moments under time-varying illumination,” IEEE Trans. Image Process., vol. 7, pp. 1258–1268, Sep. 1998.
[25] L. Li and M. Leung, “Integrating intensity and texture differences for robust change detection,” IEEE Trans. Image Process., vol. 11, pp. 105–112, Feb. 2002.
[26] M. Piccardi, “Background subtraction techniques: a review,” in Proc. IEEE Int. Conf. Systems, Man, Cybernetics, 2004, pp. 3099–3104.
[27] Y. Benezeth et al., “Review and evaluation of commonly-implemented background subtraction algorithms,” in Proc. Int. Conf. Pattern Recognition, 2008, pp. 1–4.
[28] J. C. Nascimento and J. S. Marques, “Performance evaluation of object detection algorithms for video surveillance,” IEEE Trans. Multimedia, vol. 8, no. 4, pp. 761–774, Aug. 2006.
[29] Y. Benezeth et al., “Comparative study of background subtraction algorithms,” J. Electronic Imaging, vol. 19, no. 3, 2010.
[30] N. Friedman and S. Russell, “Image segmentation in video sequences: A probabilistic approach,” in Proc. 13th Conf. Uncertainty in Artificial Intelligence, Aug. 1–3, 1997.
[31] C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1999, pp. 246–252.
[32] Z. Zivkovic, “Improved adaptive Gaussian mixture model for background subtraction,” in Proc. Int. Conf. Pattern Recognition, vol. 2, 2004, pp. 28–31.
[33] Q. Zang and R. Klette, “Robust background subtraction and maintenance,” in Proc. Int. Conf. Pattern Recognition, vol. 2, 2004, pp. 90–93.
[34] D.-S. Lee, “Effective Gaussian mixture learning for video background subtraction,” IEEE Trans. Pattern Anal. Machine Intell., vol. 27, no. 5, pp. 827–832, May 2005.
[35] O. Javed, K. Shafique, and M. Shah, “A hierarchical approach to robust background subtraction using color and gradient information,” in Proc. IEEE Workshop Motion Video Computing, Dec. 2002, pp. 22–27.
[36] P.-M. Jodoin, M. Mignotte, and J. Konrad, “Statistical Background subtraction using spatial cues,” IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 12, pp. 1758–1763, Dec. 2007.
[37] H.-L. Eng, J. Wang, Alvin H. K. S. Wah, and W.-Y. Yau, “Robust human detection within a highly dynamic aquatic environment in real time,” IEEE Trans. Image Process., vol. 15, no. 6, pp. 1583–1600, Jun. 2006.
[38] C. Benedek and T. Szirányi, “Bayesian foreground and shadow detection in uncertain frame rate surveillance videos,” IEEE Trans. Image Process., vol. 17, no. 4, pp. 608–621, Apr. 2008.
[39] T.-H. Tsai, W.-T. Sheu, and C.-Y. Lin, “Foreground object detection based on multi-model background maintenance,” in Proc. IEEE Int. Symp. Multimedia, Taiwan, 2007.
[40] K. Toyama, J. Krumm, B. Brumitt, and B. Meyers, “Wallflower: Principles and practice of background maintenance,” in Proc. IEEE Int. Conf. Computer Vision, Sep. 1999, pp. 255–261.
[41] M. Heikkila and M. Pietikainen, “A texture-based method for modeling the background and detecting moving objects,” IEEE Trans. Pattern Anal. Machine Intell., vol. 28, no. 4, pp. 657–662, Apr. 2006.
[42] S. Zhang, H. Yao, and S. Liu, “Dynamic background modeling and subtraction using spatio-temporal local binary patterns,” in Proc. IEEE Int. Conf. Image Process., Sep. 2008.
[43] B. Li, B. Yuan, and Z. Miao, “Moving object detection in dynamic scenes using nonparametric local kernel histogram estimation,” in Proc. IEEE Int. Conf. Multimedia and Expo, pp. 1461–1464, Jun. 2008.
[44] L. Li, W. Huang, I. Y.-H. Gu, and Q. Tian, “Statistical modeling of complex backgrounds for foreground object detection,” IEEE Trans. Image Process., vol. 13, no. 11, pp. 1459–1472, Nov. 2004.
[45] A. Elgammal, R. Duraiswami, D. Harwood, and L. S. Davis, “Background and foreground modeling using nonparametric kernel density estimation for visual surveillance,” Proc. IEEE, vol. 90, no. 7, pp. 1151–1162, Jul. 2002.
[46] A. Mittal and N. Paragios, “Motion-Based Background Subtraction Using Adaptive Kernel Density Estimation,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2004.
[47] Y. Sheikh and M. Shah, “Bayesian object detection in dynamic scenes,” IEEE Trans. Pattern Anal. Machine Intell., vol. 27, no. 11, pp. 1778–1792, Nov. 2005.
[48] S. Liao, G. Zhao, V. Kellokumpu, M. Pietikainen, and S. Z. Li, “Modeling pixel process with scale invariant local patterns for background subtraction in complex scenes,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2010, pp. 1301–1306.
[49] L. Maddalena and A. Petrosino, “A self-organizing approach to background subtraction for visual surveillance applications,” IEEE Trans. Image Process., vol. 17, no. 7, pp. 1168–1177, Jul. 2008.
[50] T. Morimoto et al., “An FPGA-based region-growing video segmentation system with boundary-scan-only LSI architecture,” in Proc. IEEE Asia Pacific Conf. Circuits Syst. (APCCAS), Dec. 2006, pp. 944–947.
[51] J. Kim and T. Chen, “A VLSI architecture for video-object segmentation,” IEEE Trans. Circuits Syst. Video Technol., vol. 13, no. 1, pp. 83–96, Jan. 2003.
[52] S.-Y. Chien et al., “Single chip video segmentation system with a programmable PE array,” in Proc. IEEE Asia-Pacific Conf., Aug. 2002, pp. 233–236.
[53] W.-K. Chan et al., “Efficient content analysis engine for visual surveillance network,” IEEE Trans. Circuits Syst. Video Technol., vol. 19, no. 5, pp. 693–703, May 2009.
[54] D.-Z. Peng, C.-Y. Lin, W.-T. Sheu, and T.-H. Tsai, “A low cost and low complexity foreground object segmentation architecture design with multi-model background maintenance algorithm,” in Proc. IEEE Int. Conf. Image Process., Cairo, Egypt, 2009.
[55] H. Jiang, H. Ardo, and V. Owall, “A hardware architecture for real-time video segmentation utilizing memory reduction techniques,” IEEE Trans. Circuits Syst. Video Technol., vol. 19, no. 2, pp. 226–236, Feb. 2009.
[56] M. Valera and S. Velastin, “Intelligent distributed surveillance systems: A review,” IEE Proc. Vis. Image Signal Process., vol. 152, no. 2, pp. 192–204, Apr. 2005.
[57] Y. Charfi, N. Wakamiya, and M. Murata, “Challenging issues in visual sensor networks,” IEEE Wireless Commun. Mag., vol. 16, no. 2, pp. 44–49, Apr. 2009.
[58] R. M. Neal and G. E. Hinton, “A view of the EM algorithm that justifies incremental, sparse, and other variants,” in Learning in Graphical Models, MIT Press, Cambridge, MA, 1999.
[59] M. A. Sato and S. Ishii, “On-line EM algorithm for the normalized Gaussian network,” Neural Computation, vol. 12, pp. 407–432, 2000.
[60] P. Salembier and M. Pardas, “Hierarchical morphological segmentation for image sequence coding,” IEEE Trans. Image Process., vol. 3, no. 5, pp. 639–651, Sep. 1994.
[61] TMS320C64x/C64x+ DSP CPU and Instruction Set Reference Guide, SPRU732GE, Feb. 2008. [Online]. Available: http://www.ti.com
[62] TMS320C64x/C64x+ DSP Image/Video Processing Library Programmer’s Guide, SPRUF30A, Oct. 2007. [Online]. Available: http://www.ti.com
[63] TMS320C64x DSP Two-Level Internal Memory Reference Guide, SPRU610B, Aug. 2004. [Online]. Available: http://www.ti.com
[64] TMS320C6000 DSP 32-Bit Timer Reference Guide, SPRU582B, Jan. 2005. [Online]. Available: http://www.ti.com
[65] C. Dumontier, F. Luthon, and J.-P. Charras, “Real-time DSP implementation for MRF-based video motion detection,” IEEE Trans. Image Process., vol. 8, no. 10, pp. 1341–1347, Oct. 1999.
[66] S. P. Ierodiaconou, N. Dahnoun, and L. Q. Xu, “Implementation and optimisation of a video object segmentation algorithm on an embedded DSP platform,” in Proc. IET Conf. Crime and Security, 2006.
[67] C.-Y. Lin, S.-Y. Li, and T.-H. Tsai, “A scalable parallel hardware architecture for connected component labeling,” in Proc. IEEE Int. Conf. Image Process., Hong Kong, 2010.
[68] J. Detrey and F. de Dinechin, “A parameterized floating-point exponential function for FPGAs,” in Proc. IEEE Int. Conf. Field-Programmable Technology (FPT’05), IEEE Computer Society Press, Dec. 2005.
[69] J. Detrey, F. de Dinechin, and X. Pujol, “Return of the hardware floating-point elementary function,” in Proc. 18th IEEE Symp. Computer Arithmetic (ARITH’07), Jun. 2007, pp. 161–168.
[70] OpenRISC 1200 IP Core Specification. [Online]. Available: http://opencores.org/svnget,or1k?file=/trunk/or1200/doc/openrisc1200spec.pdf
[71] S. Chaudhuri and D. Taur, “High-resolution slow-motion sequencing: How to generate a slow-motion sequence from a bit stream,” IEEE Signal Process. Mag., vol. 22, no. 2, pp. 16–24, Feb. 2005.
[72] E. Parzen, “On Estimation of a Probability Density and Mode,” Annals of Math. Statistics, 1962.
[73] H. Kruegle, CCTV Surveillance: Video Practices and Technology, 2nd ed., Elsevier Butterworth-Heinemann, 2007.
[74] G. Gualdi, A. Prati, and R. Cucchiara, “Video streaming for mobile video surveillance,” IEEE Trans. Multimedia, vol. 10, no. 6, pp. 1142–1154, Oct. 2008.
[75] Z. Kato, J. Zerubia, and M. Berthod, “Satellite image classification using a modified metropolis dynamics,” in Proc. Int. Conf. Acoustics, Speech and Signal Processing, Mar. 1992, pp. 573–576.
[76] T. Aach, A. Kaup, and R. Mester, “Change detection in image sequences using Gibbs random fields,” in Proc. Int. Workshop Intell. Signal Process. Commun. Syst., Sendai, Japan, Oct. 1993, pp. 56–61.
[77] Y. Chen and G. Leedham, “Decompose algorithm for thresholding degraded historical document images,” IEE Proc. Vis. Image Signal Process., vol. 152, pp. 702–714, 2005.
[78] Coding of Audiovisual Objects-Part 2: Visual, ISO/IEC 14496-2 (MPEG-4), 2001.
[79] M.-H. Hsiao, et al., “Object-based video streaming technique with application to intelligent transportation systems,” in Proc. IEEE Int. Conf. Networking, Sensing and Control, pp. 315–318, Mar. 2004.
[80] J. Meessen, C. Parisot, X. Desurmont, and J.-F. Delaigle, “Scene analysis for reducing Motion JPEG 2000 video surveillance delivery bandwidth and complexity,” in Proc. IEEE Int. Conf. Image Process., Genova, Italy, Sep. 2005.
[81] W. K.-H. Ho, W.-K. Cheuk, and D. P.-K. Lun, “Content-based scalable H.263 video coding for road traffic monitoring,” IEEE Trans. Multimedia, vol. 7, no. 4, pp. 615–623, Aug. 2005.
[82] A. Vetro, T. Haga, K. Sumi, and H. Sun, “Object-based coding for long-term archive of surveillance video,” in Proc. IEEE Int. Conf. Multimedia and Expo, vol. 2, pp. 417–420, Jul. 2003.
[83] F. Moschetti, G. Covitto, F. Ziliani, and A. Mecocci, “Automatic object extraction and dynamic bitrate allocation for second generation video coding,” in Proc. IEEE Int. Conf. Multimedia and Expo, vol. 1, pp. 493–496, Jul. 2002.
[84] Y. Yu and D. Doermann, “Model of object-based coding for surveillance video,” in IEEE Int. Conf. Acoust., Speech, Signal Processing, pp. 693–696, Mar. 2005.
[85] R. V. Babu and A. Makur, “Object-based surveillance video compression using foreground motion compensation,” in Proc. IEEE Int. Conf. Control, Automation, Robotics and Vision, pp. 1–6, Dec. 2006.
[86] A. Cavallaro, O. Steiger, and T. Ebrahimi, “Semantic video analysis for adaptive content delivery and automatic description,” IEEE Trans. Circuits Syst. Video Technol., vol. 15, pp. 1200–1209, Oct. 2005.
[87] T. Sikora and B. Makai, “Shape-adaptive DCT for generic coding of video,” IEEE Trans. Circuits Syst. Video Technol., vol. 5, pp. 59–62, Feb. 1995.
[88] L. P. Kondi, G. Melnikov, and A. K. Katsaggelos, “Joint optimal object shape estimation and encoding,” IEEE Trans. Circuits Syst. Video Technol., vol. 14, pp. 528–533, Apr. 2004.
[89] K. J. Kim, C. W. Lim, M. G. Kang, and K. T. Park, “Adaptive approximation bounds for vertex based contour encoding,” IEEE Trans. Image Process., vol. 8, pp. 1142–1147, Aug. 1999.
[90] J. Serra and L. Vincent, “An overview of morphological filtering,” Circuits Syst. Signal Process., vol. 11, no. 1, pp. 47–108, 1992.
[91] Draft ITU-T Recommendation and Final Draft International Standard of Joint Video Specification, ITU-T Rec. H.264 and ISO/IEC 14496-10 AVC, Joint Video Team, Mar. 2003.
[92] G. Bjontegaard, “Calculation of average PSNR differences between RD-curves,” ITU-T Q6/SG16, Doc. VCEG-M33, Apr. 2001.