References
[1]Agarwala, A., M. Dontcheva, M. Agrawala, S. Drucker, A. Colburn, B. Curless, D. Salesin, and M. Cohen, “Interactive digital photomontage, ” in Proc. ACM SIGGRAPH, Los Angeles, California, Aug. 2004, pp.294-302.
[2]Bertozzi, M. and A. Broggi, “GOLD: a parallel real-time stereo vision system for generic obstacle and lane detection,” IEEE Trans. on Image Processing, vol.7, no.1, pp.62-81, Jan. 1998.
[3]Bertozzi, M., A. Broggi, P. Medici, P. P. Porta, and A. Sjogren, “Stereo vision-based start-inhibit for heavy goods vehicles,” in Proc. of IEEE Intelligent Vehicles Symposium, Tokyo, Japan, Jun.13-15, 2006, pp.350-355.
[4]Brown, M. and D. G. Lowe, “Recognising panoramas,” in Proc. 9th IEEE International Conf. on Computer Vision, Nice, France, Oct.13-16, 2003, pp.1218-1225.
[5]Burt, P. J. and E. H. Adelson, “A multiresolution spline with application to image mosaics,” ACM Trans. on Graphics, vol.2, no.4, pp.217-236, Oct. 1983.
[6]Davis, J., “Mosaics of scenes with moving objects,” in Proc. IEEE Computer Vision and Pattern Recognition, Santa Barbara, CA, Jun.23-25, 1998, pp.354-360.
[7]Devernay, F. and O. Faugeras, “Straight lines have to be straight,” Machine Vision and Applications, vol.13, no.1, pp.14-24, 2001.
[8]Ehlgen, T. and T. Pajdla, “Monitoring surrounding areas of truck-trailer combinations,” in Proc. of 5th Int. Conf. on Computer Vision Systems, Bielefeld, Germany, Mar.21-24, 2007, CD-ROM.
[9]Ehlgen, T., M. Thorn, and M. Glaser, “Omnidirectional cameras as backing-up aid,” in Proc. of IEEE Int. Conf. on Computer Vision, Rio de Janeiro, Brazil, Oct.14-21, 2007, pp.1-5.
[10]Faugeras, O., T. Luong, and S. Maybank, “Camera self-calibration: theory and experiments,” in Proc. 2nd European Conf. on Computer Vision, Lecture Notes in Computer Science, vol.588, pp.321-334, 1992.
[11]Fischler, M. A. and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol.24, no.6, pp.381-395, Jun. 1981.
[12]Fraser, C. S., “Digital camera self-calibration,” ISPRS Journal of Photogrammetry and Remote Sensing, vol.52, pp.149-159, 1997.
[13]Gandhi, T. and M. M. Trivedi, “Dynamic panoramic surround map: motivation and omni video based approach,” in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, Washington DC, Jun.20-26, 2005, pp.61-69.
[14]Gandhi, T. and M. M. Trivedi, “Parametric ego-motion estimation for vehicle surround analysis using an omnidirectional camera,” Machine Vision and Applications, vol.16, no.2, pp.85-95, 2005.
[15]Harris, C. and M. Stephens, “A combined corner and edge detector,” in Proc. of The Fourth Alvey Vision Conference, Manchester, Aug.31-Sep.2, 1988, pp.147-151.
[16]Jung, H. G., D. S. Kim, P. J. Yoon, and J. Kim, “Parking slot markings recognition for automatic parking assist system,” in Proc. of IEEE Intelligent Vehicles Symposium, Tokyo, Japan, Jun.13-15, 2006, pp.106-113.
[17]Kang, S. B. and R. Weiss, “Can we calibrate a camera using an image of a flat, textureless Lambertian surface?,” in Proc. European Conference on Computer Vision, Dublin, Ireland, Jul. 2000, pp.640-653.
[18]Levin, A., A. Zomet, S. Peleg, and Y. Weiss, “Seamless image stitching in the gradient domain,” in Proc. 8th European Conf. on Computer Vision, Prague, Czech Republic, May.11-14, 2004, pp.377-389.
[19]Liu, Y. C., K. Y. Lin, and Y. S. Chen, “Bird’s-eye view vision system for vehicle surrounding monitoring,” in Proc. Conf. Robot Vision, Berlin, Germany, Feb. 20-22, 2008, pp.207-218.
[20]Lowe, D. G., “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol.60, no.2, pp.91-110, Nov. 2004.
[21]Marquardt, D., “An algorithm for least-squares estimation of nonlinear parameters,” SIAM Journal on Applied Mathematics, vol.11, pp.431-441, 1963.
[22]Mikolajczyk, K. and C. Schmid, “A performance evaluation of local descriptors,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol.27, no.10, pp.1615-1630, Oct. 2005.
[23]Moravec, H. P., “Towards automatic visual obstacle avoidance,” in Proc. 5th International Joint Conference on Artificial Intelligence, Tokyo, 1977, p.584.
[24]Reinhard, E., M. Ashikhmin, B. Gooch, and P. Shirley, “Color transfer between images,” IEEE Computer Graphics and Applications, vol.21, no.5, pp.34-41, 2001.
[25]Ruderman, D. L., T. W. Cronin, and C. C. Chiao, “Statistics of cone responses to natural images: implications for visual coding,” Journal of the Optical Society of America A, vol.15, no.8, pp.2036-2045, 1998.
[26]Szeliski, R. and H.-Y. Shum, “Creating full view panoramic image mosaics and environment maps,” in Proc. ACM SIGGRAPH, Los Angeles, CA, Aug. 1997, pp.251-258.
[27]Tsai, R. Y., “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf tv cameras and lenses,” IEEE Journal of Robotics and Automation, vol.3, no.4, pp.323-344, 1987.
[28]Uyttendaele, M., A. Eden, and R. Szeliski, “Eliminating ghosting and exposure artifacts in image mosaics,” in Proc. IEEE Computer Vision and Pattern Recognition, Kauai, Hawaii, Dec.9-14, 2001, pp.509-516.
[29]Weng, J., P. Cohen, and M. Herniou, “Camera calibration with distortion models and accuracy evaluation,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol.14, no.10, pp.965-980, 1992.
[30]Zhang, Z., “A flexible new technique for camera calibration,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol.22, no.11, pp.1330-1334, 2000.