Non-contact Gap and Flush Measurement Using Monocular Structured Multi-line Light Vision for Vehicle Assembly
Abstract
The accurate fitting of parts, inspected by measuring the width of the gap between two adjacent panels and the alignment of the two surfaces, known as flushness, is an important task in vehicle assembly. An optimal solution requires both high accuracy and fast measurement. Toward this end, we develop a vision-based non-contact gap and flush measurement system. The vision system consists of a high-resolution camera and a multi-line laser generator. The proposed gap and flush measurement sensor projects laser lines onto the panels, which are observed by the high-resolution camera. The measurement is initiated when the operator brings the device close to the surface, within its operating range. During the process, the line features are digitized using the proposed approach, the desired calculations are made, non-conforming images are discarded, and the remaining images are used to perform the gap and flush measurement. The measurement system can handle complex surfaces in noisy industrial environments and achieves higher specifications than current gap and flush measurement sensors. The usefulness of the proposed system has been demonstrated through real tests with patterns of accurately known size and a real inline vehicle assembly system in Korea.
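The pipeline summarized above can be illustrated with a minimal sketch. The code below is not the paper's implementation: the function names (`extract_stripe_peaks`, `gap_and_flush`), the center-of-mass stripe detector, the fixed intensity threshold, and the simplified gap/flush definitions (in-plane edge-to-edge distance and mean depth offset) are illustrative assumptions, and it omits camera calibration, laser-plane triangulation, and the rejection of non-conforming images.

```python
import numpy as np

def extract_stripe_peaks(image, threshold=30.0):
    """Estimate the sub-pixel laser-stripe row in each image column.

    Simple center-of-mass detector (an assumption, not the paper's method):
    for every column whose maximum intensity exceeds `threshold`, the stripe
    position is the intensity-weighted mean row of the above-threshold pixels.
    Returns arrays of column indices and sub-pixel row coordinates.
    """
    cols, rows = [], []
    row_idx = np.arange(image.shape[0], dtype=float)
    for c in range(image.shape[1]):
        col = image[:, c].astype(float)
        if col.max() < threshold:
            continue                                 # no visible laser signal in this column
        w = np.clip(col - threshold, 0.0, None)
        cols.append(c)
        rows.append(float(np.dot(w, row_idx) / w.sum()))
    return np.asarray(cols), np.asarray(rows)

def gap_and_flush(left_pts, right_pts):
    """Compute gap and flush from two triangulated 3-D laser-line segments
    (N x 3 arrays in millimetres), one per panel.

    Gap  : in-plane distance between the facing edges of the two panels.
    Flush: depth offset between the panel surfaces (simplified here as the
           difference of their mean Z coordinates).
    """
    edge_l = left_pts[np.argmax(left_pts[:, 0])]     # right-most point of the left panel
    edge_r = right_pts[np.argmin(right_pts[:, 0])]   # left-most point of the right panel
    gap = float(np.linalg.norm(edge_r[:2] - edge_l[:2]))
    flush = float(right_pts[:, 2].mean() - left_pts[:, 2].mean())
    return gap, flush

if __name__ == "__main__":
    # Synthetic check: two flat panels separated by a 3 mm gap,
    # with the right panel recessed by 0.5 mm in depth.
    x_l = np.linspace(-20.0, -1.5, 50)
    x_r = np.linspace(1.5, 20.0, 50)
    left = np.stack([x_l, np.zeros_like(x_l), np.full_like(x_l, 300.0)], axis=1)
    right = np.stack([x_r, np.zeros_like(x_r), np.full_like(x_r, 300.5)], axis=1)
    print(gap_and_flush(left, right))                # approximately (3.0, 0.5)
```

In practice the stripe peaks would first be mapped to metric 3-D points via the camera and laser-plane calibration before the gap and flush are evaluated.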