Online learning for DBC segmentation of new IGBT samples based on computed laminography imaging

Yan Li, Man Luo, Xuan Fei, Shuangquan Liu, Cunfeng Wei

Abstract

The insulated gate bipolar transistor (IGBT) is a power semiconductor module. Voids may arise during its soldering process when a contaminant or gas is absorbed into the solder joint. Because such voids strongly degrade the heat-exchange efficiency of the IGBT, void inspection is essential. Segmentation of the solder region is a crucial step in automated defect detection of IGBTs based on an x-ray computed laminography (CL) system. In recent years, deep learning has made remarkable progress in semantic segmentation and has been applied to segmenting the solder joint between the direct bonded copper (DBC) substrate and the baseplate, where it has proved accurate and efficient. However, deep learning architectures suffer a critical drop in performance due to catastrophic forgetting when new IGBT samples are encountered. Hence, this paper proposes online learning techniques to continuously improve the learned model on newly acquired IGBT samples without losing previously learned knowledge.
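The abstract does not detail the online-learning scheme, so the sketch below is illustrative only: it shows one common way to fine-tune a segmentation network on newly acquired samples while penalising drift from the previously learned model via a distillation term (in the spirit of learning without forgetting). The backbone, data, loss weight, and function names are assumed placeholders, not the authors' implementation; PyTorch is assumed.

# Hedged sketch: continual fine-tuning of a segmentation model on new IGBT
# samples, with a distillation penalty against the frozen previous model.
# All names and values here are illustrative assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder encoder-decoder; the paper's actual backbone would go here.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),            # 1-channel logit map: solder vs. background
)
old_model = copy.deepcopy(model).eval()         # frozen copy holding previously learned knowledge
for p in old_model.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
lambda_distill = 1.0                            # assumed weight of the forgetting penalty

def update_on_new_batch(images, masks):
    """One online-learning step on newly acquired CL slices and their solder masks."""
    logits = model(images)
    seg_loss = F.binary_cross_entropy_with_logits(logits, masks)
    with torch.no_grad():
        old_probs = torch.sigmoid(old_model(images))    # soft targets from the old model
    distill_loss = F.binary_cross_entropy_with_logits(logits, old_probs)
    loss = seg_loss + lambda_distill * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random stand-in data (real input would be CL slices of the DBC solder layer).
images = torch.randn(2, 1, 64, 64)
masks = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(update_on_new_batch(images, masks))

In this kind of scheme, the distillation term keeps predictions on old-sample appearance close to those of the previous model while the supervised term adapts the network to the new IGBT samples, which is one standard way to mitigate catastrophic forgetting.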
