Memory-based approaches for eliminating premature convergence in particle swarm optimization

Springer Science and Business Media LLC - Volume 51 - Pages 4575-4608 - 2021
K. Chaitanya1, D. V. L. N Somayajulu2, P. Radha Krishna2
1Infosys Limited, Hyderabad, India
2National Institute of Technology, Warangal, India

Abstract

Particle Swarm Optimization (PSO) is a computational method in which a group of particles moves through the search space to find an optimal solution. During this movement, each particle updates its position and velocity based on its own previous best position and the best position found by the swarm so far. Although PSO is regarded as a promising approach and has been applied in many domains, it suffers from premature convergence, in which all particles converge too early and yield suboptimal results. While several techniques exist to address premature convergence, achieving a higher convergence rate while still avoiding premature convergence remains a challenge. In this paper, we present two new memory-based variants of PSO that prevent premature convergence. The first technique (PSOMR) augments memory by leveraging concepts from the Ebbinghaus forgetting curve. The second technique (MS-PSOMR) partitions the swarm into several sub-swarms. Both techniques use memory to store promising historical values and reuse them later to avoid premature convergence. The proposed methods are compared with existing algorithms of the same category and evaluated on the CEC 2010 and CEC 2017 benchmark functions. The results show that both methods perform significantly better on the measured metrics and reduce premature convergence.
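To make the update rule described in the abstract concrete, the following is a minimal Python sketch of the standard global-best PSO loop, assuming conventional parameter choices (inertia weight w, cognitive coefficient c1, social coefficient c2) and the sphere function as a stand-in objective; none of these values come from the paper, and the memory mechanisms of PSOMR and MS-PSOMR are not reproduced here. For context, the Ebbinghaus forgetting curve that PSOMR draws on is commonly modeled as retention R(t) = e^(-t/S), where t is elapsed time and S is memory strength; how the paper maps this onto stored swarm memories is detailed in the full text.

import numpy as np

def sphere(x):
    # Sphere benchmark: f(x) = sum(x_i^2), minimum 0 at the origin (illustrative objective).
    return np.sum(x ** 2)

def pso(f, dim=10, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    # Plain global-best PSO; parameter values are illustrative assumptions, not from the paper.
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_val = np.array([f(p) for p in pbest])   # personal best fitness values
    g = pbest[np.argmin(pbest_val)].copy()        # global best position

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia term + cognitive (personal best) term + social (global best) term.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                # position update, kept inside the search bounds
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

best_x, best_f = pso(sphere)
print(best_f)  # should approach 0 for the sphere function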

Keywords

#Particle swarm optimization #premature convergence #memory-based variants #PSOMR technique #MS-PSOMR technique
