
Document Type: Research Article

Authors

Department of Basic Sciences, Technical and Vocational University (TVU), Tehran, Iran.

DOI: 10.30473/coam.2025.74997.1317

Abstract

The conjugate gradient (CG) method is one of the simplest and most widely used approaches to unconstrained optimization; our focus is on two-dimensional problems, which arise in numerous practical applications. We devise three hybrid CG methods in which the hybridization parameter is constructed from the Barzilai–Borwein process, so that the weaknesses of each constituent method are offset by the strengths of the other. In each method, the CG parameter is a linear combination of two well-known CG parameters, blended by a scalar, which enables the new methods to solve the targeted problems efficiently. Under mild assumptions, we establish the descent property of the generated directions and prove the global convergence of the hybrid schemes. Numerical experiments on ten practical examples indicate that the proposed hybrid CG methods outperform standard CG methods on two-dimensional unconstrained optimization problems.
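As a compact illustration of the general scheme described in the abstract, the LaTeX sketch below writes out a generic hybrid CG iteration. It is not the paper's exact construction: the constituent parameters $\beta_k^{A}$ and $\beta_k^{B}$ and the rule producing the blending scalar $\theta_k$ from the Barzilai–Borwein (BB) step sizes are placeholders standing in for the method's actual choices.

    % Generic hybrid CG iteration (illustrative sketch, not the
    % paper's exact construction): beta_k blends two classical CG
    % parameters A and B via a scalar theta_k in [0,1].
    \begin{align*}
      x_{k+1} &= x_k + \alpha_k d_k, &
      d_k &= -g_k + \beta_k d_{k-1}, \quad d_0 = -g_0, \\
      \beta_k &= \theta_k\,\beta_k^{A} + (1-\theta_k)\,\beta_k^{B}, &
      \theta_k &\in [0,1].
    \end{align*}
    % Barzilai--Borwein step sizes, from which theta_k can be built:
    \[
      \alpha_k^{\mathrm{BB1}}
        = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}},
      \qquad
      \alpha_k^{\mathrm{BB2}}
        = \frac{s_{k-1}^{\top} y_{k-1}}{y_{k-1}^{\top} y_{k-1}},
      \qquad
      s_{k-1} = x_k - x_{k-1}, \quad y_{k-1} = g_k - g_{k-1}.
    \]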

Highlights

  • Three hybrid conjugate gradient (CG) methods for two-dimensional unconstrained optimization.
  • Each hybrid method constructs the CG parameter as a linear combination of two classical CG parameters, blended via a scalar, with the Barzilai–Borwein (BB) process guiding the hybridization (see the code sketch after this list).
  • The hybrids are designed to overcome the limitations of individual CG methods by pairing their complementary advantages.
  • Establishes the descent property of the generated search directions and proves global convergence under mild assumptions.
  • Numerical experiments on ten practical 2D problems demonstrate that the hybrid CG methods outperform standard CG in terms of iterations, function evaluations, and CPU time.
  • Demonstrates that combining CG strategies in pairwise configurations can consistently enhance iterative performance for unconstrained optimization.
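The sketch below (referenced in the highlights) shows one way such a hybrid could be implemented. It is a minimal illustration, not the paper's algorithm: the choice of Fletcher-Reeves and PRP+ as constituents, the blending rule theta_k = BB2/BB1, and the Armijo backtracking line search are all assumptions made for the sake of a runnable example.

    import numpy as np

    def hybrid_cg(f, grad, x0, tol=1e-6, max_iter=500):
        """Illustrative hybrid CG: beta_k is a convex combination of the
        Fletcher-Reeves (FR) and Polak-Ribiere-Polyak (PRP+) parameters,
        blended by a scalar theta_k built from the two Barzilai-Borwein
        (BB) step sizes. A sketch only; the paper's exact constituents
        and blending rule may differ."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g                                  # start with steepest descent
        for k in range(max_iter):
            if np.linalg.norm(g) <= tol:
                break
            # Backtracking Armijo line search along the descent direction d.
            alpha, c1, rho, fx = 1.0, 1e-4, 0.5, f(x)
            while f(x + alpha * d) > fx + c1 * alpha * g.dot(d) and alpha > 1e-12:
                alpha *= rho
            x_new = x + alpha * d
            g_new = grad(x_new)
            s, y = x_new - x, g_new - g         # BB difference vectors
            sy = s.dot(y)
            # Assumed blending rule: theta = BB2 / BB1, which lies in (0, 1]
            # when s'y > 0 (Cauchy-Schwarz); fall back to 0 otherwise.
            theta = (sy / y.dot(y)) / (s.dot(s) / sy) if sy > 0 else 0.0
            beta_fr = g_new.dot(g_new) / g.dot(g)               # Fletcher-Reeves
            beta_prp = max(g_new.dot(g_new - g) / g.dot(g), 0)  # PRP+
            beta = theta * beta_fr + (1 - theta) * beta_prp
            d = -g_new + beta * d
            if g_new.dot(d) >= 0:               # safeguard: restart if not descent
                d = -g_new
            x, g = x_new, g_new
        return x, k

    # Example: a classic 2D test problem (Rosenbrock).
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])
    x_star, iters = hybrid_cg(f, grad, x0=[-1.2, 1.0])
    print(x_star, iters)

One convenient property of the assumed blending rule: when s'y > 0, theta = BB2/BB1 = (s'y)^2 / (s's · y'y) lies in (0, 1] by the Cauchy-Schwarz inequality, so the combination is automatically convex.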

References

[1] Barzilai, J., Borwein, J.M. (1988). "Two-point step size gradient methods". IMA Journal of Numerical Analysis, 8(1), 141-148, doi:10.1093/imanum/8.1.141.
[2] Dai, Y.H., Liao, L.Z. (2001). "New conjugacy conditions and related nonlinear conjugate gradient methods". Applied Mathematics and Optimization, 43, 87-101, doi:10.1007/s002450010019.
[3] Dai, Y.H., Yuan, Y. (1999). "A nonlinear conjugate gradient method with a strong global convergence property". SIAM Journal on Optimization, 10(1), 177-182, doi:10.1137/S1052623497318992.
[4] Diphofu, T., Kaelo, P., Kooepile-Reikeletseng, S., Koorapetse, M., Sam, C.R. (2024). "Convex combination of improved Fletcher-Reeves and Rivaie-Mustafa-Ismail-Leong conjugate gradient methods for unconstrained optimization problems and applications". Quaestiones Mathematicae, 47(12), 2375-2397, doi:10.2989/16073606.2024.2367715.
[5] Fletcher, R. (2000). "Practical methods of optimization". 2nd ed., John Wiley & Sons, Chichester, doi:10.1002/9781118723203.
[6] Fletcher, R., Reeves, C.M. (1964). "Function minimization by conjugate gradients". The Computer Journal, 7(2), 149-154, doi:10.1093/comjnl/7.2.149.
[7] Hager, W.W., Zhang, H. (2005). "A new conjugate gradient method with guaranteed descent and an efficient line search". SIAM Journal on Optimization, 16(1), 170-192, doi:10.1137/030601880.
[8] Hager, W.W., Zhang, H. (2006). "A survey of nonlinear conjugate gradient methods". Pacific Journal of Optimization, 2(1), 35-58.
[9] Hestenes, M.R., Stiefel, E.L. (1952). "Methods of conjugate gradients for solving linear systems". Journal of Research of the National Bureau of Standards, 49(6), 409-436, doi:10.6028/jres.049.044.
[10] Liu, Y.L., Storey, C. (1991). "Efficient generalized conjugate gradient algorithms, Part 1: Theory". Journal of Optimization Theory and Applications, 69(1), 129-137, doi:10.1007/BF00940464.
[11] Mustafa, A.A. (2023). "New spectral LS conjugate gradient method for nonlinear unconstrained optimization". International Journal of Computer Mathematics, 100(4), 838-846, doi:10.1080/00207160.2022.2163165.
[12] Nocedal, J., Wright, S.J. (2006). "Numerical optimization". Springer-Verlag, New York, NY, doi:10.1007/978-0-387-40065-5.
[13] Polyak, B.T. (1969). "The conjugate gradient method in extremal problems". USSR Computational Mathematics and Mathematical Physics, 9(4), 94-112, doi:10.1016/0041-5553(69)90035-4.
[14] Polak, E., Ribière, G. (1969). "Note sur la convergence de méthodes de directions conjuguées". Revue française d'informatique et de recherche opérationnelle, Série rouge, 3(R1), 35-43.
[15] Rahpeymaii, F., Rostami, M. (2018). "A new hybrid conjugate gradient method based on eigenvalue analysis for unconstrained optimization problems". Control and Optimization in Applied Mathematics, 3(1), 27-43, doi:10.30473/coam.2019.44564.1108.
[16] Rivaie, M., Mamat, M., Abashar, A. (2015). "A new class of nonlinear conjugate gradient coefficients with exact and inexact line searches". Applied Mathematics and Computation, 268, 1152-1163, doi:10.1016/j.amc.2015.07.019.
[17] Rivaie, M., Mamat, M., June, L.W., Mohd, I. (2012). "A new class of nonlinear conjugate gradient coefficients with global convergence properties". Applied Mathematics and Computation, 218(22), 11323-11332, doi:10.1016/j.amc.2012.05.030.