Fixed Point Methods and Optimization

Electronic ISSN: 3008-1548

DOI: 10.69829/fpmo

Two modified self-adaptive dual ascent methods with logarithmic-quadratic proximal regularization for linearly constrained quadratic convex optimization

Fixed Point Methods and Optimization, Volume 2, Issue 1, April 2025, Pages 38–55

YUAN SHEN

School of Applied Mathematics, Nanjing University of Finance & Economics, Nanjing, 210023, P.R. China

MUYUN XU

School of Applied Mathematics, Nanjing University of Finance & Economics, Nanjing, 210023, P.R. China

CHANG LIU

Department of Accounting, Nanjing Vocational College of Finance and Economics, Nanjing, 210001, P.R. China

ZAIYUN PENG

School of Mathematics and Statistics, Chongqing JiaoTong University, Chongqing, 400074, P.R. China


Abstract

The dual ascent method (DAM) is an effective algorithm for a class of convex optimization problems with linear constraints. For problems with non-negative orthant constraints, the logarithmic-quadratic proximal (LQP) method works well by transforming the subproblems into nonlinear equations. In this article, the LQP term is applied to regularize the subproblems of DAM, yielding a DAM-LQP method for optimization problems with both linear and non-negativity constraints; the proposed method is further extended to separable convex optimization problems with two blocks. When the objective function is quadratic, convergence of the proposed methods can be better guaranteed; moreover, the subproblems can be solved in parallel when parallel computing devices are available, which greatly reduces the computation time per iteration. Numerical results are reported to demonstrate the efficiency of the proposed methods.
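As background, the classical dual ascent iteration for $\min_x f(x)$ subject to $Ax = b$, together with one standard form of the LQP regularization term (of Auslender–Teboulle–Ben-Tiba type), can be sketched as follows. This is generic background only; the authors' exact regularized subproblem, self-adaptive step-size rule, and parameter choices may differ:

```latex
% Classical dual ascent: primal minimization of the Lagrangian,
% then a gradient ascent step on the dual variable.
\[
x^{k+1} \in \arg\min_{x}\; f(x) + (\lambda^k)^{\top}(Ax - b),
\qquad
\lambda^{k+1} = \lambda^k + \alpha_k \,\bigl(A x^{k+1} - b\bigr),
\]
% One standard LQP distance added to the x-subproblem to enforce x > 0:
\[
D(x, x^k) = \sum_{j}\Bigl[\tfrac{1}{2}\bigl(x_j - x_j^k\bigr)^2
  + \mu\Bigl((x_j^k)^2 \ln\frac{x_j^k}{x_j} + x_j x_j^k - (x_j^k)^2\Bigr)\Bigr],
\quad \mu \in (0,1),\; x^k > 0.
\]
```

The logarithmic term acts as a barrier that keeps the iterates in the interior of the non-negative orthant, which is why the first-order optimality condition of the LQP-regularized subproblem reduces to a nonlinear equation, as stated in the abstract.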


Cite this Article as

Yuan Shen, Muyun Xu, Chang Liu, and Zaiyun Peng, Two modified self-adaptive dual ascent methods with logarithmic-quadratic proximal regularization for linearly constrained quadratic convex optimization, Fixed Point Methods and Optimization, 2(1), 38–55, 2025.