Lagrange-type Functions in Constrained Non-Convex Optimization

Bibliographic Details
Main Authors: Rubinov, Alexander M.; Xiao-qi Yang
Format: eBook
Language: English
Published: New York, NY : Springer US, 2003
Edition: 1st ed. 2003
Series: Applied Optimization
Collection: Springer Book Archives -2004 - Collection details see MPG.ReNa
LEADER 02716nmm a2200349 u 4500
001 EB000616670
003 EBX01000000000000000469752
005 00000000000000.0
007 cr|||||||||||||||||||||
008 140122 ||| eng
020 |a 9781441991720 
100 1 |a Rubinov, Alexander M. 
245 0 0 |a Lagrange-type Functions in Constrained Non-Convex Optimization  |h Elektronische Ressource  |c by Alexander M. Rubinov, Xiao-qi Yang 
250 |a 1st ed. 2003 
260 |a New York, NY  |b Springer US  |c 2003, 2003 
300 |a XIV, 286 p  |b online resource 
653 |a Operations Research, Management Science 
653 |a Operations research 
653 |a Optimization 
653 |a Management science 
653 |a Convex geometry  
653 |a Convex and Discrete Geometry 
653 |a Discrete geometry 
653 |a Mathematical optimization 
700 1 |a Xiao-qi Yang  |e [author] 
041 0 7 |a eng  |2 ISO 639-2 
989 |b SBA  |a Springer Book Archives -2004 
490 0 |a Applied Optimization 
028 5 0 |a 10.1007/978-1-4419-9172-0 
856 4 0 |u https://doi.org/10.1007/978-1-4419-9172-0?nosfx=y  |x Verlag  |3 Volltext 
082 0 |a 519.6 
520 |a Lagrange and penalty function methods provide a powerful approach, both as a theoretical tool and a computational vehicle, for the study of constrained optimization problems. However, for a nonconvex constrained optimization problem, the classical Lagrange primal-dual method may fail to find a minimum, as a zero duality gap is not always guaranteed. A large penalty parameter is, in general, required for classical quadratic penalty functions in order that minima of penalty problems are a good approximation to those of the original constrained optimization problems. It is well known that penalty functions with too large parameters cause an obstacle for numerical implementation. Thus the question arises of how to generalize classical Lagrange and penalty functions in order to obtain an appropriate scheme for reducing constrained optimization problems to unconstrained ones that will be suitable for sufficiently broad classes of optimization problems from both the theoretical and computational viewpoints. Some approaches for such a scheme are studied in this book. One of them is as follows: an unconstrained problem is constructed, where the objective function is a convolution of the objective and constraint functions of the original problem. While a linear convolution leads to a classical Lagrange function, different kinds of nonlinear convolutions lead to interesting generalizations. We shall call functions that appear as a convolution of the objective function and the constraint functions Lagrange-type functions.
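The abstract's contrast between quadratic penalties (which need a penalty parameter tending to infinity) and nonlinear convolutions of the objective and constraint functions can be illustrated on a toy problem. The sketch below is not taken from the book: the problem, the grid-search minimizer, and the particular l1 ("exact") penalty are illustrative assumptions chosen to make the effect visible.

```python
import numpy as np

# Toy problem:  min f(x) = x^2   s.t.  g(x) = 1 - x <= 0   (optimum x* = 1).
# Everything below is an illustrative sketch, not the book's construction.
f = lambda x: x ** 2
g = lambda x: 1.0 - x

xs = np.linspace(0.0, 2.0, 200_001)  # fine grid over the feasible region's neighborhood

def minimizer(penalized):
    """Return the grid point minimizing the penalized objective."""
    return xs[np.argmin(penalized(xs))]

c = 10.0  # a moderate penalty parameter

# Classical quadratic penalty: f(x) + c * max(0, g(x))^2.
# Its exact minimizer is c/(1+c) < 1, feasible only in the limit c -> infinity.
x_quad = minimizer(lambda x: f(x) + c * np.maximum(0.0, g(x)) ** 2)

# A nonsmooth l1 penalty -- one example of a nonlinear convolution of f and g:
# f(x) + c * max(0, g(x)).  Here any c >= 2 already yields the exact minimizer x* = 1.
x_l1 = minimizer(lambda x: f(x) + c * np.maximum(0.0, g(x)))

print(x_quad)  # about 0.909 = c/(1+c): still infeasible for the original problem
print(x_l1)    # 1.0: the exact constrained minimizer at a finite parameter
```

This is the computational point the abstract makes: with the quadratic penalty the unconstrained minimizer only approaches the constrained one as the parameter grows without bound, while a suitably chosen nonlinear convolution can recover it exactly at a finite, numerically benign parameter value.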