The possibilities of exploiting the special structure of d.c. programs, which consist in optimising the difference of two convex functions, are currently largely limited to variants of the DCA proposed by Pham Dinh Tao and Le Thi Hoai An in 1997. These assume that either the convex or the concave part, or both, are evaluated via one of their subgradients. In this paper we propose an algorithm which allows both the concave and the convex part to be evaluated via their proximal points. Additionally, we allow a smooth part, which is evaluated via its gradient. In the spirit of primal-dual splitting algorithms, the concave part might be the composition of a concave function with a linear operator, which are, however, evaluated separately. For this algorithm we show that every cluster point is a solution of the optimisation problem. Furthermore, we show the connection to the Toland dual problem and prove a descent property for the objective function values of a primal-dual formulation of the problem. Convergence of the iterates is shown if this objective function satisfies the Kurdyka–Łojasiewicz property. In the last part, we apply the algorithm to an image processing model.
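As context for the classical scheme referenced above, the sketch below illustrates a plain DCA iteration, in which the concave part is linearised at a subgradient and the convex subproblem is solved exactly. The toy objective, step count, and variable names are our own assumptions; this is not the double-proximal algorithm proposed in the paper.

```python
# Minimal DCA sketch for a toy d.c. objective (illustrative assumptions only):
#   minimise  g(x) - h(x),  with g(x) = 0.5*||x - a||^2 and h(x) = lam*||x||_1.
# Each iteration linearises the concave part -h at a subgradient y in dh(x)
# and solves the resulting convex subproblem in closed form.
import numpy as np

def dca(a, lam, x0, iters=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = lam * np.sign(x)   # a subgradient of h at the current iterate
        # argmin_x 0.5*||x - a||^2 - <y, x>  has the closed form  x = a + y
        x = a + y
    return x

print(dca(a=np.array([2.0, -0.3, 0.7]), lam=0.5, x0=np.array([1.0, 1.0, -1.0])))
```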
This paper concerns nonlinear semidefinite programming problems for which no convexity assumptions can be made. We derive first- and second-order optimality conditions analogous to those for nonlinear programming. Using techniques similar to those used in nonlinear programming, we extend existing theory to cover situations where the constraint matrix is structurally sparse. The discussion covers the case when strict complementarity does not hold. The regularity conditions used are consistent with those of nonlinear programming in the sense that the conventional optimality conditions for nonlinear programming are obtained when the constraint matrix is diagonal.
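For orientation, one generic way to write the first-order conditions alluded to above is the following (our notation, not necessarily the exact formulation used in the paper): for a smooth objective f and a smooth matrix-valued constraint map, a KKT point admits a positive semidefinite multiplier matrix that is complementary to the constraint.

```latex
% Generic first-order (KKT) conditions for a nonlinear SDP (notation assumed):
%   minimise f(x)  subject to  \mathcal{A}(x) \succeq 0,
% with f and \mathcal{A}: \mathbb{R}^n \to \mathbb{S}^m smooth.
\[
\frac{\partial f}{\partial x_i}(x^\star)
  - \Big\langle \Lambda,\ \frac{\partial \mathcal{A}}{\partial x_i}(x^\star) \Big\rangle = 0
  \quad (i = 1,\dots,n), \qquad
\Lambda \succeq 0, \qquad
\big\langle \Lambda,\ \mathcal{A}(x^\star) \big\rangle = 0,
\]
% where <.,.> is the trace inner product and \Lambda is the multiplier matrix.
```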
Computational methods are proposed for solving a convex quadratic program (QP). Active-set methods are defined for a particular primal and dual formulation of a QP with general equality constraints and simple lower bounds on the variables. In the first part of the paper, two methods are proposed, one primal and one dual. These methods generate a sequence of iterates that are feasible with respect to the equality constraints associated with the optimality conditions of the primal-dual form. The primal method maintains feasibility of the primal inequalities while driving the infeasibilities of the dual inequalities to zero. The dual method maintains feasibility of the dual inequalities while moving to satisfy the primal inequalities. In each of these methods, the search directions satisfy a KKT system of equations formed from Hessian and constraint components associated with an appropriate column basis. The composition of the basis is specified by an active-set strategy that guarantees the nonsingularity of each set of KKT equations. Each of the proposed methods is a conventional active-set method in the sense that an initial primal- or dual-feasible point is required. In the second part of the paper, it is shown how the quadratic program may be solved as a coupled pair of primal and dual quadratic programs created from the original by simultaneously shifting the simple-bound constraints and adding a penalty term to the objective function. Any conventional column basis may be made optimal for such a primal-dual pair of shifted-penalized problems. The shifts are then updated using the solution of either the primal or the dual shifted problem. An obvious application of this approach is to solve a shifted dual QP to define an initial feasible point for the primal (or vice versa). The computational performance of each of the proposed methods is evaluated on a set of convex problems from the CUTEst test collection.
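To make the KKT systems mentioned above concrete, here is a schematic sketch under our own assumptions (a generic convex QP, min 0.5 x'Hx + c'x subject to Ax = b and x >= 0, with B the index set of free/basic variables); it is an illustration of the generic active-set linear algebra, not the authors' implementation.

```python
# Schematic KKT solve of the kind used by active-set QP methods (assumed
# problem form: min 0.5 x'Hx + c'x  s.t.  Ax = b, x >= 0; B = free variables).
import numpy as np

def kkt_direction(H, A, g, B):
    """Return a search direction on the free variables B and a multiplier
    estimate by solving  [H_BB  A_B'; A_B  0] [p; y] = [-g_B; 0]."""
    HB, AB, gB = H[np.ix_(B, B)], A[:, B], g[B]
    m = AB.shape[0]
    K = np.block([[HB, AB.T], [AB, np.zeros((m, m))]])
    rhs = np.concatenate([-gB, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)          # assumes the KKT matrix is nonsingular
    return sol[:len(B)], sol[len(B):]      # (p_B, multiplier estimate; sign convention varies)
```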
Many practical optimization problems involve nonsmooth (that is, not necessarily differentiable) functions of thousands of variables. In the paper [Haarala, Miettinen, Mäkelä, Optimization Methods and Software, 19 (2004), pp. 673-692] we described an efficient method for large-scale nonsmooth optimization. In this paper, we introduce a new variant of this method and prove its global convergence for locally Lipschitz continuous objective functions, which are not necessarily differentiable or convex. In addition, we give some encouraging results from numerical experiments.
We study the maximum budgeted allocation problem, i.e., the problem of selling a set of m indivisible goods to n players, each with a separate budget, such that we maximize the collected revenue. Since the natural assignment LP is known to have an integrality gap of 3/4, which matches the best known approximation algorithms, our main focus is to improve our understanding of the stronger configuration LP relaxation. In this direction, we prove that the integrality gap of the configuration LP is strictly better than 3/4, and provide corresponding polynomial-time roundings, in the following restrictions of the problem: (i) the restricted budgeted allocation problem, in which all the players have the same budget and every item has the same value for any player it can be sold to, and (ii) the graph MBA problem, in which an item can be assigned to at most 2 players. Finally, we improve the best known upper bound on the integrality gap for the general case and also prove hardness of approximation results for both cases.
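For reference, the natural assignment LP mentioned above can be written as follows (our notation: b_ij is the bid of player i for item j and B_i is player i's budget); the stronger configuration LP instead has a variable for each player and each feasible bundle of items.

```latex
% The natural assignment LP for maximum budgeted allocation (notation assumed):
% b_{ij} = bid of player i for item j, B_i = budget of player i.
\[
\max \ \sum_{i,j} b_{ij}\, x_{ij}
\quad \text{s.t.} \quad
\sum_{i} x_{ij} \le 1 \ \ \forall j, \qquad
\sum_{j} b_{ij}\, x_{ij} \le B_i \ \ \forall i, \qquad
x \ge 0.
\]
```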