Abstract
We propose a random coordinate descent algorithm for optimizing a non-convex objective
function subject to one linear constraint and simple bounds on the variables. Although it
is common to update only two random coordinates per iteration of a
coordinate descent algorithm, our algorithm allows updating an arbitrary number of coordinates. We provide a proof of convergence of the algorithm, whose convergence rate
improves as more coordinates are updated per iteration. Numerical experiments
on large-scale instances of different optimization problems show the benefit of updating many
coordinates simultaneously.