Matrix scaling in linear and convex programming
Bahman Kalantari
Diagonal scaling in linear
programming became a prominent technique with
the announcement of Karmarkar's
algorithm. In the context of nonnegative
matrices, doubly stochastic diagonal scaling
had been a subject of
research for more than half a century prior
to the emergence of
interior-point methods for LP. The link between
these two otherwise
disjoint fields is diagonal scaling of a
symmetric positive semidefinite
matrix into a doubly quasi-stochastic
matrix. The complexity of the
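As a concrete illustration (not part of the talk itself), the classical
doubly stochastic scaling of a nonnegative matrix can be sketched with a
Sinkhorn-style fixed-point iteration. The sketch below assumes a symmetric
matrix with strictly positive entries and seeks a positive diagonal D so
that DAD has unit row and column sums; the function name and iteration
count are illustrative choices, not notation from the talk.

```python
import numpy as np

def symmetric_sinkhorn(A, iters=500):
    """Return a positive vector d such that diag(d) @ A @ diag(d) is
    (approximately) doubly stochastic. Assumes A is symmetric with
    strictly positive entries; name and iteration count are illustrative."""
    d = np.ones(A.shape[0])
    for _ in range(iters):
        # Fixed point of d = sqrt(d / (A d)) satisfies d_i * (A d)_i = 1,
        # i.e. every row (and, by symmetry, column) of D A D sums to 1.
        d = np.sqrt(d / (A @ d))
    return d

A = np.array([[2.0, 1.0], [1.0, 3.0]])
d = symmetric_sinkhorn(A)
S = np.diag(d) @ A @ np.diag(d)
print(S.sum(axis=0))  # each column sum is approximately 1
```

For a positive semidefinite matrix with entries of mixed sign, which is the
case relevant to LP, the target is the doubly quasi-stochastic scaling
studied by Khachiyan and Kalantari, and this simple iteration is not
guaranteed to apply.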
latter problem was studied by Khachiyan and Kalantari who also
used it
to give, as a by-product, a simple
path-following algorithm for
LP. In this talk we wish to promote and justify
the significance and relevance of positive semidefinite matrix scaling
not only in linear programming, but also of its
generalization to semidefinite and
self-concordant programming. We do so from
pedagogical, foundational, theoretical,
algorithmic, and possibly even
practical points of view.