11/6 Strongly polynomial time algorithm found for linear programming.
News at 11.
\_ Uhm, that was 1979. Or are you using some non-standard definition
of "strongly"?
\_ The 1979 ellipsoid algorithm was only weakly polynomial: its
running time is polynomial in the total number of bits of the
input, so it grows with the bit lengths of the numbers in the
constraint matrix, not just with the matrix's dimensions. The
algorithm in the news is claimed to be strongly polynomial: the
number of arithmetic operations is bounded by a polynomial in
the dimensions alone, regardless of how big the numbers are, so
it is polynomial in both measures. That's the standard
definition. What definition are you using?
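Concretely, the two size measures are roughly the following
(a toy Python sketch, not anyone's formal definition):

    # Two ways to measure the "size" of an integer constraint
    # matrix A, given as a list of rows.  Roughly: a strongly
    # polynomial bound on the arithmetic operation count may
    # depend only on the first; a weakly polynomial bound may
    # also depend on the second.
    def dimension_size(A):
        # number of entries: m * n
        return len(A) * len(A[0])

    def encoding_length(A):
        # total bits needed to write the entries down
        return sum(abs(x).bit_length() + 1 for row in A for x in row)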
\_ The same one; I hadn't considered the numerical behavior
of the data. So, enlighten me, are you saying the ellipsoid
algorithm can't handle in polytime, say, a matrix with
floats in [0,1]?
\_ No, what the ellipsoid algorithm can't do is solve an LP in
time independent of how big the numbers in the matrix are: its
running time grows with the number of bits needed to write the
entries down. An easy example of that kind of distinction is
factoring every entry of a vector of integers. If the entries
are small, this is clearly solvable in time linear in the
length of the vector. The problem is that factoring integers
is hard, so there is no known algorithm whose runtime is
polynomial in the number of bits of the largest entry. The
ellipsoid algorithm has a (much milder) version of the same
dependence on the numbers in the constraint matrix.
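A toy version of that vector-factoring example, just to make
the two measures concrete (trial division, nothing clever):

    def factor(n):
        # factor a positive integer by trial division
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    def factor_vector(vec):
        # if every entry is bounded by a small constant, this is
        # linear in len(vec); but the work per entry grows roughly
        # like sqrt(n), i.e. exponentially in the entry's bit length
        return [factor(n) for n in vec]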
\_ You misunderstood my question. Yes, I know what you
mean by "strongly polytime." What about LP over data
in a fixed range of reals?
mean by "strongly polytime." What about LP when all
parameters are in a fixed range of reals?
\_ I am not sure. A float that is very small, or has high
precision, can still take a lot of bits to represent exactly.
The ellipsoid algorithm's running time grows with the number
of bits needed to write the entries down, so restricting them
to [0,1] doesn't by itself make it strongly polynomial: a
number can need a lot of bits whether it is large or small.
(Note that this isn't an issue for factoring, where only the
magnitude matters.)
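For what it's worth, a quick check (Python, assuming the
entries are given as ordinary binary floats) shows that a
number in [0,1] can already need a lot of bits as an exact
rational:

    from fractions import Fraction

    x = 0.1                      # a "small, simple" entry in [0,1]
    r = Fraction(x)              # exact value of the stored double
    print(r)                     # 3602879701896397/36028797018963968
    print(r.denominator.bit_length())  # 56 bits for the denominator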
\- is this one of stephen smale's problems for the next
century? --psb