Berkeley CSUA MOTD:Entry 20547
2001/2/9-11 [Computer/Theory] UID:20547 Activity:high
2/8     I have 20 positive numbers and five desired totals (all floating point
        numbers).  I need to pick five mutually exclusive sets (not necessarily
        exhaustive) of numbers from the 20 numbers, and then sum up the numbers
        in the five sets to get five totals.  My goal is to minimize the
        mean-of-squares of the differences between the five totals and the five
        desired totals.  I wrote a C program to do the brute-force try-all-
        combinations method (that's (5+1)^20 combinations), and it looks like
        it's going to need ~120 years of CPU time on my P-II 350MHz!  Is there
        any better algorithm for doing this kind of thing?  Thanks.  -- yuen
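The brute-force search described above can be sketched as follows, shrunk to 9 numbers and 3 targets so it finishes instantly (the full (5+1)^20 search is what takes ~120 years). The sample numbers and targets are made up for illustration; each number is assigned to one of the target bins or left out entirely:

```python
import itertools

def brute_force(nums, targets):
    """Try every assignment of each number to one of the k target
    bins, or to no bin at all; return the minimum mean-of-squares
    error and the best assignment found."""
    k = len(targets)
    best_err, best_assign = float("inf"), None
    # (k+1)^n assignments: each number goes to bin 0..k-1, or k = unused
    for assign in itertools.product(range(k + 1), repeat=len(nums)):
        sums = [0.0] * k
        for x, b in zip(nums, assign):
            if b < k:
                sums[b] += x
        err = sum((s - t) ** 2 for s, t in zip(sums, targets)) / k
        if err < best_err:
            best_err, best_assign = err, assign
    return best_err, best_assign

# toy instance: 4^9 = 262,144 assignments instead of 6^20
nums = [3.1, 4.1, 5.9, 2.6, 5.3, 5.8, 9.7, 9.3, 2.3]
targets = [10.0, 12.0, 15.0]
err, assign = brute_force(nums, targets)
```

The same loop at n = 20, k = 5 is exactly the 6^20 enumeration that blows up; the point of the replies below is how to avoid it.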
        \_ why isn't it 20!/(20-5)! ?
        \_ This is known as a 'bin packing problem.'  It is NP-hard (in other
           words it is at least as hard as any problem in NP). -- ilyas
        \_ exclusive means (20! / (5! * (20-5)!)) right?
           \_ No, that would be the # of combinations to pick one set of five
              numbers from the 20 numbers.  But in my case I need to pick five
               exclusive (i.e. non-overlapping) sets instead of one, and each
              set can contain anywhere from zero to 20 numbers.  -- yuen
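The counts being debated can be checked directly: assigning each of the 20 numbers to one of the 5 sets or leaving it out gives (5+1)^20 assignments, while 20-choose-5 only counts one five-element subset and 20!/(20-5)! counts ordered picks of five:

```python
from math import comb, factorial

n, k = 20, 5
assignments = (k + 1) ** n                   # each number -> one of 5 sets, or unused
one_subset = comb(n, k)                      # C(20,5): a single 5-number set
ordered = factorial(n) // factorial(n - k)   # 20!/(20-5)!: ordered picks of 5
```

6^20 is about 3.66 * 10^15, which at even a million assignments per second is over a century of CPU time, matching yuen's estimate.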
        \_ Can you use an approximate solution?  I think of this problem
           as being more similar to integer programming.  It's equivalent
           to min_A ||Ax-b||^2, such that 1^T A <= 1, where x is your
           vector of 20 positive numbers, b is the vector of five desired sums,
           and
           A is a 5x20 matrix with the entries constrained to be either
           zero or 1.  You won't be able to reach the exact global optimum
           in poly time, but perhaps you can try adding an L_1 constraint
           to the objective?  i.e.
               min_A ||Ax-b||^2 + lambda*sum_ij |A_ij|, s.t. 1^T A <= 1
           The L_1 penalty punishes non-zero values, so A will have as
           few non-zero values as possible.  Of course you're still
           left with the problem of deciding which entries of A to clamp
           to 1...  Uh, need more optimization fu.  -- alice
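A minimal sketch of the relaxation above: the 0/1 constraint on A is dropped to [0,1], gradient steps run on ||Ax-b||^2 + lambda*sum|A_ij|, a crude column renormalization stands in for an exact projection onto 1^T A <= 1, and a rounding step recovers 0/1 entries at the end. The step size, lambda, iteration count, and rounding rule are all guesses, not part of alice's formulation, and nothing here guarantees closeness to the true optimum:

```python
def relax_and_round(x, b, lam=0.01, step=1e-3, iters=5000):
    """Relaxed gradient descent on ||Ax - b||^2 + lam * sum|A_ij|
    with entries of A kept in [0,1] and column sums pushed <= 1,
    then rounded back to disjoint 0/1 sets."""
    k, n = len(b), len(x)
    A = [[0.5] * n for _ in range(k)]          # start in the middle of [0,1]
    for _ in range(iters):
        # residuals r_i = (Ax - b)_i
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(k)]
        for i in range(k):
            for j in range(n):
                g = 2.0 * r[i] * x[j] + lam    # gradient of objective wrt A_ij
                A[i][j] = min(1.0, max(0.0, A[i][j] - step * g))
        for j in range(n):                     # enforce 1^T A <= 1, crudely
            s = sum(A[i][j] for i in range(k))
            if s > 1.0:
                for i in range(k):
                    A[i][j] /= s
    # rounding: give number j to its strongest set, if strong enough
    assign = [max(range(k), key=lambda i: A[i][j]) for j in range(n)]
    sets = [[j for j in range(n) if assign[j] == i and A[i][j] > 0.5]
            for i in range(k)]
    return sets

x = [3.1, 4.1, 5.9, 2.6, 5.3, 5.8, 9.7, 9.3, 2.3]  # toy instance
b = [10.0, 12.0, 15.0]
sets = relax_and_round(x, b)
```

The rounding step is exactly the "which entries of A to clamp to 1" problem alice mentions; this sketch just clamps greedily.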
           \_ If my memory serves me right, LP relaxation isn't all that great
              for bin-packing. The first reference point to check would be
              _Approximation_Algorithms_For_NP-Hard_Problems_, edited by Dorit
              Hochbaum:
              http://www.ieor.berkeley.edu/~hochbaum/html/book-aanp.html
              It should be available at any university library by now. Chapter
              2, written by Coffman, Garey, and Johnson (yes, THOSE Garey and
              Johnson), is "Approximation Algorithms for Bin Packing: A Survey"
              -alexf
              \_ I'm not up to date with NP approximation algorithms,
                  but LP might be easier (and more practical) to
                  implement than more complicated methods.  BTW, one can
                 express 0/1 constraints using (2A_ij-1)^2 = 1.  The
                 problem then is no longer LP, but you can get a lower
                 bound using Lagrange relaxation (i.e.  switching min & max),
                 then use gradient descent or something on the lower bound
                 function, which is usually much more tame than the original
                 problem.  This all sounds very complicated but is actually
                 doable.  I just don't know how good the approximation is.
                 Maybe the book will say something about it.  -- alice
                 \_ Thanks for all the suggestions.  Understanding what you
                    all said is already hard enough.  Gotta refresh my
                    170-series knowledge first before I go from here.  -- yuen
                 \_ can you think of a useful lower bound for what you're
                    suggesting? you are talking about lower bounding
                    |Ax-b|^2 + l*(|2A-1|^2-1) right? I'd bet that your
                    regularization term smooths out the surface so much
                     that you could just solve this using gradient
                     descent or Newton-Raphson.
                 \_ this is one smart broad.