The Secret of the Simplex Algorithm Discovered

prostoalex writes "While the Simplex algorithm is one of the most widely used algorithms in network optimization, the reason for its efficiency has so far not been well understood. Daniel Spielman and Shang-Hua Teng discovered the secret of why the Simplex algorithm works so well by introducing imprecision into the worst-case analysis. Their article will be published in the Journal of the ACM, although MIT Technology Review at the aforementioned link quotes Spielman expressing his doubts whether anyone will be able to make it through the 80-page document filled with equations and formal explanations of the method."
  • Oh really ? (Score:2, Interesting)

    by Networkpro ( 602073 )
    Euler was wrong?? I don't think that argument holds water. While 80 pages is long for a proof, it looks like more of a philosophical diatribe than anything else.
  • by recursiv ( 324497 ) on Wednesday May 28, 2003 @12:06PM (#6058408) Homepage Journal
    But what is the simplex algorithm? Or at least, what problem does it solve? I've never heard of it before, and the linked description is either not describing what the algorithm actually does or is too dense for me to understand.

    The idea of an algorithm that is used on all kinds of major networks, but no one knows why it works sounds rather intriguing, but can anyone offer any background?

    Thanks in advance.
    • by grmoc ( 57943 ) on Wednesday May 28, 2003 @12:14PM (#6058529)
      The simplex algorithm is an algorithm for finding an optimal solution to a set of inequalities. (i.e. a system of constraints)

      I say 'an optimal solution' because there can be more than one.

      If viewed in space, the solution set for a simplex problem is a convex, generally closed region of space. It isn't necessarily closed or bounded, but it is always convex, never concave.

      The simplex algorithm is an algorithm whereby some of the points/vertices of that solution space are visited in a search, until an optimal vertex (i.e. solution) is discovered.

      It isn't completely fool-proof: you can get into states where the algorithm bounces between vertices forever (cycling), but for most well-stated linearly constrained problems (in any number of dimensions, really), the simplex method is a good way to find an optimal solution.

      Sorry for the rather rambling explanation.

      If you're truly interested, do a web search for "simplex method".
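      (A toy instance, mine rather than the poster's, to make the vertex-walking concrete: maximize x + y subject to x <= 2, y <= 3, x >= 0, y >= 0. The feasible region is the rectangle with corners (0,0), (2,0), (2,3) and (0,3). Starting at (0,0), simplex moves to an adjacent corner that improves x + y, say (2,0), then to (2,3); no neighbouring corner does better, so (2,3) is optimal.)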
      • by ChadN ( 21033 ) on Wednesday May 28, 2003 @01:16PM (#6059192)
        One of the more interesting pieces of 'lore' about the inventor of the simplex method, George Dantzig, is that while in graduate school he showed up late for class and copied down the pair of problems on the board, thinking they were an assignment.

        He struggled for a while, and finally turned his results in to the professor. However, the problems had been two famous unsolved problems in the field, and Dantzig had effectively solved them.

        While perhaps embellished (and not the only example of students solving 'unsolved' problems), it has become a parable for showing the power of unprejudiced thought. Occasionally it is mis-stated that he invented the simplex method to solve these problems, when in fact, he did that a couple of years afterwards.
        • You happen to have any of the additional examples of this you mentioned handy?
          • by ChadN ( 21033 ) on Wednesday May 28, 2003 @10:48PM (#6064575)
            Another famous one in the computer science field was Huffman coding. As a graduate student, David Huffman (and the rest of the class) was given the choice early on of taking the final or doing a report on one of a few problems the professor outlined. The problems seemed straightforward, but the professor (Robert Fano of M.I.T., I believe) had assigned them because they were unsolved problems; the idea being that the students would do the research, figure out that they were hard problems, and give a report on what research had been done to try to solve them.

            Huffman didn't do the research; he just spent weeks thinking about it, and eventually, when he had decided to give up and study for the test, the answer came to him and he realized the solution was really quite trivial (the famous "greedy algorithm" for constructing optimal minimum-redundancy codes).

            On the day of presentation, Huffman strode to the board, described the problem to the class, and totally unaware that it had not previously been solved (despite much effort), wrote the simple algorithm for constructing the codes on the chalkboard.

            The professor was, understandably, stunned.
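            (An aside, not part of the original thread: for anyone who hasn't seen it, the greedy algorithm really is short enough to sketch in a few lines of Python. Repeatedly merge the two least frequent subtrees; the paths through the final tree are the codewords.)

            import heapq
            from collections import Counter

            def huffman_code(text):
                # Build a min-heap of (frequency, tiebreaker, subtree) entries;
                # the unique tiebreaker keeps heapq from comparing subtrees.
                freq = Counter(text)
                heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
                heapq.heapify(heap)
                count = len(heap)
                # Greedy step: repeatedly merge the two least frequent subtrees.
                while len(heap) > 1:
                    f1, _, t1 = heapq.heappop(heap)
                    f2, _, t2 = heapq.heappop(heap)
                    heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
                    count += 1
                # Walk the tree, assigning 0 to left branches and 1 to right.
                codes = {}
                def walk(tree, prefix):
                    if isinstance(tree, tuple):
                        walk(tree[0], prefix + "0")
                        walk(tree[1], prefix + "1")
                    else:
                        codes[tree] = prefix or "0"
                walk(heap[0][2], "")
                return codes

            print(huffman_code("abracadabra"))  # frequent symbols get shorter codes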
      • by zog karndon ( 309839 ) on Wednesday May 28, 2003 @01:16PM (#6059198)
        The interesting thing to me is that according to theoretical analysis, the simplex method ought to have exponential complexity in the worst case. (Hence, all the fuss about Karmarkar's algorithm, which has provably polynomial complexity.) But for almost all problems, the simplex method actually runs in polynomial time. What the researchers have discovered is why simplex runs as well as it does.
    • by p2sam ( 139950 ) on Wednesday May 28, 2003 @01:12PM (#6059149)
      Disclaimer: IANAM (I am not a mathematician)

      The simplex method solves a class of problems called "Linear Programming" or simply "LP". Many different kinds of network or graph theory problems can be phrased as an LP.

      An LP consists of an objective function and a number of linear constraints. For any given LP, there are 3 possibilities:

      1. there does not exist a feasible solution,
      2. the LP is unbounded, or
      3. there exists an optimal solution.

      The goal is to determine an assignment to the variables in the linear system so that the objective value is maximized (or minimized) while satisfying every linear constraint.

      There are other algorithms for solving LP, such as the cutting-plane algorithm. But the simplex algorithm exhibits many useful properties over other algorithms. For example, if the solution of a linear system has already been computed, and if the system changes slightly, one can compute a new solution quickly from the old solution, instead of recomputing from scratch. This is obviously quite useful for continuously updating routing tables on a network.

      Some examples of LP are:

      - single source shortest path (think routing)
      - maximum st-flow
      - minimum cost flow
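      (A concrete sketch of the above, my addition rather than the poster's: a tiny LP solved with SciPy's linprog. The specific numbers are made up for illustration.)

      from scipy.optimize import linprog

      # Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
      # linprog minimizes, so the objective coefficients are negated.
      c = [-3, -2]
      A_ub = [[1, 1],
              [1, 3]]
      b_ub = [4, 6]

      result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
      print(result.x, -result.fun)  # optimal corner (4, 0), objective value 12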
    • This might be considered redundant, but this is the only explanation of the simplex method I can comprehend. IANA Linear Programmer, so I may be wrong. Bear that in mind.

      The simplex algorithm is a way of solving Linear Programming problems. Linear Programming problems require you to find an optimal solution for a series of constraints.

      An example might be:

      You are a baker, and you have 20 pounds of flour and a dozen eggs. You can make either loaves of bread (requiring four pounds of flour each) or c
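      (The comment is cut off here. One way it might continue, with the second product and all the profit numbers assumed purely for illustration: ...or cakes, requiring five pounds of flour and three eggs each. If bread earns 3 and cakes earn 4, the LP is: maximize 3B + 4K subject to 4B + 5K <= 20 (flour), 3K <= 12 (eggs), B >= 0, K >= 0, where B is loaves and K is cakes. The simplex method walks the corners of that feasible region to the most profitable production mix.)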
  • So sad... (Score:5, Insightful)

    by Anonymous Coward on Wednesday May 28, 2003 @12:21PM (#6058608)
    Does anyone else here think that it's sad that the number of comments for this story, which represents a significant breakthrough in mathematics and information theory, is less than 5, whereas the once-every-three-days stories about how Microsoft is screwing over their customers or how some newfangled thing for Linux has been released always generate hundreds of comments?
    • by Piquan ( 49943 ) on Wednesday May 28, 2003 @12:50PM (#6058912)

      I do. That's why I posted this comment... to bring the count up to 5.

    • Re:So sad... (Score:4, Informative)

      by Randolpho ( 628485 ) on Wednesday May 28, 2003 @12:54PM (#6058951) Homepage Journal
      I wish I had mod points so I could mod you up. I agree. However, I think the problem is that this was posted to developers.slashdot.org and not to slashdot.org. 90% of the people on slashdot will never see this article.
      • However, I think the problem is that this was posted to developers.slashdot.org and not to slashdot.org. 90% of the people on slashdot will never see this article.

        That and the subject matter is so specific and technical that most nerds (much less ordinary people) can't follow it. Really it's good reading for a nerd's nerd. I have to admit that I can't understand it.

        I'm not WORTHY!
        I'm not WORTHY!
        I'm not WORTHY!

    • most of us working on a Masters in Mathematics are too busy to comment, sorry.
    • Hell, the only reason I read the topic was I thought it was about simplex locks...if I had only known...now my friggin' head hurts!
    • Not that many undergrad CS students are exposed to the wonders of linear programming and simplex. So it's not altogether surprising that it does not attract that much interest.

      Another factor may be that until you get a ways into advanced problems, most of the applications of linear programming and the simplex method are pretty much cookbook these days - but most courses in linear programming are a semester long - so it's hard to tell a CS major that taking a course in LP is more important than taking a course

    • So many people post to the M$-sucks articles because that is the opinion of the masses here (stating the obvious, I know). But frankly, not many of us can sit down, read 80 pages of graduate-level mathematics, and have a strong opinion on this.

      Maybe this says something about the "Me hate Microsoft *grunt*" mentality, but you can't expect 300+ posts on something of this complexity and relative obscurity. If you think it's hard to talk to the general public about OSS and computing topics, try explaining

      • This is going to sound awful so I'm just going to say it and not try to finesse it.

        You don't need to know anything to bash MS. Partially it's because there's not a lot there, mostly it's because they are in your face every day of every week, and the rest is because everyone has an opinion.

        On the other hand, this stuff isn't opinion driven and once the algorithm is embedded in a Cisco router no one needs to know anything about it.

        Maybe slashdot needs a section like "Fifty years ago today".

  • by garyebickford ( 222422 ) <gar37bic@@@gmail...com> on Wednesday May 28, 2003 @01:24PM (#6059293)
    The simplex method is widely used in Operations Research and many classic 'computing' problems. For example, IIRC the "Shortest Route" problem is handled this way ("solved" is too strong a word, for reasons explained in the previous post). For instance, if your business has three warehouses, 20 trucks and 10 major customers, how do you determine the cheapest way to supply all the customers with the goods they need? The 'best result' often seems counterintuitive.

    It's also used by the airlines to figure out how to schedule planes. There are uses in physics, social sciences, etc. I don't know any offhand, but I wouldn't be surprised if at one time it was used to assist in scheduling computing resources. It's also widely used in complex pricing models.

    It's also a way to decide, given 10 women and 10 men (or any number), which ones are most likely to get along with each other based on their preferences and characteristics. :O)
    • Linear programming has become a necessary part of basic mathematical literacy. Most secondary-level Algebra 2 books include it with some rudimentary graphical methods for solving problems. I'd be surprised if most slashdotters didn't have some knowledge of linear programming.
      • Linear programming has become a necessary part of basic mathematical literacy. Most secondary-level Algebra 2 books include it with some rudimentary graphical methods for solving problems. I'd be surprised if most slashdotters didn't have some knowledge of linear programming.

        I think math is more like law. Rather than trying to learn it all and remember it for 30 years after the fact, one should know the basic concepts, but be able to contact a consultant for the times they actually need something specific
  • by kgp ( 172015 ) on Wednesday May 28, 2003 @01:28PM (#6059328)
    For those of you interested in the pre-print of the 84-page paper, you need not wait for JACM to publish it.

    "Smoothed Analysis of Algorithms: Why the Simplex Algorithm Usually Takes Polynomial Time" by Daniel A. Spielman and Shang-Hua Teng

    http://arxiv.org/abs/cs.DS/0111050 [arxiv.org]

    Quote (from the intro):

    We propose an analysis that we call smoothed analysis which can help explain the success of many algorithms that both worst-case and average-case analysis cannot. In smoothed analysis, we measure the performance of an algorithm under slight random perturbations of arbitrary inputs. In particular, we consider Gaussian perturbations of inputs to algorithms that take real inputs, and we measure the running times of algorithms in terms of their input size and the variance of the Gaussian perturbations.

    We show that the simplex method has polynomial smoothed complexity. The simplex method is the classic example of an algorithm that is known to perform well in practice but which takes exponential time in the worst case[KM72, Mur80, GS79, Gol83, AC78, Jer73, AZ99]. In the late 1970's and early 1980's the simplex method was shown to converge in expected polynomial time on various distributions of random inputs by researchers including Borgwardt, Smale, Haimovich, Adler, Karp, Shamir, Megiddo, and Todd[Bor80, Bor77, Sma83, Hai83, AKS87, AM85, Tod86]. However, the last 20 years of research in probability, combinatorics and numerical analysis have taught us that the random instances considered in these analyses may have special properties that one might not find in practice.
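    (To illustrate what "Gaussian perturbations of inputs" means mechanically, here is a toy sketch, mine and not the paper's; smoothed analysis studies the expected running time over such perturbations rather than prescribing this as a practical recipe. SciPy, the helper name and the specific numbers are all my assumptions.)

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)

    def solve_smoothed(c, A_ub, b_ub, sigma=1e-3):
        # Add Gaussian noise of standard deviation sigma to an arbitrary
        # ("adversarial") constraint system, then solve the perturbed LP.
        A = A_ub + rng.normal(0.0, sigma, size=A_ub.shape)
        b = b_ub + rng.normal(0.0, sigma, size=b_ub.shape)
        return linprog(c, A_ub=A, b_ub=b)

    c = np.array([-1.0, -1.0])
    A_ub = np.array([[1.0, 1.0], [1.0, 3.0]])
    b_ub = np.array([4.0, 6.0])
    print(solve_smoothed(c, A_ub, b_ub).x)  # close to the unperturbed optimum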

    • by DaoudaW ( 533025 ) on Wednesday May 28, 2003 @02:08PM (#6059718)
      From the paper:
      Another way of thinking about smoothed complexity is to observe that if an algorithm has low smoothed complexity, then one must be unlucky to choose an input instance on which it performs poorly.

      If my first scan of the paper yielded anything worth reporting, it's the idea that even though a worst case may take exponential time to solve, there is always a neighborhood around the worst case that can be solved in polynomial time and has (almost) the same solution. In other words, you can always find a good-enough solution quickly.

      BTW, those of you who described the idea as largely philosophical were wrong! The paper is mathematically rigorous. It uses the 80 pages to develop the theorems necessary to prove the idea.
      • Does this mean we should randomly perturb our quicksort inputs to help them sort in polynomial time? That intuitively makes sense. For a worst-case quicksort input (typically a reverse-sorted input), you could randomly permute the values to make it O(N log N) rather than O(N). Cool.
        • Just one thing. O(N) is less than O(N log N) so I assume that you meant the worst case of quicksort is O(N^2). I never studied its complexity, so I really don't know.

          This could work, I believe, but only if quicksort exhibits the same sort of, let's say, "poly-time affinity" (for lack of a better description) that the simplex method does. Not all algorithms have this characteristic.

          It is worth looking into, though. If this is (statistically) true, one could try to solve in parallel 3 or 4 "perturbed" quicksort
        • Yeah, I think this is actually done quite commonly when a semi-orderly input is expected.
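          (For the record, the usual trick is simply to shuffle before sorting. A minimal sketch, not from the thread:)

          import random

          def randomized_quicksort(a):
              # Shuffling first destroys any adversarial ordering, so the
              # expected time is O(N log N) on every input and the O(N^2)
              # worst case becomes astronomically unlikely.
              a = list(a)
              random.shuffle(a)
              _qsort(a, 0, len(a) - 1)
              return a

          def _qsort(a, lo, hi):
              if lo >= hi:
                  return
              # Lomuto partition around the last element.
              pivot, i = a[hi], lo
              for j in range(lo, hi):
                  if a[j] <= pivot:
                      a[i], a[j] = a[j], a[i]
                      i += 1
              a[i], a[hi] = a[hi], a[i]
              _qsort(a, lo, i - 1)
              _qsort(a, i + 1, hi)

          print(randomized_quicksort([9, 8, 7, 6, 5, 4, 3, 2, 1]))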
    • It sounds pretty similar to what Smale and others proved decades ago, that the Simplex algorithm's running time is polynomial except on a measure-zero set of problem instances.

      Smale, Blum et al have a really good book called "Complexity and Real Computation" which analyzes the Simplex Method and other algorithms at some length. It tries to build up a theoretical foundation for complexity analysis of numerical algorithms (where the objects being computed are real numbers), just like traditional complexity theory does for discrete computation.

  • by jd ( 1658 ) <imipak AT yahoo DOT com> on Wednesday May 28, 2003 @01:33PM (#6059395) Homepage Journal
    Oh wow! This takes me waaay back, to when I was an undergraduate.

    Simplex, for those who aren't familiar with it, is a method of solving linear inequalities by representing the inequalities as a set of vectors which describe the outer bounds of the valid problem space. All space within those bounds is a "valid" solution to the inequalities.

    Simplex assumes that some of the inequalities pull against each other, i.e. that improving one variable will worsen one or more others.

    The method works by starting off in some corner, and then progressing round the outer bounds until an optimal solution is achieved.

    Operational Research is the science of applying the Simplex method to real-world problems. Early uses of OR (and, indeed, where the name originates) were in World War II, where the problem was to commit the fewest possible resources to achieve the greatest possible result with the fewest possible Allied casualties.

    (Too few resources, and the enemy would likely be able to inflict more damage. Too many resources would reduce that damage, but would also reduce the number of operations you could perform.)

    Modern uses of OR include production of prefabricated components from some material M, such that you get the most components out, and maximise the amount of M that is usable for some other production work, rather than having it as waste, while (at the same time) keeping additional production and processing costs below the savings made from more efficient use of the material.

    In this case, the number of components (N) is one inequality. You need to equal or exceed the ordered number.

    M is also an inequality - you want to order strictly less than you were ordering before, using the old process, or you've gained nothing.

    M' (the usable remainder) is an inequality, greater than or equal to 0 and less than M - W.

    W (the waste) is the fourth inequality, which is greater than 0 and less than M - M'.

    If the cost per unit M is C, and the amount of M needed before applying the Simplex method is I, then your savings are (I - M) * C.

    This gives us the final inequality, where P (the increase in cost, due to increase in complexity) must be strictly less than (I - M) * C.

    Without OR, these inequalities are horribly complicated, and "good" solutions are very hard to find. So most companies who aren't familiar with OR just don't bother. Such companies are easy to spot - the only way they can cut costs is to cut workforce.

    Those companies with a good OR team can often make significant savings by improving the methods used. Such companies don't downsize when the going gets tough. Often, they'll simply revamp their methods and discover they can get more output for less cost, for the same labor force. These companies do brilliantly during recessions, as they can literally rob competitors of the remaining market, by out-producing and under-cutting anything that companies with poorer designs can do.

    You can see from this that VERY few companies use OR in their day-to-day practices. The number of layoffs blamed on "restructuring", but really the result of restructuring the wrong thing, has been horrendous.

    OR isn't the perfect solution to all problems, and is only really designed to solve linear inequalities, but it's the best method out there. And it's about time it was understood.
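    (An aside, not from jd's comment: collected into a single system, the problem reads "maximize net savings (I - M) * C - P, subject to N >= the ordered quantity, M < I, 0 <= M' <= M - W, 0 <= W <= M - M', and P < (I - M) * C", where N, M, M', W and P are all linear functions of the cutting plan being chosen.)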
    • let me take the opportunity to advertise my favorite. research paper. evar.

      Pessimal Algorithms and Simplexity Analysis [nec.com]


      Only tangentially related. but good.

    • by Anonymous Coward
      You seem to be under the misconception that what you saw in your undergraduate "introduction to OR" class is all there is to OR. To wit, linear programming is only one method applied in operations research, the science of using rigorous mathematical models to improve company operations. It is true that in the 1950's linear programming was the primary tool in OR due to its efficacy, but the field has moved on since then. The subfield I'm involved in is combinatorial optimization of supply chains (produ
    • ``OR isn't the perfect solution to all problems, and is only really designed to solve linear inequalities, but it's the best method out there. And it's about time it was understood.''

      If the mathematician screams in the forest of managing suits, will anyone hear?
  • Simplex (Score:2, Funny)

    by SporkLand ( 225979 )
    What a poorly chosen name. Simplex is pretty complex when compared to some of the other things I studied in math class. I always wondered why they called it simplex.
    • Re:Simplex (Score:3, Interesting)

      by Cardbox ( 165383 )
      It's a mathematical term. In 2 dimensions, a simplex is a triangle (3 points connected in all possible ways, i.e. by 3 lines). In 3 dimensions, a simplex is a tetrahedron (4 points connected in all possible ways, i.e. by 6 lines). In 10 dimensions, a simplex is 11 points connected in all possible ways, i.e. by 11x10/2 = 55 lines.
    • Uh... Simple ^ Complex == Simplex

      It appears complex while it is actually quite simple. ;)
  • not news (Score:1, Offtopic)

    by Horny Smurf ( 590916 )
    I discovered the secret of the Herpes Simplex 2 years ago :(

  • What else can be said? I'm always amazed at all the complexity that is involved in modern computing and the mathematics that make it happen.
  • by Animats ( 122034 ) on Thursday May 29, 2003 @01:54PM (#6069566) Homepage
    That's a neat result.

    It's been known for a long time that the simplex method is polynomial most of the time, and exponential in the worst case. It's also known that the exponential cases are metastable: a small perturbation in the inputs and they collapse to a polynomial case. So adding a little noise to the algorithm can kick it out of the bad cases.

    But that's been an ad-hoc result. There hasn't been theory on how much noise to add, when, and how. With sound theory underneath, algorithms that use noise injection to get out of pathological states may become much more reliable.

    Polyhedral collision detection, as used in games, works a lot like the simplex algorithm, and there are some pathological cases. The current best solution, "Enhanced GJK", is adequate but still has trouble in some "pathological cases". There are ways out of those difficulties, but they're inelegant. This new work might lead to a cleaner algorithm in that area.

    There are other algorithms with similar properties, where the worst case is far slower than the average case. The simplex algorithm is for linear optimization. Many of the same difficulties appear in nonlinear optimization, where everything is more complicated and the methods all have severe restrictions. This may have applications in physics engines for video games, but it's too early to say.

  • Say, does anybody know what ever happened with Karmarkar's algorithm? The early press releases when it came out said it was going to revolutionize Operations Research (specifically replacing Simplex). It doesn't appear this has happened. What's the scoop?
    • by Anonymous Coward
      In practice, simplex is so good that it still wins on many problems. However, all serious modern linear programming libraries also implement interior-point algorithms, because they are faster in some cases, especially when the problem is large. But speed is not everything -- even when the methods are about equally fast, simplex has some practical advantages, such as the ability to modify the problem slightly and continue efficiently from the solution of the original problem. This is important in integer programming
  • Popularity? (Score:1, Insightful)

    by RAMMS+EIN ( 578166 )
    Never underestimate the power of stupid people in large groups. This is why it's a Good Thing that hardly anything is really democratic; the masses just aren't interested, and are thus ignorant.
