Programming Bug Technology

When Bugs Aren't Allowed

Posted by CowboyNeal
from the absolutely-positively-perfect dept.
Coryoth writes "When you're writing software for an air traffic control system, military avionics software, or an authentication system for the NSA, the delivered code can't afford to have bugs. Praxis High Integrity Systems, who were featured in a recent IEEE article, write exactly that kind of software. In "Correctness by Construction: A Manifesto for High-Integrity Software" developers from Praxis discuss their development method, explaining how they manage such a low defect rate, and how they can still maintain very high developer productivity rates using a more agile development method than the rigid processes usually associated with high-integrity software development."
This discussion has been archived. No new comments can be posted.

When Bugs Aren't Allowed

  • by demonbug (309515) on Thursday January 05, 2006 @08:57PM (#14405792) Journal
    probably helps too :P
    • by GileadGreene (539584) on Thursday January 05, 2006 @09:14PM (#14405886) Homepage
      And yet the reports I've seen on Praxis claim costs and schedules the same or less than the development of software of similar complexity...
      • I'm sure one could point to, for example, Fog Creek software as another example of somewhere that does a remarkable job with small teams.

        The key point is this: small teams. It's a lot easier to find the people who can produce 10x better (in terms of rate of writing, clean/bug free code, whichever metrics you care for) when you need to find 3 or 5 or 10 people. You can't staff a whole large application development project with the best gurus: there aren't enough out there in the world.
        • by Coryoth (254751) on Thursday January 05, 2006 @11:11PM (#14406504) Homepage Journal
          Yes, small teams help, but I think it really is worth taking a look at the tools and methods that Praxis uses because there are some remarkably good ideas in there. Take a look at SPARK Ada - the Wikipedia article [wikipedia.org] has some basic examples, and Praxis provides sample chapters of a book on SPARK [praxis-his.com] for download. Seriously, take a little time out of your day, sit down, and read a little about how they do what they do. There really are good ideas that go well beyond just "small teams" if you want to deliver correctly functioning code.

          Jedidiah.
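The core of the SPARK approach mentioned above is contracts: preconditions and postconditions that the toolset verifies statically, before the program ever runs. As a rough illustration of the idea only (not actual SPARK, which is an annotated Ada subset; this is a hypothetical Python analogue where runtime assertions stand in for compile-time proofs):

```python
def divide(numerator: int, denominator: int) -> int:
    """Integer division with explicit pre/postconditions.

    A SPARK-style tool would prove, at analysis time, that every call
    site satisfies the precondition; here we can only check at runtime.
    """
    # Precondition: division is only defined for a nonzero denominator.
    assert denominator != 0, "precondition violated: denominator is zero"

    result = numerator // denominator

    # Postcondition: the result is off from the true quotient by less
    # than one denominator's worth.
    assert abs(numerator - result * denominator) < abs(denominator)
    return result
```

In SPARK the analogous annotations are discharged by the Examiner during static analysis, so the shipped program carries proofs rather than runtime checks.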
        • by SeaFox (739806) on Thursday January 05, 2006 @11:29PM (#14406563)
          You can't staff a whole large application development project with the best gurus: there aren't enough out there in the world.

          And why aren't there? Geeks lament the fall of IT and Computer Science programs at institutes of higher learning, and wonder why people don't want to go into these fields. If there was demand for programmers of the caliber you mention and companies willing to pay salaries deserving of such abilities there would be more people studying towards such a position.

          But if companies are going to just throw up their hands and say "we can't hire competent people, there aren't enough of them in the world," they only doom themselves to a continued shortfall in talent, and an increase in buggy software.

          Nursing is a profession that is always looking for new people and folks who didn't make it through the four-year grind back when they were young are flocking to it because the jobs are waiting. If companies held their coders to a higher standard, they would trim some of the fat in their projects and have jobs open for people willing to do the work. The result would be more productive teams and better applications, a win-win situation.
          • But if companies are going to just throw up their hands and say "we can't hire competent people, there aren't enough of them in the world," they only doom themselves to a continued shortfall in talent, and an increase in buggy software.

            What they really meant was "we can't hire competent people at the same prices as Jimmy the mouth-breather." Take that as you will.

          • by georgewilliamherbert (211790) on Friday January 06, 2006 @12:31AM (#14406872)
            While it's hard to argue with the above logic, such changes don't happen overnight just because a demand increases. When we're talking about properly educating people (and by properly, I mean both intensive CS background in theory and practice, and then practical coding in real world environments in quantity to become a good coder as well), it will take five or more years for a demand spike to produce a supply rise.

            The projects I am involved with today won't wait five years for programmers. This is business fact; if they aren't started now, and completed in an economical timeframe, then companies will go out of business and not be around to hire those better programmers in 5 years.

            If you shift the question around to "should we set project and programmer standards" and "should standards improve and evolve over time", and the statement to "current standards are inadequate and irresponsible" then sure, no disagreement.

            In many cases, development problems are the result of not even following the industry standards and best practices that are in place now.

            And one thing I don't want to see is formal programming CS programs which produce CS professors exclusively ... though some of my CS academic acquaintances dislike me saying so, from what I see, few of their graduates go into coding in industry. That's a pretty unfortunate thing for the world. To be ultimately useful, these skills need to take people out of college, into industry, and successfully into challenging industry projects.

            Nontrivial problems to solve. If it was easy, everyone would be doing this stuff right already. That obviously isn't true yet...
              And one thing I don't want to see is formal programming CS programs which produce CS professors exclusively ... though some of my CS academic acquaintances dislike me saying so, from what I see, few of their graduates go into coding in industry. That's a pretty unfortunate thing for the world. To be ultimately useful, these skills need to take people out of college, into industry, and successfully into challenging industry projects.

              Maybe we could start by not assuming that CS grads shoul

          • In order to get the more talented people, companies need to be willing to pay more. It's a chicken vs the egg problem. They can't get the proof that it's money well spent until after they spend the money, so they don't believe it's possible.

            In large organizations, there are already a lot of mediocre people, and policies built to deal with that reality. People who know what they're doing face an uphill battle in getting anything done. There are large bureaucracies in place to tell people what standards th
          • by lp-habu (734825) on Friday January 06, 2006 @10:40AM (#14408787)
            You can't staff a whole large application development project with the best gurus: there aren't enough out there in the world.

            And why aren't there?

            For the same reason there aren't enough really good NFL quarterbacks for every team to have one, despite the money that is spent in trying to find them.

            People differ in ability in every field; the bell curve is real, and only the people who are at the high end of the curve can be considered one of "the best gurus". They will never constitute a large percentage of the group. Ever. Furthermore, there is usually a huge difference in performance between people who are in the top 10% of their field and those who are in the top 0.1%. Most people would consider those in the top 10% "the best gurus", but really it's only that tiny segment at the very top who deserve the appellation. Even then, you can expect a marked difference between those in the top 0.1% and those in the top 0.01%. Fact of life, folks.

          • by zCyl (14362)
            If there was demand for programmers of the caliber you mention and companies willing to pay salaries deserving of such abilities there would be more people studying towards such a position.

            But if companies are going to just throw up their hands and say "we can't hire competent people, there aren't enough of them in the world, they only doom themselves to a continued shortfall in talent, and an increase in buggy software.


            Part of the problem is simply that the higher you draw the threshold, you start to get i
          • by bill_kress (99356) on Friday January 06, 2006 @02:51PM (#14410755)
            What a lot of people don't realize is that being a Guru is an art.

            What's the quickest way to paint the ceiling of the Sistine Chapel? Are you going to be able to hire 30 artists with enough talent, or should you stick with the one who is qualified and a couple of assistants and just wait a few years?

            Can you train 30 artists to be good enough to do the work? How about 300?

            After a point, being a super-coder is just as much of an art. You won't be able to produce these people, it's kinda in their soul. Great musicians pick up their first instrument and know it's what they are going to do--what they are made for. My guess would be that if you have had access to a computer for over a year and you aren't coding yet, you'll probably never be a really great coder--a real computer artist couldn't have resisted.

            Hmm, maybe a better word than Guru or Architect would be Computer Artist or Code Artist? It should convey the relative rarity much better.

            This should be obvious. Every other art has its gurus, and they are usually the top 1%; the other 99% in the field simply will never be able to do what the gurus do, regardless of training or experience. I'll never play the piano like a savant who started at age 3, period.
        • Rules of thumb (Score:4, Insightful)

          by jd (1658) <imipak&yahoo,com> on Friday January 06, 2006 @12:17AM (#14406796) Homepage Journal
          1. Doubling the number of coders will double the number of bugs and double the total time
          2. Doubling the budget will double the number of coders
          3. Extendable projects will, deadlines kill
          4. There is no such thing as a potential bug
          5. "Good" methods are cheap for customers, sloppy methods are profitable for companies.


          In the end, software companies are in it for the profits. They have no lemon laws to respect, no Trade Descriptions Act to obey, no ombudsmen to answer to, no consumer rights groups to speak of, no Government-imposed standards certification and virtually no significant competition. Customers are often infinitely patient and completely ignorant of what they should be getting - the machines are like Gods and the software salesmen are their High Priests. To question is to be smote.


          Were standards to be mandated - perhaps formal methods for design, OR quality certification of the end result, you would see no real impact on net software costs. Starting costs would go up, but long-term costs would go down.


          Nor would you see any serious impact on variety - if anything, there is a greater range of car manufacturers and designs today than there was in the 50s and 60s, when cars had the unnerving habit of exploding for no apparent reason.


          What you'd see is a decline in stupid bugs, a decline in bloat, an increase in modularity, possibly a reduction in latency, and a move from upgrades that fix things that SHOULD have worked in the first place to upgrades that enhance things that can be relied upon to CONTINUE working after the patches.


          Money would not be made by selling the same product with a different set of defects to the same market, money would be made by always going beyond last year's horizons. The same way most manufacturers, from cars to camping gear to remote control aircraft to air conditioning units to microwave ovens to home stereo manufacturers have all been doing - very successfully - for a very long time.


          The IT industry isn't going to change in the foreseeable future; the only way we'll see change in our lifetimes is if it is imposed on the Pointy Haired Bosses. We could easily see 99.9% reliable software, with no additional cost, in our homes in a year, with the lack of constant fixes actually saving money. And that's why it won't happen. Not because the IT corporations are mean, thuggish and ogreish - they are, but that just isn't why it won't happen.


          It won't happen because they're geared both towards the profit motive and towards the outdated notion that the market is tiny. (That last part was true - in the 1950s, when entire countries might have three or four computers in total, operating in two, maybe three different capacities. You can understand a desire to go after the after-sales service, when there simply isn't anything else left to do.)


          Today, Microsoft's Windows resides on 98% of the desktop computers, but because of the support system needed to run the damn things, 98% of the world's population didn't have significant access to one. Ok, putrid green is a lousy colour, but the idea of clockwork near-indestructible laptops that - in theory - could be built to weigh 5 lbs or less and run high-end, intensive applications is beginning to filter through to the brain-dead we call politicians.


          You think someone in the middle of Ethiopia who is fluent only in their native tongue is going to want to pay for telephone technical support from someone in India, in order to figure out why their machine keeps locking up?


          When computing is truly available to the masses (ie: when even a long-forgotten South American tribe can reasonably gain access to one), the ONLY way it can be remotely practical is if said South American can look forward to a reliable, usable, practical experience where all usage can be inferred from first principles, and where NO software service calls are required to get things to work, ONLY required to get more things for working with.

    • by User 956 (568564)
      nearly unlimited funding probably helps too

      The old technology axiom applies:

      High Speed, Low Cost, High Quality.

      Pick 2 out of 3.
    • by Coryoth (254751) on Thursday January 05, 2006 @09:25PM (#14405953) Homepage Journal
      In fact this is the whole point - Praxis manages to deliver software with several orders of magnitude fewer bugs than is standard in the software industry, but does so in standard industry time frames, with developer productivity (over the lifecycle of the project) on par with most non-critical software houses. Praxis does charge more - about 50% above standard software daily rates - but when you are getting the results in the same time frame with massively fewer bugs, paying a little extra is worth it... you'll likely save money in the support and maintenance cycle!

      Jedidiah.
    • So... (Score:3, Insightful)

      by Coppit (2441)
      nearly unlimited funding probably helps too :P
      I guess that's why Microsoft's software is so good.
    • It's true - I've been in software quite some time (I currently work for a company every man, woman and child has heard of). Most bugs are classified by the time required to fix them, the risk of fixing the bug (most of the time you want to avoid fixing something only to create two more issues), how critical the issue is, and, mainly, the budget available to fix bugs.

      At any rate - there's no such thing as bug-free software. Never will be. There is such a thing as a product that appears to work without bugs, and I think that's
      • by kimba (12893) on Friday January 06, 2006 @06:50AM (#14407988)
        At any rate - there's no such thing as bug free software. Never will be.

        10 PRINK "HELLO WORLD"

        Damn.
        • Almost. What if standard out is a pipe that got closed? What if there's not enough memory to run the basic interpreter? What if it gets a kill signal from the kernel? What if it becomes a zombie process?

          As you can see, even the simple 10 PRINT "HELLO WORLD" isn't bug free. To make bug free software you don't need to catch most errors, you need to catch every possible error.
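To make the point above concrete, here is a hedged sketch (in Python rather than BASIC) of what "catching every possible error" means even for hello world: the write to standard out can itself fail, for instance on a closed pipe or a full disk, so a careful program checks it and reports a status instead of assuming success.

```python
import sys

def hello() -> int:
    """Print a greeting, returning 0 on success and 1 on any I/O failure."""
    try:
        sys.stdout.write("HELLO WORLD\n")
        sys.stdout.flush()  # force the write so errors surface here
    except (OSError, ValueError):
        # OSError covers broken pipes and full disks; ValueError covers
        # writing to an already-closed stream.
        return 1
    return 0
```

Even this does not cover everything (a SIGKILL arrives with no chance to react), which is exactly the parent's point: "bug free" depends on how far down the stack you draw the line.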
    • by goombah99 (560566) on Thursday January 05, 2006 @09:34PM (#14406005)
      unlimited risk can be an incentive too.

      Professor Middlebrook at Caltech was an innovator in an unusual field: satellite electronics. Since no repairman was coming, they wanted robust electronics. He designed circuits in which any component could fail as an open or a short and the circuit would remain in spec. You know that's a remarkable achievement if you've ever designed a circuit before. Notably, you can't really do this using SPICE. SPICE will tell you what something does, but not how to design it. To do that you need a really good sense of approximations of the mathematical formula a circuit represents, to see which components are coupled in which terms. And you need one more trick: the ability to put in a new element bridging any two points and quickly see how it affects the circuit in the presence of feedback. To do that he invented the "extra element theorem", which allows you to compute this in analytic form from just a couple of simple calculations. They still don't teach this in standard courses. You can find it in Vorperian's textbook, but that's it. Otherwise, if you want to learn it you gotta go to the original research articles from the 70s.

    • See also this [slashdot.org] comment, from further down the page.
  • You slashdotted stsc.hill.af.mil!
  • by ChePibe (882378) on Thursday January 05, 2006 @09:00PM (#14405809)
    Uh... it's going to be kind of hard for the NSA to do its job without bugs, isn't it?

    *rimshot*
  • Whatever (Score:5, Insightful)

    by HungWeiLo (250320) on Thursday January 05, 2006 @09:00PM (#14405811)
    When you're writing software for an air traffic control system, military avionics software, or an authentication system for the NSA, the delivered code can't afford to have bugs

    I've been in this industry for quite some time and let me be the first to say that I wish I could repeat this sentence with a straight face.
    • Here, here... (Score:5, Interesting)

      by crimson30 (172250) on Thursday January 05, 2006 @09:16PM (#14405898) Homepage
      When you're writing software for an air traffic control system, military avionics software, or an authentication system for the NSA, the delivered code can't afford to have bugs

      I've been in this industry for quite some time and let me be the first to say that I wish I could repeat this sentence with a straight face.


      That was my first thought, particularly with military avionics. A few years ago they put out a hardware/software update for the ENS system (Enhanced Navigation System) which led to frequent crashing... and it took over a year for them to come out with a message saying that it was a bug and not to waste countless man hours trying to repair it.

      It's sort of a new concept, though, as I'd never really seen such problems with traditional avionics systems (non glass-cockpit stuff). I've always attributed it to people being used to the behavior of MS Windows. And I'm not saying that to start a flamewar. I'm serious. Unreliable avionics systems should be unacceptable, but these days, that doesn't seem to be the case.
      • by Raul654 (453029) on Thursday January 05, 2006 @09:18PM (#14405916) Homepage
        Could you clarify here? When talking about bad guidance software for planes, "crashing" is an ambiguous term ;)
      • Er... obviously, I meant: "Hear, hear...".
      • Re:Here, here... (Score:5, Insightful)

        by drew (2081) on Thursday January 05, 2006 @09:34PM (#14406002) Homepage
        I've always attributed it to people being used to the behavior of MS Windows. And I'm not saying that to start a flamewar. I'm serious. Unreliable avionics systems should be unacceptable, but these days, that doesn't seem to be the case.

        Many years ago, I remember reading a quote from an employee at a major aircraft subcontractor along the lines of "If my company paid as much attention to the quality of our work as Microsoft, airplanes would be falling out of the sky on a weekly basis, and people would accept this as normal." I've heard many people, even programmers, claim that bugfree programs are impossible to write. They are not- they just cost far more in time and money than most companies can afford in this commercial climate. When success depends largely on being first to market and bugs and crashes are accepted as a normal fact of life, then they always will be a normal fact of life.

        Unfortunately, I think the blame lies at least in large part with the consumer. As long as people put up with programming errors in a $500 software suite that they would never accept in an $80 DVD player, we will continue to have these problems. Unfortunately, too many people still consider computers to be too much black magic that is outside of their (or anyone else's) grasp. Most people have little to no knowledge of how their car works under the hood, but they still believe that the engineer who designed it has enough knowledge to do it without making mistakes, and expect the manufacturer to pay for those mistakes when they happen. Why should they believe any differently about the people who write the software they use?
    • Re:Whatever (Score:4, Informative)

      by Coryoth (254751) on Thursday January 05, 2006 @09:21PM (#14405925) Homepage Journal
      I've been in this industry for quite some time and let me be the first to say that I wish I could repeat this sentence with a straight face.

      I was pitching for "how people would like to think things are" rather than how things actually work. In practice Praxis, at least, does deliver such software, and does so with extremely low defect rates. They are proof that it can be done, even if it isn't always how things work now.

      Jedidiah.
    • When you're writing software for an air traffic control system, military avionics software, or an authentication system for the NSA, the delivered code can't afford to have bugs

      So a few bugs in commercial avionics is acceptable?

    • by iluvcapra (782887) on Thursday January 05, 2006 @09:49PM (#14406087)

      TFA cites a particular NSA biometric identification program which has "0.00" errors per KSLOC.

      Now, this got me thinking. It is completely possible for a biometric identification program to identify two different individuals as the same person (like identical twins), or for it to give a false negative identification (dirt on a lens, etc.). Is this a bug? The code is perfect: no memory leaks, the thing never halts or crashes or segfaults, all the functions return what they should given what they are.

      I think the popular definition of "bug" tends to catch too many fish, in that it seems to include all the behaviors a computer has when the user "didn't expect that output," what a more technical person might call a "misfeature." TFA outlines a working pattern to avoid coding errors, not user interface burps -- like, for example, giving a yes/no result for a biometric scan when in fact it's a question of probabilities and the operator might need to know the probabilities. Such omissions (the end user would call this a 'bug') are solved through good QA and beta-testing, but TFA makes no mention of either of these things, and seems to think that good coding is the art of making sure you never dereference a pointer after free()'ing it. It does mention formal specification, but that is only half the job, and a lot of problems only become clear when you have the running app in front of you.

      Discussion of TFA has its place, but it promises zero-defect programming, which is impossible without working with the users.

    • I second the motion! (Score:5, Interesting)

      by Jetson (176002) on Thursday January 05, 2006 @11:44PM (#14406627) Homepage
      I've been in this industry for quite some time and let me be the first to say that I wish I could repeat this sentence with a straight face.

      I've been a controller for 13 years and have worked in the automation end of things for almost 4 years now. There is NO SUCH THING as bug-free Air Traffic Control software. The best we can hope for is heterogeneous redundancy and non-simultaneous failures. Some engineers seriously think they could replace all those controllers with an intelligent algorithm. What really scares me is that the more they try, the less engaged the people become and the harder it is for them to fall back to manual procedures when the worst happens.

      Everyone used to laugh at how Windows NT could only run for 34 days before it needed a reboot. Some of our systems can't run HALF that long without needing a server switch-over or complete cold-start.

  • Still can have bugs (Score:4, Interesting)

    by Billly Gates (198444) on Thursday January 05, 2006 @09:03PM (#14405829) Journal
    The only method I have seen with almost perfect reliability is where the inputs and outputs are overloaded to handle any datatype and can be proven mathematically not to crash. I guess a CS degree is still useful.

    The problem is that to obtain it you need to write your own libraries and not use ANSI or Microsoft or any other products, as you cannot see or trust the source code.

    If you can prove through solid design and input and output types that the program won't lose control, then you're set. It's buffer overflows and flawed design that has not been tested with every conceivable input/output that causes most serious bugs in medical and aerospace applications.

    However, in practice this challenge is a little impractical when deadlines and interoperability with closed-source software get in the way.
    • === If you can prove through solid design and input and output types that the program won't lose control, then you're set. ===
      Well, if that were the case your program would never crash on input, but it could still take that input data and make an incorrect calculation on it. Add the difference between the airport's height and sea level, for example, rather than subtracting it.

      sPh

    • by GileadGreene (539584) on Thursday January 05, 2006 @09:20PM (#14405923) Homepage
      Did you actually bother to RTFA? Oh yeah, this is /. - stupid question. Allow me to suggest that you make an exception, and actually take the time to read TFA in this case. Praxis does not claim no bugs, they claim significantly lower defect rates than development using other methods. Which in turn means much greater confidence that the system will work as desired.

      No one can ever make something that is completely "bug-free" - even in traditional, non-software disciplines. All you can do is make the probability that the system will work as high as possible. Praxis has some techniques that can help developers create software with a much higher probability of working correctly than it would otherwise have. That's a good thing, even if it doesn't result in perfection.

      Its buffer overflows and flawed design that has not been tested with every concievable input/output that causes most serious bugs in medical and aerospace applications.

      It's the fact that Praxis relies on static checking far beyond anything you've ever seen (using a carefully designed subset of Ada that can be statically checked in all sorts of interesting ways) that helps to ameliorate this problem, since the static check is effectively equivalent to an exhaustive input/output test.

    • by Coryoth (254751) on Thursday January 05, 2006 @09:38PM (#14406029) Homepage Journal
      If you can prove through solid design and input and output types that the program won't lose control, then you're set. It's buffer overflows and flawed design that has not been tested with every conceivable input/output that causes most serious bugs in medical and aerospace applications.

      Praxis uses a subset of Ada together with annotations in a language called SPARK to write most of their software. They also have tools which work with such code to do considerable static checking - much as type checking catches errors, checking the annotations catches many more just as efficiently - and to generate proof obligations, which they can then formally prove. That means, for many of their projects, they actually have formal proofs that buffer overflows cannot and will not occur.

      However in practice this challenge is a little unpractical when deadlines and interopability with closed source software get in the way.

      Again, this is where the tools and methodology matter. Praxis delivers code as fast as traditional development techniques, so deadlines aren't the problem. They can do this by using SPARK Ada and the SPARK tools to do exceptionally robust checking on a regular basis for each incremental deliverable. This allows catching bugs much earlier, when they are cheaper and faster to fix.

      The only method I have seen with almost perfect reliability is where the inputs and outputs are overloaded to handle any datatype and can be proven mathematically not to crash. I guess a CS degree is still useful.

      It is pretty much this sort of mathematical rigor, injected into the development process as early as possible, that allows Praxis to produce the sort of defect rates that they do. And yes, that does mean that developers at Praxis are probably required to have stronger math and CS backgrounds than elsewhere. Given that Praxis, due to their ability to deliver almost bug-free software in very reasonable time frames, charges 50% more than the industry daily rate, yes, having a math or CS degree really does count for something - more money for starters.

      Jedidiah.
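The flavor of the statically provable bounds checks discussed in this thread can be sketched hypothetically, in Python, with runtime assertions standing in for the proof obligations a SPARK-style tool would discharge at analysis time:

```python
def sum_window(data: list, start: int, length: int) -> int:
    """Sum data[start : start + length], with explicit bounds preconditions.

    A SPARK-style verifier would prove these conditions hold at every call
    site, guaranteeing the loop below can never index out of bounds. Here
    they are merely runtime checks (an analogue, not actual SPARK).
    """
    assert 0 <= start, "precondition: start is non-negative"
    assert 0 <= length, "precondition: length is non-negative"
    assert start + length <= len(data), "precondition: window fits in data"

    total = 0
    for i in range(start, start + length):
        total += data[i]  # in-bounds by the preconditions above
    return total
```

Once the preconditions are proven to hold everywhere, the equivalent of a buffer overflow is not merely untested but impossible, which is the sense in which static checking substitutes for exhaustive input/output testing.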
  • Can be proven safe. I wonder what subset of modern OS design could be done in such programming languages.
    • Well, it wouldn't be able to construct the NAND of two elements, as NAND is turing-complete (for arithmetic).

      After that you need control structures. If you were to not allow control structures, I think it would be very hard to make an OS. (Unless it's one of them "Hello World" OSes, that then crash and reboot.)

      I'd say it's impossible to write an OS in a non-turing complete language.
      • Uh (Score:4, Funny)

        by autopr0n (534291) on Thursday January 05, 2006 @09:33PM (#14405998) Homepage Journal
        control structures != Turing complete. You can have loops as long as they have constant maximum bounds. Whatever it happens to be that you mean when you say "Nand is Turing complete" it makes no sense when you actually typed it. "turing-complete (for arithmetic)." makes no sense at all. WTF? Someone failed CS 315.
    • SPARK Ada, which Praxis use, is actually not turing-complete. It's bounded in space: no dynamic data structures, no allocation or deallocation whatsoever - not even implicitly. That's what permits the serious static checking, and makes correctness proofs feasible. Termination proofs can still be a pain, though. AFAIK most SPARK programs are proven to behave correctly only if they do terminate.
      That's not so relevant in real-time systems, though, which is where SPARK really shines. It's quite an achievement b
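The constant-maximum-bound loop style described above can be illustrated with a small, hypothetical sketch (in Python rather than SPARK): a bisection root-finder whose loop has a fixed iteration cap, so termination is guaranteed by the bound itself, independent of the convergence test.

```python
MAX_ITERATIONS = 1_000  # constant bound, known before the loop runs

def find_root(f, lo: float, hi: float, eps: float = 1e-9) -> float:
    """Bisection on [lo, hi], assuming f(lo) and f(hi) have opposite signs.

    The loop is bounded by a compile-time constant, the style of iteration
    a non-Turing-complete language can express: it provably terminates even
    if the tolerance is never reached.
    """
    for _ in range(MAX_ITERATIONS):
        mid = (lo + hi) / 2.0
        if hi - lo < eps:
            break
        if f(lo) * f(mid) <= 0:
            hi = mid  # root lies in the lower half
        else:
            lo = mid  # root lies in the upper half
    return (lo + hi) / 2.0
```

Since each iteration halves the interval, the bound of 1,000 is far more than enough for double precision; the point is that the worst case is known statically, not discovered at runtime.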
  • economics (Score:5, Insightful)

    by bcrowell (177657) on Thursday January 05, 2006 @09:06PM (#14405843) Homepage
    The authors contend that there are two kinds of barriers to the adoption of best practices... First, there is often a cultural mindset or awareness barrier... Second, where the need for improvement is acknowledged and considered achievable, there are usually practical barriers to overcome such as how to acquire the necessary capability or expertise, and how to introduce the changes necessary to make the improvements.
    No, the reason so much software is buggy is economics. Proprietary software vendors have to compete against other proprietary software vendors. The winners in this Darwinian struggle are the ones who release buggy software, and keep their customers on the upgrade treadmill. Users don't typically make their decisions about what software to buy based on how buggy it is, and often they can't tell how buggy it is, because they can't try it out without buying it. Some small fraction of users may go out of their way to buy less buggy software, but it's more profitable to ignore those customers.
    • I contend that the upgrade treadmill with users who accept getting shit upon by their suppliers and smile is a dead end path.
      Most other industries start in this state ("You can have any colour as long as it's black"), but once something becomes enough of a commodity, inevitably the voice of the customer starts to win out.
      Either they sue you into the stone age for destroying their billion-dollar enterprise with your crap that's not fit for purpose, or they begin to understand that defects COST them and purchase fr
    • Re:economics (Score:3, Insightful)

      by ChrisA90278 (905188)
      "No, the reason so much software is buggy is economics. Proprietary software vendors have to compete against other proprietary software vendors."

      No, that's not it either. Bugs happen because the people who buy the software do not demand bug-free code. I write software for a living. When the customer demands bug-free software, he gets it.

      I've been around the building business too. When I see poor work there, say badly set tile, I don't blame the tile setter so much as the full idiot who paid the til

  • Bugs are fine... (Score:5, Insightful)

    by Paladin144 (676391) on Thursday January 05, 2006 @09:13PM (#14405880) Homepage
    Luckily, bugs are just fine [washingtontimes.com] if you happen to run a company that builds voting machines, such as Diebold. And if you think that elections aren't in the same category as air traffic control, I suggest you take a tour of Iraq. Elections are very important for your continued existence upon the earth.
  • by vertinox (846076) on Thursday January 05, 2006 @09:16PM (#14405899)
    Usually when the software and the phrases "life support" or "nuclear weapons" appear together in the same sentence.

  • by david.emery (127135) on Thursday January 05, 2006 @09:16PM (#14405905)
    The Master Money server done by Praxis was done fixed-price, and with a warranty that says Praxis will fix any bug discovered over the next 10 years -for free-.

    How many of you would be willing to place that kind of warranty on YOUR CODE?

    dave (who's tried SPARK and liked it a lot, although proofs are much harder than they should be...)

  • Why can't automatic verification systems be used for this? You start with an input set and define the output set, then run a program verification system to make sure the outputs stay within the output set.

    The inputs or outputs could be infinite, but in that case use logical constructs to verify it.

    I'm not a researcher or student of this theory, so maybe someone can explain to me why this wouldn't work or be applied in industry?
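
    What the parent describes is close to property checking: instead of enumerating expected outputs, you state a predicate that every output must satisfy. A toy sketch in Java (the names are invented; a real verifier would try to prove the predicate for all inputs, whereas this only checks one run):

```java
import java.util.Arrays;

// The "output set" expressed as a predicate: the output must be a sorted
// permutation of the input. A static verifier would try to prove this holds
// for every possible input; here we merely check it for a single run.
class SortCheck {
    static int[] mySort(int[] in) {
        int[] out = in.clone();
        Arrays.sort(out); // stand-in for the code under verification
        return out;
    }

    static boolean satisfiesSpec(int[] in, int[] out) {
        for (int i = 1; i < out.length; i++) {
            if (out[i - 1] > out[i]) return false; // must be nondecreasing
        }
        int[] a = in.clone(), b = out.clone();
        Arrays.sort(a);
        Arrays.sort(b);
        return Arrays.equals(a, b); // must be a permutation of the input
    }

    public static void main(String[] args) {
        int[] in = {3, 1, 2};
        System.out.println(satisfiesSpec(in, mySort(in))); // prints true
    }
}
```

    Note that once the predicate pins down a unique output for each input, the spec is effectively the program, which is the point the reply below makes about performance being the only remaining question.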

    • That's what's sometimes done, and it's mentioned in TFA. But what kind of logical constructs would you use to define the expected output for any input, in a logical manner, instead of just enumerating them? Hey, that's the program itself. If you can define exactly what you want in a concise manner, with a way to verify it, then the only remaining problem in the resulting code is performance. (Of course, in practice many of these systems have real-time requirements to varying degrees.)

      You might be able to s

  • by MLopat (848735) on Thursday January 05, 2006 @09:19PM (#14405918) Homepage
    In the world of software development, there have come to be two de facto models.

    1. Get the software out the door ASAP - quite simply, bang out code as fast as possible that meets a loosely defined specification. Then, once the product is adopted, parachute help in like there's no tomorrow to steadily improve the product.

    2. Engineer the software - not as simple as it sounds. This requires that a specification be drawn up. A plan be prepared. A team of solid engineers formed and led by a competent manager. Then, throughout the entire development cycle, test and debug code.

    My company does the latter, and to date we have retained 100% of our customers. I'm shocked by the number of developers who approach our company for jobs without the first clue about how to even write a test harness, let alone do any real debugging. Then again, they don't teach much of that stuff in school, and unless your role was specifically in testing at a previous job, you're not going to have much experience in that area. It's economics and marketing that put the bugs in software, not computer science.
    • That depends on how much of the system you control. By system I mean the entire channel: the hardware, software, input, etc. It's much easier to engineer low-defect software when you control all of that. For example, if you are developing something that runs only on one particular version of one kind of embedded device, can only receive input in a certain way, and you can guarantee that it is the only thing running at a given time, and so on.

      Much harder if you are making something that has to r
  • Secure code (Score:3, Interesting)

    by Muhammar (659468) on Thursday January 05, 2006 @09:20PM (#14405922)
    The main obstacle in writing decent code is usually the management: their frequent changes of mind (about what they want, which is usually different from what is helpful to the users) and their "good enough" and "legacy first" attitudes. Overreaching ambition is another problem: one needs to limit oneself to fewer things to do them well, and management pressures usually run in the opposite direction. (Salesmanship bullshit never helps, especially if it starts to influence the direction of your project.)

    • Right, but the article addresses this in point 4:

      4.) Saying things only once. For example, by producing a software specification that says what the software will do and a design that says how it will be structured. The design does not repeat any information in the specification, and the two can be produced in parallel.

      The key to this point, obviously, is to know exactly what you want when you start and have a detailed outline of the components that will enable it.

  • seems kinda small (Score:3, Interesting)

    by khallow (566160) on Thursday January 05, 2006 @09:28PM (#14405977)
    I count less than 400k source code lines among their examples ("SLOC"). Collectively, this is at least an order of magnitude (maybe two or more actually, I don't know) shorter than the really big projects out there. So I guess I have two questions. First, is this rate really good given the size of the projects described? And second, for the huge projects, what sort of bug rates are theoretically achievable?
  • by dlleigh (313922) on Thursday January 05, 2006 @09:36PM (#14406013)
    I was at an X windows technical conference many years ago when someone gave a presentation on X with Ada. When the speaker mentioned that it was for an air traffic control application, there was a sharp intake of breath all around the audience, most of whom had flown in for the meeting.
  • by brucehoult (148138) on Thursday January 05, 2006 @10:03PM (#14406160)
    The site is slashdotted at the moment, so I can't read the article.

    A good example of people writing complex but bug-free software under time pressure is the annual ICFP Programming Contest [icfpcontest.org]. This contest runs over three days, the tasks are complex enough that you usually need to write 2000 - 3000 lines of code to tackle them, and the very first thing the judges do is to throw corner-cases at the programs in an effort to find bugs. Any incorrect result or crash and you're out of the contest instantly. After that, the winner is generally the highest-performing of the correct programs.

    Each year, up to 90% of the entries are eliminated in the first round due to bugs, usually including almost all the programs written in C and C++ and Java. Occasionally, a C++ program will get through and may do well -- even win, as in 2003 when you didn't actually submit your program but ran it yourself (so it never saw data you didn't have a chance to fix it for). But most of the prize-getters year after year seem to use one of three not-yet-mainstream languages:

    - Dylan [gwydiondylan.org]
    - Haskell [haskell.org]
    - OCaml [inria.fr]

    You can argue about why, and about which of these three is the best, or which of them is more usable by mortals (I pick Dylan), but all of them are very expressive languages with uncluttered code (compared to C++ or Java); all are completely type-safe, produce fast compiled code, and use garbage collection.
    • by Coryoth (254751) on Thursday January 05, 2006 @10:35PM (#14406325) Homepage Journal
      Praxis uses SPARK Ada, which is a subset of the Ada programming language and annotations that provide for extended static checking, and theorem proving. You can find more about SPARK at the Praxis website [praxis-his.com], or the Wikipedia article [wikipedia.org] isn't too bad. It's a very nice language, and has fantastic tool support.

      If you find that interesting, but Ada isn't to your taste, you can try JML [iastate.edu] for Java which provides similar (but lacking quite the same robustness and tool support) annotations. JML lets you automatically generate JUnit tests based on your annotations, and with ESC/Java2 allows for extended static checking.

      If, as you appear, you are more of a fan of functional languages then I'd suggest you check out Extended ML [ed.ac.uk] and HasCASL [uni-bremen.de] which provide similar sorts of formal specification capabilities for ML and Haskell. Tool support for these is still a little limited, but they are both quite powerful and provide very expressive specification syntax.

      Jedidiah.
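
      To give a flavour of the JML style mentioned above: the annotations live in ordinary Java comments, so annotated code still compiles as plain Java, and tools like ESC/Java2 try to discharge the clauses statically. A hypothetical sketch (the class and its clauses are illustrative, not from any real codebase):

```java
// JML annotations are ordinary comments (the //@ lines), so this compiles
// as plain Java; a checker such as ESC/Java2 would read them and attempt
// to prove the requires/ensures clauses and the invariant statically.
class Account {
    private int balance;

    //@ invariant balance >= 0;

    //@ requires amount > 0 && amount <= balance;
    //@ ensures balance == \old(balance) - amount;
    void withdraw(int amount) {
        balance -= amount;
    }

    //@ requires amount > 0;
    //@ ensures balance == \old(balance) + amount;
    void deposit(int amount) {
        balance += amount;
    }

    int getBalance() {
        return balance;
    }
}
```

      The point is that the specification travels with the code; whether a given clause can actually be discharged automatically depends on the tool.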
    • by patio11 (857072) on Thursday January 05, 2006 @10:54PM (#14406419)
      I've participated in the ICFP before (one-man team in Java; my program died in the first round, so there are my cards on the table), but one of the reasons the International Conference on Functional Programming Contest is consistently won by functional programmers is that it appeals heavily to them, both in getting the word out and in task selection. Type safety, fast compiled code, garbage collection -- all of these were all but irrelevant to the last two years' tasks. The main stumbling block both years has been writing parsers. FP is great for both tasks; that's why they teach you Scheme for your compiler design course in college, and unless the language stokes your fire you'll never, ever use it again. This does not imply that it's the tool for every possible job, and the various languages have features which make them better for a variety of tasks.

      Yesterday, I had to do an analysis for someone on whether eliminating the electoral college would hurt states with a low turnout or not. The data is online in a nice plain text table, and the calculations are dirt-simple and take under a second in whatever language you want. Gawk all the way, got the project done in half an hour, if I had used C or Java I'd probably have spent triple the time for the same results. Several months ago I had to do image processing with a GUI wrapped around it -- C for the number crunching, .NET something or other for the GUI. A year ago I had to write a distributed application to do some crazy intense number crunching -- wrote the number crunching loop in C*, wrote the network code and interface in Java.

      * Credit where credit is due: I borrowed 99.95% of it from GPLed code designed to do the same task on a single machine.

      • by brucehoult (148138) on Thursday January 05, 2006 @11:31PM (#14406569)
        I've participated in the ICFP before (one-man team on Java, program died in the first round, so there are my cards on the table),

        My cards on the table are that I've entered each of the last six years with 3 - 5 person teams using Dylan, collecting 2nd place twice and Judge's Prize twice.

        but one of the reasons the International Conference on Functional Programming Contest is consistently won by Functional Programmers is that it appeals heavily towards them both in terms of getting the word out to people and in terms of task selection.

        Getting the word out doesn't seem to be the problem. Last year, for example, there were 161 first-round entries. Only 38 entries -- 24% of the total -- were in one of the languages I mentioned as being consistently successful: 1 in Dylan, 16 in Haskell and 21 in OCaml.

        I also disagree about task selection. C and C++ and Java are every bit as suited to the sort of tasks in the ICFP contest as they are to the things they are normally used for. What they are not suited to is doing them in a short period of time, in an exploratory programming manner, and without bugs.

        Type safety, fast compiled code, garbage collection -- all of these were all but irrelevant to the last two years' tasks. The main stumbling block both years had been writing parsers.

        Parsers aren't the problem. C has had parser generators for thirty years and besides the messages to be parsed were totally trivial. Dylan doesn't yet have any good tools for writing parsers, but it doesn't matter because we were able in the first eight hours of the contest to hand write a complete and correctly functioning (but stupid) program using nothing more complex than regular expressions, leaving the remaining 64 hours to think of something clever to do. Anyone using Perl or Python or Ruby probably finished the infrastructure even quicker than we did.
      • > Gawk all the way, got the project done in half an hour, if I
        > had used C or Java I'd probably have spent triple the time for
        > the same results.

        Heh, that was probably a 1-liner in APL.

        Of course APL programs longer than 1 line are usually unmaintainable, but no language is perfect...
    • all of them are very expressive languages with uncluttered code (compared to C++ or Java); all are completely type-safe, produce fast compiled code, and use garbage collection.
      More to the point, they're all basically functional, and Haskell is a pure functional language.

      I think we're going to see functional languages get a lot more popular soon because they're better for concurrent programming, and we're about to see a lot more multi-processor PCs.
  • by azuredragon23 (767827) on Thursday January 05, 2006 @10:07PM (#14406191)
    I have a bone to pick with the article:

    1. It alleges that CMM5 organizations have about 1 defect/KLOC. Having worked in and known such organizations, I can anecdotally confirm numbers like these are fiction. CMM5 certification has more to do with greasing palms than with any absolute defect measurement.

    2. A defect rate of 0.04 bugs/KLOC is not zero bugs/KLOC. The difference is infinite in magnitude if that single bug kills the user.

    3. Low defect rates are more often a product of poor testing, not superior development.
  • by MerlynEmrys67 (583469) on Thursday January 05, 2006 @10:26PM (#14406275)
    Was working for a small startup with a big-time CEO. One of his interesting points was that the most successful software companies in the world have yet to release a product anywhere close to "on time." I can give you a train-wreck litany of software companies that released products "on time" that were so bug-ridden people swore they would never buy another product from the company, and replaced the product with a competitor's.

    The end result - In a year, no one will remember that you were 6 months late - make a buggy release and in a year EVERYONE will remember the buggy release.

    Which is why I always have the time to do it over, and never the time to do it right in the first place.

  • by 2Bits (167227) on Thursday January 05, 2006 @11:59PM (#14406701)
    So, if this toolset and methodology are so good, I have to wonder why they don't get more widespread use. According to their info, they were developed in the '70s and '80s, so this is nothing new. And why is software so buggy, with such a lousy reputation, anyway? Not to start a flamewar, but let's just list a few possible "reasons" here:

    1. Why aren't schools teaching this methodology thoroughly? Why aren't this toolset and programming language taught in school by default? I learned a bit of Ada at school, but only as part of a comparison of programming language designs. So, are schools to be blamed? Or do those profs not know better? Why aren't proper engineering methodologies emphasized?

    2. Someone developed a nice methodology, with a nice toolset and programming language, got greedy, and made it too expensive to acquire. Other tools are good enough, and the resulting software is acceptable to the market, so this nice thing never got widespread use.

    3. Programmers are asked to do the impossible. We (I include myself here) have had to work with customers who don't know what they want and give only very fuzzy requirements (Praxis's customers, from their list, are a different kind of animal, and probably know better than most of the customers we have had to work with). Even if we lay out the whole detailed plan in front of them, they still don't know what they want. They will agree to the plan, sign and approve it, and when you have completed the whole system according to the plan, they will ask you to redo the whole thing. If a customer dared to ask a civil engineer to add 2 more stories between the 3rd and 4th floors after the custom-built building was finished, guess what the civil engineer would say? Programmers are asked to do this all the time (I know I have been), so are customers to blame? You can't get the system done properly if requirements are shifting all the time.

    4. Programmers are a bunch of bozos who know shit about proper engineering. Yeah, I can take the blame. I've been programming for over a decade, and I know how programmers work: methodologies are for pimps! If a bridge engineer can't tell or prove how much load a bridge can take, I'm sure people would tell him/her that s/he has no business building bridges.

    5. Customers of packaged software will buy buggy software to save one buck anyway, so why would vendors put in extra effort and cost to make it better? Look at the market: a lot of good software didn't survive, and sometimes the worst of the line prospered (no naming here!). So people get what they asked for.

    6. Customers (even custom-project customers) are a bunch of cheap folks; they will go with the lowest price, no matter what. Praxis's customers are willing to pay 50% more for quality work; how many of yours are? We would be willing to fix our bugs, free of charge, for the first 10 years too, if our customers were willing to pay 50% more than the market rate for quality work. But so far, I've never met one such customer. Granted, I don't work in the defense industry. So don't blame us for lousy work if customers try to squeeze every single buck out of it. And in China (and some other countries too), you have to pay a huge amount in kickbacks, sometimes as high as 80% of the project's budget.

    7. Software vendors are a bunch of greedy bastards; they put buggy software on the market without having to accept any responsibility (just read your EULA!). Industry problem or government problem? Not enough regulation (for safety, for usability, etc.)? Other industries seem to do OK, e.g. medical, civil, ... so are software vendors a bunch of irresponsible kids who need constant monitoring?

    8. The industry is developing too fast; people are chasing the coolest, hippest, most buzzword-sounding technologies. No one gives a shit about "real engineering." And there is simply too much to learn, too; in how many industries can you say people are required to master that much technologi
    • by Coryoth (254751) on Friday January 06, 2006 @01:06AM (#14406997) Homepage Journal
      Thanks for a thoughtful post.

      And why is software so buggy, with such a lousy reputation, anyway? Not to start a flamewar, but let's just list a few possible "reasons" here:

      I think, to be honest, that it is a combination of a number of the factors you mention.

      Why aren't schools teaching this methodology thoroughly? Why aren't this toolset and programming language taught in school by default?

      Doing proper formal specification, one of the key parts of Praxis' Correct by Construction approach, requires a decently solid mathematical background. I think a lot of CS departments, facing students who want vocational training, struggle to demand the sort of mathematical requirements that are needed. As to SPARK - it is something that Praxis developed themselves, and it is proprietary (the toolset at least; the annotation language is well documented). You can pick up a book and learn the language, but the tools cost money if you want to use them commercially. On the other hand, the base specification language Praxis uses, Z [wikipedia.org], is entirely open, and there are a variety of freely available tools for it [sourceforge.net]. There are also other specification languages (I quite like CASL [wikipedia.org], which has a number of useful extensions) that have freely available tools associated with them. There's also JML [iastate.edu] and ESC/Java2 [secure.ucd.ie], which are freely available and seek to provide the same sort of functionality for Java that SPARK adds to Ada. There are places that teach JML [uni-bremen.de], but they are still few and far between.

      Programmers are asked to do the impossible.

      I think this is a big part of it in some ways. Partly this is because, for a large number of software projects, the degree of exactness and quality just isn't required. I don't need a professional architect to help me build a doghouse in my backyard (though I'd certainly want one if I were building a skyscraper), and I don't need assurances of bug-free software for a simple web front-end to a database. At the same time, programmers are often unwilling to let customers know exactly what the limits are when developing software. To quote you: "If a customer dared to ask a civil engineer to add 2 more stories between the 3rd and 4th floors after the custom-built building was finished, guess what the civil engineer would say?"; if software engineers aren't prepared to stand up for quality and tell customers that some things can't be done without sacrificing the quality of the product, the problem will remain. In part I think this is due to the fact that software development is a young industry, and programmers are still of the mentality that they need to do everything they possibly can to please a customer. Partly it's because software projects are diverse (as are building projects!) and sometimes it's okay to make late changes; sometimes it's how things ought to be done - the key is to identify exactly what sort of project it is as early as you can. Are you building a treehouse for your kids, which doesn't require exactness and benefits from incremental design and feedback, or are you building a 4-story building where quality is important, and late changes will jeopardise that?

      Programmers are a bunch of bozos who know shit about proper engineering.

      Sadly this is partly the case. There are an awful lot of cowboys out there when it comes to software engineering. There are, of course, a lot of fantastic programmers as well who are otherwise beset by some of the other points mentioned. There is, however, a remarkable degree of tolerance for cowboys, sloppiness and lack of quality in software engineering that you don't see in other engineering disciplines. Partly I thin
  • The qualities of an ideal test [agileadvice.com] is a framework, similar to ACID for databases and INVEST for user stories, describing six verifiable qualities that a test should have. On a related note, this article doesn't really excite me. I've been using test-driven development with JUnit and NUnit to deliver tens of thousands of lines of code into production with similar defect rates (about two defects found over the course of several years of the code being in production). I maintained good productivity rates, delivered code into QA with no defects found, into pre-production with no defects found, and finally into production, where defects were found only after a great deal of time in operation. The code was not simple either: it was asynchronous, guaranteed-delivery, multi-threaded messaging code for enterprise systems. Developers who don't do TDD should be paid about 1/4 as much as developers who do, IMNSHO.
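
    For readers who haven't tried TDD, the discipline is to write the failing test first and then the minimum code that makes it pass. A hypothetical sketch in plain Java (bare assertions rather than JUnit, to stay self-contained; the backoff function is invented for illustration):

```java
// Test-first sketch: the assertions in main were (notionally) written before
// backoffMs existed, and backoffMs is the minimum code that makes them pass.
class RetryPolicy {
    // Exponential backoff delay in milliseconds, capped at maxMs.
    static long backoffMs(int attempt, long baseMs, long maxMs) {
        long delay = baseMs << Math.min(attempt, 30); // clamp shift to avoid overflow
        return Math.min(delay, maxMs);
    }

    public static void main(String[] args) {
        // The "tests": they encode the spec before the implementation exists.
        assert backoffMs(0, 100, 10_000) == 100;
        assert backoffMs(3, 100, 10_000) == 800;
        assert backoffMs(10, 100, 10_000) == 10_000; // capped at the maximum
        System.out.println("all tests pass");
    }
}
```

    (Run with `java -ea` so the assertions are enabled; in practice a test runner like JUnit reports each failing case individually, which is the main thing this sketch leaves out.)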
  • by meburke (736645) on Friday January 06, 2006 @01:23AM (#14407076)
    This article couldn't have coincided better with my current project: I've been re-reading James Martin's books, "Application Development without Programmers" and "System Design from Provably Correct Constructs", with the goal of selecting a method to program mechanical devices.

    Martin's thesis, and remember this was back in the 70's and early 80's, was that the program should be generated from a specification of WHAT the program was to do, rather than trying to translate faulty specifications into code telling the computer HOW to do it. (Trust me, that poor sentence does not come close to describing the clarity of purpose in Martin's books.) Martin proposes that a specification language can be rigid enough to generate provably correct programs by combining a few provably correct structures into provably correct libraries from which to derive provably correct systems.

    The defining method of the time, HOS (for Higher-Order Software), was actually used by a company called HOS, Inc. (!), and apparently worked pretty well. Many of the constructive ideas were included in OOP and UML, but ideally, if I understand the concept properly, it would be equivalent to generating software mostly from use-case analysis. There are similar approaches in MDD and MDA methodologies. I wonder what ever became of HOS, Inc. and the HOS methods? It looks like they had a handle on round-trip software engineering in the '80s.

    OK, why would this be a good thing? Well, for one thing, computational/programmable devices are proliferating at a tremendous rate, and while we can engineer a new device in 3 to 6 months, the programs for the device take 18 months to 3 years (if they are finished at all). Hardware development has greatly outpaced software development, by some estimates a 100x difference in capacity... yet they are built on the same fundamental logic!

    The best argument, IMO, is that since larger systems are exponentially more complex, and since it is impossible to completely test even moderately complex systems, it will require provably correct modules or objects to produce dependable systems. If the code is generated from provably correct components, then the system only has to be tested for correct outputs.

    Furthermore, code generated from provably correct components can allow machinery and devices to adapt in a provably correct way by rigorously specifying the outputs and changes to the outputs.

    Praxis is on a roll. The methodology employed is probably more important than the genius of the programmers. It should get better, though. The most mediocre engineer today can produce better devices than the brilliant engineers of 30 years ago by using tested off-the-shelf components. IMO, this is the direction programming should be taking.
  • by mcrbids (148650) on Friday January 06, 2006 @07:20AM (#14408055) Journal
    Screw funding. It's irrelevant.

    Screw specifications. Nobody has them anyway.

    Give me a clear, predefined spec, and I'll meet it. I'll guarantee bug fixes, too.

    But that's not how software evolves.

    Despite careful attention, despite voluminous meetings, emails, and specifications, I never get a clear idea what the client needs me to develop until AFTER a prototype has been built.

    In fact, I'd wager that there's a quasi-quantum principle at work: you can either work towards the customer's actual needs, or towards the predefined, agreed-upon specification and costs. Answering either means ignoring the other.

    Consider this the Heisenberg Uncertainty principle of software. The software is half-dead, half-alive. Either it meets the needs of the customer (with the associated scope creep, bugs, etc.) or the originally defined specification. Releasing the software determines whether the cat is dead or alive.

    It seems that:

    1) People will commit, in aggressive fashion, that they need something until they get it, at which point, they'll angrily point out all the flaws in it.

    2) People don't actually know what they need until they see that what they have isn't it.

    3) When you take anything produced because of (1), and then compare that to the feedback produced by (2), you end up with cases where the code is producing a result unexpected in the original design.

    These are called bugs.

    4) The only intelligent way to proceed with (1) and (2) is to consider software an iterative process, where (1) and (2) combine with (3) and lots of debugging to result in a usable product.
  • by nicolaiplum (169077) on Friday January 06, 2006 @09:18AM (#14408345)
    At the age of 17 I spent a week in the Praxis offices on "Work Experience" (Americans may think of this as a very short internship), to find out what developing software would be like as a career. This was a major formative event of my life: I thought that developing software sounded good, I really liked using Real Computers (multiuser, multiprocessing systems with powerful operating software, like VMS and SunOS), and the people impressed me greatly. It definitely set me on the path to the career in systems development and administration that I have today.
    The person who made the biggest impression on me was the sysadmin. He got his own office-cube instead of having to share, he wore much more casual clothes and had a lot more hair and beard than most of the staff, he got to have big toys (several workstations, a LaserJet IIIsi big enough for an entire office that seemed to be his alone, etc) and he didn't seem to get much hassle from anyone else. This was obviously the job for me.
    The sysadmin was obviously rather a BOFH. When I sat at the UNIX workstation for the first time, and had poked around with basic file-handling commands, I asked "What's the text editor on this system?". He answered "emacs - e m a c s - here's a manual" and picked about 300 sheets of paper off the LaserJet and handed them to me.
    I got to play with UNIX (SunOS), VMS, Oracle development environments. I still have the Emacs manual printout somewhere at home - it served me well when I went to University where printing anything out was charged by the sheet!
    I'm very glad they're still around.
