Programming Bug Technology

When Bugs Aren't Allowed 489

Posted by CowboyNeal
from the absolutely-positively-perfect dept.
Coryoth writes "When you're writing software for an air traffic control system, military avionics software, or an authentication system for the NSA, the delivered code can't afford to have bugs. Praxis High Integrity Systems, who were the feature of a recent IEEE article, write exactly that kind of software. In "Correctness by Construction: A Manifesto for High-Integrity Software" developers from Praxis discuss their development method, explaining how they manage such a low defect rate, and how they can still maintain very high developer productivity rates using a more agile development method than the rigid processes usually associated with high-integrity software development."
This discussion has been archived. No new comments can be posted.

When Bugs Aren't Allowed

  • Whatever (Score:5, Insightful)

    by HungWeiLo (250320) on Thursday January 05, 2006 @09:00PM (#14405811)
    When you're writing software for an air traffic control system, military avionics software, or an authentication system for the NSA, the delivered code can't afford to have bugs

    I've been in this industry for quite some time and let me be the first to say that I wish I could repeat this sentence with a straight face.
  • economics (Score:5, Insightful)

    by bcrowell (177657) on Thursday January 05, 2006 @09:06PM (#14405843) Homepage
    The authors contend that there are two kinds of barriers to the adoption of best practices... First, there is often a cultural mindset or awareness barrier... Second, where the need for improvement is acknowledged and considered achievable, there are usually practical barriers to overcome such as how to acquire the necessary capability or expertise, and how to introduce the changes necessary to make the improvements.
    No, the reason so much software is buggy is economics. Proprietary software vendors have to compete against other proprietary software vendors. The winners in this Darwinian struggle are the ones who release buggy software, and keep their customers on the upgrade treadmill. Users don't typically make their decisions about what software to buy based on how buggy it is, and often they can't tell how buggy it is, because they can't try it out without buying it. Some small fraction of users may go out of their way to buy less buggy software, but it's more profitable to ignore those customers.
  • Bugs are fine... (Score:5, Insightful)

    by Paladin144 (676391) on Thursday January 05, 2006 @09:13PM (#14405880) Homepage
    Luckily, bugs are just fine [washingtontimes.com] if you happen to run a company that builds voting machines, such as Diebold. And if you think that elections aren't in the same category as air traffic control, I suggest you take a tour of Iraq. Elections are very important for your continued existence upon the earth.
  • by User 956 (568564) on Thursday January 05, 2006 @09:16PM (#14405900) Homepage
    nearly unlimited funding probably helps too

    The old technology axiom applies:

    High Speed, Low Cost, High Quality.

    Pick 2 out of 3.
  • by No Such Agency (136681) <abmackay@gmai3.14159l.com minus pi> on Thursday January 05, 2006 @09:19PM (#14405920)
    It's not hard to produce nearly-bugless code when you have both the budget to do proper quality control, and the incentive to do so.

    The reason why Windows is not bugless is that they have the budget to properly debug it... but little incentive to do so before launch. The customers will purchase it anyway and gratefully accept bug fixes after the fact. Airports or the military who bought faulty mission-critical software would not be so forgiving.
  • by Coryoth (254751) on Thursday January 05, 2006 @09:25PM (#14405953) Homepage Journal
    In fact this is the whole point - Praxis manages to deliver software with several orders of magnitude fewer bugs than is standard in the software industry, but does so in standard industry time frames with developer productivity (over the lifecycle of the project) on par with most non-critical software houses. Praxis does charge more - about 50% above standard software daily rates - but when you are getting the results in the same time frame with massively fewer bugs, paying a little extra is worth it... you'll likely save money in the support and maintenance cycle!

    Jedidiah.
  • So... (Score:3, Insightful)

    by Coppit (2441) on Thursday January 05, 2006 @09:29PM (#14405979) Homepage
    nearly unlimited funding probably helps too :P
    I guess that's why Microsoft's software is so good.
  • by GileadGreene (539584) on Thursday January 05, 2006 @09:32PM (#14405994) Homepage
    Except that the articles in question aren't about finding and removing bugs in already-implemented code, they are about a method that allows one to construct code that doesn't have bugs in the first place.

    Linux, Firefox, and OpenOffice are some of the best software on the planet. I think this is a good practical testament to the OSS philosophy.

    And yet they all still suffer from a metric crapload of bugs. Praxis produces software with so few bugs that they are willing to provide a warranty that says they'll fix any bug found within the first 10 years, for free. If their software had the defect rate of Firefox or OpenOffice they'd be bankrupt in short order.

  • Re:Here, here... (Score:5, Insightful)

    by drew (2081) on Thursday January 05, 2006 @09:34PM (#14406002) Homepage
    I've always attributed it to people being used to the behavior of MS Windows. And I'm not saying that to start a flamewar. I'm serious. Unreliable avionics systems should be unacceptable, but these days, that doesn't seem to be the case.

    Many years ago, I remember reading a quote from an employee at a major aircraft subcontractor along the lines of "If my company paid as much attention to the quality of our work as Microsoft, airplanes would be falling out of the sky on a weekly basis, and people would accept this as normal." I've heard many people, even programmers, claim that bugfree programs are impossible to write. They are not- they just cost far more in time and money than most companies can afford in this commercial climate. When success depends largely on being first to market and bugs and crashes are accepted as a normal fact of life, then they always will be a normal fact of life.

    Unfortunately, I think the blame lies at least in large part with the consumer. As long as people put up with programming errors in a $500 software suite that they would never accept in an $80 DVD player, we will continue to have these problems. Unfortunately, too many people still consider computers to be black magic that is outside of their (or anyone else's) grasp. Most people have little to no knowledge of how their car works under the hood, but they still believe that the engineer who designed it has enough knowledge to do it without making mistakes, and expect the manufacturer to pay for those mistakes when they happen. Why should they believe any differently about the people who write the software they use?
  • by blair1q (305137) on Thursday January 05, 2006 @09:43PM (#14406056) Journal
    30 LOC is net. You spend the first 45% of a high-reliability project doing the design work, and the last 45% doing the verification. The 10% in the middle is code generation.

    These guys seem to be claiming they can reduce redundancy in the design work, and rework in the verification work. They're doing it by using a design-description method that prevents ambiguity (and therefore using a team that is TRAINED to write unambiguous requirements, so their magic language may not be the key), a coding method that avoids unprovable structure (and probably eliminates a lot of other sorts of flexibility), and a verification method that first validates the design and then verifies the code as it's produced (no new value there, as everything has to be touched at least once anyway, and if a big bug turns up that causes a lot of code to be redone you have to redo formal verification on those units again; something that's less likely if formal verification is delayed until full-alpha code is demonstrated, having been informally verified along the way).

    Their claims of massive error reduction are, at best, anecdotal. Let's see them do this after taking over a half-coded project with minimal design requirements, a hard deadline, and a budget that can be cut by governmental forces at will.
  • by iluvcapra (782887) on Thursday January 05, 2006 @09:49PM (#14406087)

    TFA cites a particular NSA biometric identification program which has "0.00" errors per KSLOC.

    Now, this got me thinking. It is completely possible for a biometric identification program to identify two different individuals as the same person (like identical twins), or for it to give a false negative identification (dirt on a lens, etc.). Is this a bug? The code is perfect: no memory leaks, the thing never halts or crashes or segfaults, all the functions return what they should given their inputs.

    I think the popular definition of "bug" tends to catch too many fish, in that it seems to include all the behaviors a computer has when the user "didn't expect that output," what a more technical person might call a "misfeature." TFA outlines a working pattern to avoid coding errors, not user interface burps -- like, for example, giving a yes/no result for a biometric scan when in fact it's a question of probabilities and the operator might need to know the probabilities. Such omissions (the end user would call this a 'bug') are solved through good QA and beta-testing, but TFA makes no mention of either of these things, and seems to think that good coding is the art of making sure you never dereference a pointer after free()'ing it. It does mention formal specification, but that is only half the job, and a lot of problems only become clear when you have the running app in front of you.

    Discussion of TFA has its place, but it promises zero-defect programming, which is impossible without working with the users.
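The yes/no-versus-probability point above can be sketched in Python; the function names and the 0.8 threshold are purely illustrative, not from TFA or any real biometric system:

```python
# Illustrative sketch: a matcher can be "bug-free" in the narrow sense
# (no crashes, well-defined behavior) and still mislead the operator if
# its interface collapses a probability into a yes/no answer.

def match_score(sample: bytes, template: bytes) -> float:
    """Return a similarity score in [0.0, 1.0] (toy stand-in)."""
    # Real systems derive this from minutiae, embeddings, etc.
    common = sum(a == b for a, b in zip(sample, template))
    return common / max(len(sample), len(template), 1)

def is_match_naive(sample: bytes, template: bytes) -> bool:
    # The "misfeature": a hard-coded threshold hides the uncertainty.
    return match_score(sample, template) >= 0.8

def match_report(sample: bytes, template: bytes) -> tuple[bool, float]:
    # Exposing the score lets the operator judge borderline cases
    # (identical twins, dirt on the lens, ...).
    score = match_score(sample, template)
    return score >= 0.8, score
```

Both interfaces are "correct" against their signatures; only the second surfaces the information the operator may actually need.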

  • by Fnord666 (889225) on Thursday January 05, 2006 @09:53PM (#14406111) Journal
    I do. Not intentionally, that's just how it usually works out.
  • Re:economics (Score:3, Insightful)

    by ChrisA90278 (905188) on Thursday January 05, 2006 @09:56PM (#14406124)
    "No, the reason so much software is buggy is economics. Proprietary software vendors have to compete against other proprietary software vendors."

    No, that's not it either. Bugs happen because the people who buy the software do not demand bug-free code. I do write software for a living. When the customer demands bug-free software, he gets it.

    I've been around the building business too. When I see poor work there, say badly set tile, I don't blame the tile setter so much as the fool who paid the tilesetter after looking at his poor work.

  • by georgewilliamherbert (211790) on Thursday January 05, 2006 @10:06PM (#14406185)
    I'm sure one could point to, for example, Fog Creek software as another example of somewhere that does a remarkable job with small teams.

    The key point is this: small teams. It's a lot easier to find the people who can produce 10x better (in terms of rate of writing, clean/bug free code, whichever metrics you care for) when you need to find 3 or 5 or 10 people. You can't staff a whole large application development project with the best gurus: there aren't enough out there in the world.
  • by Anonymous Coward on Thursday January 05, 2006 @10:07PM (#14406192)
    Open source model doesn't work for most (99%?) of military and government applications.

    You're wrong. Open source doesn't mean "let the world see". It means "let the people using the code see".

    Military people are particularly fond of open (to them) source (or binary objects so simple that a disassembly is readable), just as they're fond of having complete design specs for their artillery. It doesn't mean they tell "Teh Enemy" the source, just as I am under no obligation to disclose the source of modifications I've made to the linux kernel to anyone other than those I give copies of the modified kernel to.

    Thinking "Open Source" means "openly downloadable by everyone on the planet" is the #2 mistake I see closed source weenies making. (#1 is thinking open source means anyone on the planet can openly UPload to open source CVS repositories. That is such an idiotic notion, I don't know where to begin with them.)

    Anyway the military will completely ignore I"P" laws if it suits them (But hey, I"P" is really bullshit...).

  • by Anonymous Coward on Thursday January 05, 2006 @10:16PM (#14406231)
    "Yes, yes, open source projects fix bugs for free. The point is that they can afford to do that, because they have so many volunteers to do the bug-fixes."

    The Praxis approach starts with the mindset that bugs are bad and shouldn't leave the room. Open-source has much looser requirements. Bugs can be released with the attitude that "a thousand eyes" will catch the mistake. Also unmentioned by OSS advocates are the problems caused while the bug remains unfixed, until those eyes catch it (which isn't guaranteed). Also, OSS isn't guaranteed (read the disclaimer, and remember the "/." story a while back about programmers being held liable for problems with their code. Read the responses) while Praxis is. And lastly, Praxis plays in a field that OSS doesn't (mission/life critical).
  • Re:Well Duh! (Score:3, Insightful)

    by blincoln (592401) on Thursday January 05, 2006 @10:37PM (#14406334) Homepage Journal
    My motto is: "If you strive for perfection, then the end result will always be better than settling for mediocrity."

    That's pretty much what I was thinking - that this company's results are not especially due to their methods, but due to hiring highly-skilled developers who know what they're doing and care about doing it right.

    Half of the "enterprise" applications I've worked with were built on a foundation of absolute shit - too many elements of their core design were based on flawed thinking. No amount of money and time would have let their developers make them work properly.

    My current opinion is that that kind of software is made by people who capitalize on the bureaucracy of corporations. I don't control what product is purchased here, so the salespeople for OverhypedFlashInThePanSoft only have to make a businessperson - rather than an engineer - think that their software is what they need to buy.
  • Re:Flamebait this! (Score:3, Insightful)

    by RingDev (879105) on Thursday January 05, 2006 @10:38PM (#14406337) Homepage Journal
    You are completely missing the point of CbyC development. One of the fundamentals is that these apps are RESISTANT TO CHANGE. So yes, MS could make a much more solid OS, IF you ran their, and only their, proprietary hardware (as Apple used to/still does), only used the pre-installed applications in the exact manner they were designed to be used, and never dreamt of changing anything.

    When you are talking about an air traffic control system, you can set a very specific set of requirements. The Air Traffic Control system will never have to open an Excel worksheet, or run Quake 4, or be compatible with hundreds of other vendors' tools. The Air Traffic Control system will never have to deal with someone swapping graphics cards and updating drivers. It doesn't have to worry about spyware and root kits. It doesn't have to worry about internet access.

    If you want to rag on MS, go for it, but don't think CbyC is the answer. It would only result in an OS that you wouldn't want to use. (As a consumer product it would be worthless, but it could be great for embedded systems.)

    -Rick
  • by misanthrope101 (253915) on Thursday January 05, 2006 @10:52PM (#14406407)
    More likely they're selling, along with their software, peace of mind. Software designed and implemented by humans will have bugs, though there are methods for minimizing them. Managers of critical systems no doubt love to project to the public the image that there aren't bugs in their system, but the probability of that being true is minuscule. But people like to hear it, and so managers and marketers keep speaking the language.

    It's like all the suits who love to say "failure is not an option," but then we see the occasional failure, but people still say "failure is not an option" because it's the attitude they're trying to convey, not the reality. The right attitude will bring about the right reality, or so management would have you believe.

  • Re:PRODUCTIVITY? (Score:2, Insightful)

    by Nato_Uno (34428) on Thursday January 05, 2006 @10:56PM (#14406433)
    I can't speak for the original poster, but I can't believe that the parent poster and I are the *only* people that believe that LOC is a poor metric.

    Measuring lines of code added per day causes deletions and modifications to be considered *bad*. If I add 10 lines today and those 10 lines allow me to delete 20 lines from last year then I have a net productivity of -10 LOC and I have been unproductive.

    The argument *could* be advanced that if I'd done it right in the first place the original 20 LOC that I'm deleting should have been the 10 LOC I added today so I *should* be penalized for doing poor work last year. I consider this argument to be shortsighted. What if we're interfacing with a new library? Is that a bad thing from a productivity perspective? What if adding a new feature involves rewriting 10 existing lines and interspersing 10 new ones? Is that less productive than tacking on 20 lines today?

    Counting lines added, lines changed, and lines deleted as a metric is nearly as bad. The reason these metrics fail, IMHO, is that lines of code have no "average" value. Some lines are more valuable, some are less valuable, some have no meaning in the context of others, some have so much value that they cause others to be obsolete, etc. Grouping them together is like measuring the intelligence of a room of people by adding up the number of people present - meaningless.

    Even your point, that a single line of code implies a specific amount of background work ("heavy addition non-programming work") is fallacious, IMHO. Does each feature have equal merit? Equal difficulty? Does each line of code imply exactly the same (or even *about* the same) amount of "heavy addition non-programming work"?

    This is certainly not true for me - the difficulty of the code I write varies wildly, both within and between applications. I can assure you that I would likely expend much more effort on each line of code for an optimized backend search engine than I would on the CGI for the web interface that drives it. Is it therefore less productive for me to work on the backend because I expend more effort per line of code?

    IMHO, LOC is only useful for publishing statistics, not for measuring meaningful changes in productivity.
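The refactoring example above works out to the following arithmetic; the numbers are the ones from the comment:

```python
# Net-LOC "productivity": 10 new lines that retire 20 old ones score
# worse than 20 lines of pure padding.

def net_loc(added: int, deleted: int) -> int:
    return added - deleted

refactor = net_loc(added=10, deleted=20)  # the cleanup described above
padding = net_loc(added=20, deleted=0)    # tacking on 20 new lines

assert refactor == -10  # the better change looks "unproductive"
assert padding == 20    # the worse change looks twice as "productive"
```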
  • by iggymanz (596061) on Thursday January 05, 2006 @11:21PM (#14406539)
    You did see the counts of lines of code in Praxis' projects? The header files alone in Linux have more lines of code than all Praxis ever wrote or ever will write.
  • by Sycraft-fu (314770) on Thursday January 05, 2006 @11:46PM (#14406638)
    That's how much of the system you control. By system I mean the entire channel, including the hardware, software, input, etc. It's much easier to engineer low-defect software when you control all of that. For example, if you are developing something that runs only on one particular version of one kind of embedded device, can only have input given to it in a certain way, and you can guarantee that it is the only thing running at a given time, and so on.

    It's much harder if you are making something that has to run on a massive set of arbitrary hardware that can have any number of other, quite possibly buggy, apps running, and that can receive all kinds of bad input through all kinds of different channels.

    Part of the problem, I think, is that people look at systems that are engineered and controlled by one company, and then think that software that runs on commodity hardware should be as reliable as something where everything is carefully controlled.

  • by 2Bits (167227) on Thursday January 05, 2006 @11:59PM (#14406701)
    So, if this toolset and methodology are so good, I have to wonder why they don't get more widespread use. According to their info, they were developed in the '70s and '80s, so this isn't new. And why is software so buggy, with such a lousy reputation, anyway? Not to start a flamewar, but let's just list a few possible "reasons" here:

    1. Why aren't schools teaching this methodology thoroughly? Why aren't this toolset and programming language taught in school by default? I learned a bit of Ada at school, but only as part of a comparison of programming language designs. So, are schools to be blamed? Or do the profs not know better? Why aren't proper engineering methodologies emphasized?

    2. Someone developed a nice methodology, with a nice toolset and programming language, and got greedy and made it too expensive to acquire. Other tools are good enough, and the resulting software is acceptable to the market, so this nice thing never got widespread use.

    3. Programmers are asked to do the impossible. We (I include myself here) have had to work with customers who don't know what they want and only give very fuzzy requirements (Praxis's customers, from their list, are a different kind of animal, and they probably know better than most of the customers we had to work with), and even if we lay out the whole detailed plan in front of them, they still don't know what they want. They will agree to the plan, sign and approve it, and once you have completed the whole system according to the plan, they will ask you to redo the whole thing. If a customer dared to ask a civil engineer to add 2 more stories between the 3rd and 4th floor after the custom-built building was finished, guess what the civil engineer would say? Programmers are asked to do this all the time (I know I have been), so are customers to blame? You can't get the system done properly if requirements are shifting all the time.

    4. Programmers are a bunch of bozos who know shit about proper engineering. Yeah, I can take the blame; I've been programming for over a decade, and I know how programmers work: methodologies are for pimps! If a bridge engineer can't tell or prove how much load a bridge can take, I'm sure people would tell him/her that s/he has no business building bridges.

    5. Customers of packaged software will buy buggy software to save one buck anyway, so why would vendors put extra effort and cost into making it better? Look at the market: a lot of good software didn't survive, and sometimes the worst of the line prospered (no naming here!). So people get what they asked for.

    6. Customers (even custom-built-project customers) are a bunch of cheap folks; they will go with the lowest price, no matter what. Praxis's customers are willing to pay 50% more for quality work; how many of your customers are? We would be willing to fix our bugs, free of charge, for the first 10 years too, if our customers were willing to pay 50% more than the market rate for quality work. But so far, I've never met one such customer. Granted, I don't work in the defense industry. So, don't blame us for lousy work if customers try to squeeze every single buck out of it. And in China (and some other countries too), you have to pay a huge amount in kickbacks, sometimes as high as 80% of the project's budget.

    7. Software vendors are a bunch of greedy bastards; they put buggy software on the market without having to accept any responsibility (just read your EULA!). Industry problem or government problem? Not enough regulation (for safety, for usability, etc.)? Other industries seem to do ok, e.g. medical, civil, .... So, are software vendors a bunch of irresponsible kids who need constant monitoring?

    8. The industry is developing too fast; people are chasing the coolest, hippest, most buzzword-sounding technologies. No one gives a shit about "real engineering". And there is simply too much to learn, too; in how many industries can you say people are required to master that much technology?
  • by Fulcrum of Evil (560260) on Friday January 06, 2006 @12:08AM (#14406753)

    But if companies are going to just throw up their hands and say "we can't hire competent people, there aren't enough of them in the world," they only doom themselves to a continued shortfall in talent, and an increase in buggy software.

    What they really meant was "we can't hire competent people at the same prices as Jimmy the mouth-breather." Take that as you will.

  • Rules of thumb (Score:4, Insightful)

    by jd (1658) <.imipak. .at. .yahoo.com.> on Friday January 06, 2006 @12:17AM (#14406796) Homepage Journal
    1. Doubling the number of coders will double the number of bugs and double the total time
    2. Doubling the budget will double the number of coders
    3. Extendable projects live; deadlines kill
    4. There is no such thing as a potential bug
    5. "Good" methods are cheap for customers, sloppy methods are profitable for companies.


    In the end, software companies are in it for the profits. They have no lemon laws to respect, no Trade Descriptions Act to obey, no ombudsmen to answer to, no consumer rights groups to speak of, no Government-imposed standards certification and virtually no significant competition. Customers are often infinitely patient and completely ignorant of what they should be getting - the machines are like Gods and the software salesmen are their High Priests. To question is to be smote.


    Were standards to be mandated - perhaps formal methods for design, OR quality certification of the end result, you would see no real impact on net software costs. Starting costs would go up, but long-term costs would go down.


    Nor would you see any serious impact on variety - if anything, there is a greater range of car manufacturer and design today than there was in the 50s and 60s when cars had the unnerving habit of exploding for no apparent reason.


    What you'd see is a decline in stupid bugs, a decline in bloat, an increase in modularity, possibly a reduction in latency, and a move from upgrades that fix things that SHOULD have worked in the first place to upgrades that enhance things that can be relied upon to CONTINUE working after the patches.


    Money would not be made by selling the same product with a different set of defects to the same market, money would be made by always going beyond last year's horizons. The same way most manufacturers, from cars to camping gear to remote control aircraft to air conditioning units to microwave ovens to home stereo manufacturers have all been doing - very successfully - for a very long time.


    The IT industry isn't going to change in the foreseeable future; the only way we'll see change in our lifetimes is if it is imposed on the Pointy Haired Bosses. We could easily see 99.9% reliable software, with no additional cost, in our homes in a year, with the lack of constant fixes actually saving money. And that's why it won't happen. Not because the IT corporations are mean, thuggish and ogreish - they are, it's just not why it won't happen.


    It won't happen because they're geared both towards the profit motive and towards the outdated notion that the market is tiny. (That last part was true - in the 1950s, when entire countries might have three or four computers in total, operating in two, maybe three different capacities. You can understand a desire to go after the after-sales service, when there simply isn't anything else left to do.)


    Today, Microsoft's Windows resides on 98% of the desktop computers, but because of the support system needed to run the damn things, 98% of the world's population doesn't have significant access to one. Ok, putrid green is a lousy colour, but the idea of clockwork near-indestructible laptops that - in theory - could be built to weigh 5 lbs or less and run high-end, intensive applications is beginning to filter through to the brain-dead we call politicians.


    You think someone in the middle of Ethiopia who is fluent only in their native tongue is going to want to pay for telephone technical support from someone in India, in order to figure out why their machine keeps locking up?


    When computing is truly available to the masses (ie: when even a long-forgotten South American tribe can reasonably gain access to one), the ONLY way it can be remotely practical is if said South American can look forward to a reliable, usable, practical experience where all usage can be inferred from first principles, and where NO software service calls are required to get things to work, ONLY required to get more things for working with.

  • "The qualities of an ideal test" [agileadvice.com] describes a framework similar to ACID for databases and INVEST for user stories: six verifiable qualities that a test should have. On a related note, this article doesn't really make me that excited. I've been using test-driven development with JUnit and NUnit to deliver tens of thousands of lines of code into production with similar defect rates (about two defects found over the course of several years of code being in production). I maintained good productivity rates, delivered code into QA with no defects found, into pre-production with no defects found, and finally into production, where defects were found only after a great deal of time in operation. The code was not simple either: it was asynchronous, guaranteed-delivery, multi-threaded messaging code for enterprise systems. Developers who don't do TDD should be paid about 1/4 as much as developers who do, IMNSHO.
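The test-first workflow described above can be sketched with Python's unittest standing in for JUnit/NUnit; the DeliveryQueue class is a toy stand-in, not the enterprise messaging code the poster describes:

```python
# Test-first sketch: in TDD the test cases below are written before the
# class that makes them pass.
import unittest

class DeliveryQueue:
    """Toy FIFO stand-in for a guaranteed-delivery message queue."""
    def __init__(self):
        self._items = []

    def enqueue(self, msg):
        self._items.append(msg)

    def dequeue(self):
        if not self._items:
            raise LookupError("empty queue")
        return self._items.pop(0)

class DeliveryQueueTest(unittest.TestCase):
    def test_fifo_order(self):
        q = DeliveryQueue()
        q.enqueue("a")
        q.enqueue("b")
        self.assertEqual(q.dequeue(), "a")
        self.assertEqual(q.dequeue(), "b")

    def test_empty_raises(self):
        with self.assertRaises(LookupError):
            DeliveryQueue().dequeue()
```

Run with `python -m unittest`; in the red/green/refactor cycle, each failing test drives the next small change to the implementation.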
  • by Detritus (11846) on Friday January 06, 2006 @01:30AM (#14407112) Homepage
    That's your job. If a customer were able to build a clear set of requirements, then they would likely have the skills to build their own systems.

    Do I get to strap them down, put bamboo splinters under their fingernails, and inject them with truth serum?

    There isn't much that you can do when the customer is uncooperative and doesn't want to get involved or admit their ignorance.

  • by Coryoth (254751) on Friday January 06, 2006 @02:15AM (#14407281) Homepage Journal
    Then their ability to produce bug-free code depends, as usual, on control factors, not on real-world engineering.

    In as much as a civil engineer depends on control factors via refusing customers who demand that the building have 6 stories not 4 just one month before construction is due to finish, yes. Real world engineering makes certain demands of the client. Someone who wants to build a treehouse for their kids doesn't consult an architect and a civil engineer, and civil engineers don't take contracts from people who refuse to set out some limits on what they want built, and what they expect of it.

    Praxis uses solid engineering. Their "Correct by Construction" approach is solidly grounded in axiomatic mathematics and uses similar sorts of formal calculations and logical and mathematical proofs as you might expect to see from civil, electrical, aerospace, or any other kind of engineers. Take the time to read sample chapters [praxis-his.com] from the SPARK book to get an idea of exactly what they are doing. There is very definitely quite solid engineering going on.
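SPARK itself is an Ada subset whose contracts are discharged by static proof; as a rough runtime analogue only, the flavor of precondition/postcondition contracts can be sketched in Python with assertions (the function and its contract are illustrative, not taken from Praxis or the SPARK book):

```python
# Rough analogue of a SPARK-style contract, checked at runtime here
# rather than proved statically as SPARK's toolset does.

def integer_sqrt(n: int) -> int:
    """Return the largest r with r*r <= n."""
    assert n >= 0, "precondition: n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Postcondition: r is exactly the integer square root of n.
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r
```

In SPARK the analogous contract is written as an annotation, and the toolset proves before the program ever runs that no input can violate it; that is the step runtime assertions cannot give you.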
  • by Angostura (703910) on Friday January 06, 2006 @03:29AM (#14407476)
    Whoops, well that just about wraps it up for open source then. I've always thought that Linux was the absolute exemplar of slowly developed high quality low cost code. ... Although I suppose The Hurd might be an even better exemplar one day.

  • by kassemi (872456) on Friday January 06, 2006 @04:58AM (#14407725) Homepage

    If there were demand for programmers of the caliber you mention, and companies willing to pay salaries deserving of such abilities, there would be more people studying towards such a position.

    Although you can teach a body all the skills necessary to program, you still need a certain level of competence that doesn't come from education. I could easily (with time) teach plenty of people I know how to program, but only a relative few will ever be able to actually invent on their own... Invention and innovation both being qualities I require the presence of when dubbing someone a guru.

    On top of that, most of the people who have such intuition and innovative qualities are generally located in a field that doesn't offer them much pay. Mathematics, physics and logic - to name a few - historically don't offer too much in the way of funds. These people love their jobs. It's not about the new car and the trophy wife, it's about discovering a new flag protein or integrating a difficult formula... It would be bad business for a company to pay more... They just don't have to.

  • by mcrbids (148650) on Friday January 06, 2006 @07:20AM (#14408055) Journal
    Screw funding. It's irrelevant.

    Screw specifications. Nobody has them anyways.

    Give me a clear, predefined spec, and I'll meet it. I'll guarantee bug fixes, too.

    But that's not how software evolves.

    Despite careful attention, despite voluminous meetings, emails, and specifications, I never get a clear idea what the client needs me to develop until AFTER a prototype has been built.

    In fact, I'd wager that there's a quasi-quantum principle at work: you can either work towards the customer's actual needs, or towards the predefined, agreed-upon specifications and costs. Serving either means ignoring the other.

    Call it the Schrödinger's cat of software development. The software is half-dead, half-alive. Either it meets the needs of the customer (and associated scope creep, bugs, etc.) or the originally defined specification. Releasing the software determines whether the cat is dead or alive.

    It seems that:

    1) People will insist, aggressively, that they need something until they get it, at which point they'll angrily point out all its flaws.

    2) People don't actually know what they need until they see that what they have isn't it.

    3) When you take anything produced because of (1), and then compare that to the feedback produced by (2), you end up with cases where the code is producing a result unexpected in the original design.

    These are called bugs.

    4) The only intelligent way to proceed with (1) and (2) is to consider software an iterative process, where (1) and (2) combine with (3) and lots of debugging to result in a usable product.
  • Re:Here, here... (Score:2, Insightful)

    by rufty_tufty (888596) on Friday January 06, 2006 @08:02AM (#14408152) Homepage
    Actually, part of designing a reliable system is coping with faulty hardware! If you're writing software for satellites (for example), you often can't rely on anything working.

    This is the complete opposite of the current state of affairs, where people assume perfect hardware and buggy software; it should be "hardware that can and does fail" and "software that expects this and deals with it", because software doesn't age or have different faults depending upon which copy it is.
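One classic software answer to hardware that can and does fail is triple modular redundancy: run the computation on three units and vote on the result. A toy sketch (illustrative only; the three readings would come from hypothetical redundant sensors or processors):

```python
from collections import Counter

def majority_vote(readings):
    """Return the value at least two of three replicas agree on,
    or None if all three disagree, signalling a fault upstream."""
    value, count = Counter(readings).most_common(1)[0]
    return value if count >= 2 else None

# One replica returns a corrupted value; the vote masks the fault.
result = majority_vote([5, 5, 9])
```

The software masks a single hardware fault and only escalates when the redundancy itself is exhausted.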
  • by safXmal (929533) on Friday January 06, 2006 @10:20AM (#14408648)
    Quote " 1) Specification errors - the spec says to do something (or neglects to say something) that even if implemented correctly will not cause the correct result"

    A lot of these errors arise because software is designed by programmers and not by the users.

    Part of my job is specifying new functions in my company's web applications. The normal flow of information is from me, as a business user, to the software designer, to the coder, and then I give my comments on the finished product.

    More often than not, on the first round we have to scrap the implementation of the new function and start over again. Not because the coder didn't do a good job, but because the designer didn't have a clue what my specifications meant or how they should be implemented.

    How can I explain what I really want to somebody who doesn't know anything about my field of work? Whenever I speak with him, or her, I catch myself going into "stupid mode": I try to dumb everything down and leave out the exceptions and finer points, as I would with a new trainee.

    The only times we were able to get something working on the first go were when I could speak directly to the coders, with the designer acting as a moderator.

    Personally I found it a lot easier to talk to them too. I would just tell them where I wanted things to be and how the program should react and the coders could give me immediate feedback if it was possible or not.

    When I had to talk to the designer, I would never know what I would get back. I would tell him something like "in this table, exchange the column shipment number with the column delivery number", and he would come back saying that it was impossible, so they had replaced the delivery number with the transport order number because, according to him, that would solve my problem. It didn't.
  • by lp-habu (734825) on Friday January 06, 2006 @10:40AM (#14408787)
    You can't staff a whole large application development project with the best gurus: there aren't enough out there in the world.

    And why aren't there?

    For the same reason there aren't enough really good NFL quarterbacks for every team to have one, despite the money that is spent in trying to find them.

    People differ in ability in every field; the bell curve is real, and only the people who are at the high end of the curve can be considered one of "the best gurus". They will never constitute a large percentage of the group. Ever. Furthermore, there is usually a huge difference in performance between people who are in the top 10% of their field and those who are in the top 0.1%. Most people would consider those in the top 10% "the best gurus", but really it's only that tiny segment at the very top who deserve the appellation. Even then, you can expect a marked difference between those in the top 0.1% and those in the top 0.01%. Fact of life, folks.

  • by ModelerRick (728925) on Friday January 06, 2006 @11:04AM (#14408956)
    I've been re-reading James Martin's books, "Application Development without Programmers" and "System Design from Provably Correct Constructs", with the goal of selecting a method to program mechanical devices.

    Martin's thesis, and remember this was back in the 70's and early 80's, was that the program should be generated from a specification of WHAT the program was to do, rather than trying to translate faulty specifications into code telling the computer HOW to do it. (Trust me, that poor sentence does not come close to describing the clarity of purpose in Martin's books.) Martin proposes that a specification language can be rigid enough to generate provably correct programs by combining a few provably correct structures into provably correct libraries from which to derive provably correct systems.

    IBM had a major initiative back in the mid-1980s called AD/Cycle, which was tied to SAA (System Application Architecture) and based on these and similar ideas prevalent at the time. This is the old "holy grail": an attempt to fix the waterfall methods of development, which had actually been in use since the early 1950s, with mixed success in delivering software on time and on budget.

    AD/Cycle involved not only IBM but a number of "AD/Cycle partner" companies like Bachman and KnowledgeWare. KnowledgeWare's CEO was the former scrambling NFL quarterback Fran Tarkenton. A Google search for "fran tarkenton knowledgeware" will turn up references to Jim Martin, as well as some interesting things about how the company ended up.

    An incredible amount of development money went down the rat-hole chasing the AD/Cycle dream.

    The problem turns out to be the difficulty, if not impossibility, of creating rigorous specifications which produce useful results in the face of problems which aren't very well understood at the outset. The less the requirement is for a "black box" with well-defined inputs and outputs the more this is likely to be the case.

    Many problems turn out to be wicked in that there is a feedback loop between the implementation and the requirements. A classic book from the era was Wicked Problems, Righteous Solutions (http://www.amazon.com/gp/product/013590126X/102-5477977-4320940?v=glance&n=283155 [amazon.com]), which might be considered one of the Old Testament texts pointing to today's "agile development" movement.

    A non-software example of a wicked problem is city planning, in which implementing changes to the road network, housing developments, shopping center locations, etc. all change the requirements for those same aspects.

    Many wicked problems come from "requirements" which often do (or should or must) come from users. Often, the real requirements aren't known until an implementation is given to the users, who then might say, "yup, you implemented exactly what I asked for, but now that I see it, here's what I really want." Or, "Now that we've added this other thing (application, system, business division) why, doesn't this (work more like/interface with/replace/...) that."

    Faced with this, a methodology based on "correct" construction from "rigorous" specifications simply moves the problem to debugging the requirements.

    Until we do away with the need to change/adapt systems to changing/evolving requirements, which would likely involve eliminating users, this approach will have limited applicability, and will need to stand beside other more widely used incremental development models.

  • by drrobin_ (131741) on Friday January 06, 2006 @11:11AM (#14409013)
    Almost. What if standard out is a pipe that got closed? What if there's not enough memory to run the basic interpreter? What if it gets a kill signal from the kernel? What if it becomes a zombie process?

    As you can see, even the simple 10 PRINT "HELLO WORLD" isn't bug-free. To make bug-free software you don't need to catch most errors; you need to catch every possible error.
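A sketch of what "hello world" looks like once it at least detects the failures the parent lists (Python here, illustrative only; a truly exhaustive version would also handle signals and allocation failure, which no portable sketch can fully cover):

```python
import sys

def hello():
    """Print a greeting, returning an exit status instead of dying silently."""
    try:
        print("HELLO WORLD")
        sys.stdout.flush()  # force the write so errors surface here, not at exit
        return 0
    except BrokenPipeError:
        # stdout was a pipe whose reader went away
        return 1
    except OSError:
        # disk full, closed descriptor, and other write failures
        return 1

exit_code = hello()
```

Even this only narrows the window; the point stands that "every possible error" is a far longer list than the program text suggests.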
  • by Ced_Ex (789138) on Friday January 06, 2006 @11:24AM (#14409107)
    Or "We can have three guys fresh out of college with really impressive degrees for the same price as this other person who is older, and therefore less likely to know about modern computing topics. We will therefore hire the three college graduates, who will then do three times as much useful work."

    Or on the other hand, "We could hire three college graduates, who will then do three times as much useful work in three times the amount of time required by the old guy with total experience greater than the three graduates put together!"

    Just because guys get old, doesn't mean they stop learning. Good ones are always updating their skill set.
  • by GileadGreene (539584) on Friday January 06, 2006 @12:34PM (#14409692) Homepage
    And one thing I don't want to see is formal programming CS programs which produce CS professors exclusively ... though some of my CS academic acquaintances dislike me saying so, from what I see, few of their graduates go into coding in industry. That's a pretty unfortunate thing for the world. To be ultimately useful, these skills need to become things which take people out of college, into industry, and successfully into challenging industry projects.

    Maybe we could start by not assuming that CS grads should be going into industry. CS programmes should be teaching Computer Science - you know, the stuff that prepares you for a career in research. Industrial coders should be going through a software engineering program, in which they learn how to apply the results of scientific research to practical real world problems.

    Just as with other sciences and engineering disciplines, there will likely be a lot of overlap in subject matter. But the fundamental focus is entirely different: there will be material covered in one programme that isn't covered in another, and the perspective taken on the overlap material will be quite different.

    Sadly, many developers are graduates from CS programmes instead of engineering programmes, and too many of the "software engineering" programmes out there have little resemblance to the engineering training found in other disciplines - which is, IMHO, one of the reasons the world of industrial software development is so screwed up.

  • by zCyl (14362) on Friday January 06, 2006 @02:27PM (#14410557)
    If there were demand for programmers of the caliber you mention, and companies willing to pay salaries deserving of such abilities, there would be more people studying towards such a position.

    But if companies are going to just throw up their hands and say "we can't hire competent people, there aren't enough of them in the world," they only doom themselves to a continued shortfall in talent, and an increase in buggy software.


    Part of the problem is simply that the higher you set the threshold, the more you get into areas of natural talent which seem to be more difficult to train. But an even bigger problem is identifying the people with the highest skills. When you're staring at a resume, the best and brightest with the skills to write the most reliable code don't have resumes that look much different from everyone else's. Programmers can usually spot which other programmers have abnormally high skill in this area after working with them, so you can find these people by word of mouth, but this doesn't automatically translate into resume content that lets you pick the correct employee out of the crowd.

    My best advice to anyone reading is: if you think you can code like this, and you want to seek out a high-salary job based on your unique skill, then try proving that skill by releasing open source code which demonstrates it. This could give you a boost, though not one that every person responsible for hiring will recognize.
  • by bill_kress (99356) on Friday January 06, 2006 @02:51PM (#14410755)
    What a lot of people don't realize is that being a Guru is an art.

    What's the quickest way to paint the ceiling of the Sistine Chapel? Are you going to be able to hire 30 artists with enough talent, or should you stick with the one who is qualified, plus a couple of assistants, and just wait a few years?

    Can you train 30 artists to be good enough to do the work? How about 300?

    After a point, being a super-coder is just as much of an art. You won't be able to produce these people; it's kind of in their soul. Great musicians pick up their first instrument and know it's what they are going to do--what they are made for. My guess would be that if you have had access to a computer for over a year and you aren't coding yet, you'll probably never be a really great coder--a real computer artist couldn't have resisted.

    Hmm, maybe a better word than Guru or Architect would be Computer Artist or Code Artist? It should convey the relative rarity much better.

    This should be obvious. Every other art has its gurus, and they are usually the top 1%; the other 99% in the field simply will never be able to do what the gurus do, regardless of training or experience. I'll never play the piano like a savant who started at age 3, period.
