When Bugs Aren't Allowed 489
Coryoth writes "When you're writing software for an air traffic control system, military avionics, or an authentication system for the NSA, the delivered code can't afford to have bugs. Praxis High Integrity Systems, the subject of a recent IEEE article, writes exactly that kind of software. In "Correctness by Construction: A Manifesto for High-Integrity Software," developers from Praxis discuss their development method, explaining how they achieve such a low defect rate, and how they still maintain very high developer productivity using a more agile method than the rigid processes usually associated with high-integrity software development."
Whatever (Score:5, Insightful)
I've been in this industry for quite some time and let me be the first to say that I wish I could repeat this sentence with a straight face.
economics (Score:5, Insightful)
No, the reason so much software is buggy is economics. Proprietary software vendors have to compete against other proprietary software vendors. The winners in this Darwinian struggle are the ones who release buggy software, and keep their customers on the upgrade treadmill. Users don't typically make their decisions about what software to buy based on how buggy it is, and often they can't tell how buggy it is, because they can't try it out without buying it. Some small fraction of users may go out of their way to buy less buggy software, but it's more profitable to ignore those customers.
Bugs are fine... (Score:5, Insightful)
the old axiom applies (Score:3, Insightful)
The old technology axiom applies:
High Speed, Low Cost, High Quality.
Pick 2 out of 3.
You mean, it's not hard when... (Score:3, Insightful)
Windows is not bug-free, and not for lack of budget: Microsoft has the money to debug it properly, but little incentive to do so before launch. The customers will purchase it anyway and gratefully accept bug fixes after the fact. Airports or the military, having bought faulty mission-critical software, would not be so forgiving.
Re:nearly unlimited funding (Score:5, Insightful)
Jedidiah.
So... (Score:3, Insightful)
Re:Paper doesn't mention open source model (Score:5, Insightful)
Linux, Firefox, and OpenOffice are some of the best software on the planet. I think this is a good practical testament to the OSS philosophy.
And yet they all still suffer from a metric crapload of bugs. Praxis produces software with so few bugs that they are willing to provide a warranty that says they'll fix any bug found within the first 10 years, for free. If their software had the defect rate of Firefox or OpenOffice they'd be bankrupt in short order.
Re:Here, here... (Score:5, Insightful)
Many years ago, I remember reading a quote from an employee at a major aircraft subcontractor along the lines of "If my company paid as much attention to the quality of our work as Microsoft, airplanes would be falling out of the sky on a weekly basis, and people would accept this as normal." I've heard many people, even programmers, claim that bugfree programs are impossible to write. They are not- they just cost far more in time and money than most companies can afford in this commercial climate. When success depends largely on being first to market and bugs and crashes are accepted as a normal fact of life, then they always will be a normal fact of life.
Unfortunately, I think the blame lies at least in large part with the consumer. As long as people put up with programming errors in a $500 software suite that they would never accept in an $80 DVD player, we will continue to have these problems. Too many people still consider computers black magic that is outside of their (or anyone else's) grasp. Most people have little to no knowledge of how their car works under the hood, but they still believe that the engineer who designed it knew enough to do it without making mistakes, and they expect the manufacturer to pay for those mistakes when they happen. Why should they believe any differently about the people who write the software they use?
Re:productivity around 30 LOC per day (Score:4, Insightful)
These guys seem to be claiming they can reduce redundancy in the design work and rework in the verification work. They're doing it with a design-description method that prevents ambiguity (and therefore with a team that is TRAINED to write unambiguous requirements, so their magic language may not be the key), a coding method that avoids unprovable structure (and probably eliminates a lot of other flexibility), and a verification method that first validates the design and then verifies the code as it's produced. That last step adds little value: everything has to be touched at least once anyway, and if a big bug turns up that forces a lot of code to be redone, you have to redo the formal verification on those units as well; that is less likely if formal verification is delayed until full-alpha code is demonstrated, having been informally verified along the way.
Their claims of massive error reduction are, at best, anecdotal. Let's see them do this after taking over a half-coded project with minimal design requirements, a hard deadline, and a budget that can be cut by governmental forces at will.
Bugs and Beta testing. (Score:5, Insightful)
TFA cites a particular NSA biometric identification program which has "0.00" errors per KSLOC.
Now, this got me thinking. It is completely possible for a biometric identification program to identify two different individuals as the same person (like identical twins), or for it to give a false negative identification (dirt on a lens, etc.). Is this a bug? The code is perfect: no memory leaks, the thing never halts or crashes or segfaults, and all the functions return what they should, given their inputs.
I think the popular definition of "bug" tends to catch too many fish, in that it seems to include every behavior a computer exhibits when the user "didn't expect that output" -- what a more technical person might call a "misfeature." TFA outlines a working pattern to avoid coding errors, not user interface burps: for example, giving a yes/no result for a biometric scan when in fact it's a question of probabilities, and the operator might need to know the probabilities. Such omissions (the end user would call this a 'bug') are solved through good QA and beta testing, but TFA makes no mention of either of these things, and seems to think that good coding is the art of making sure you never dereference a pointer after free()'ing it. It does mention formal specification, but that is only half the job, and a lot of problems only become clear when you have the running app in front of you.
Discussion of TFA has its place, but it promises zero-defect programming, which is impossible without working with the users.
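The yes/no-versus-probabilities point above can be sketched in a few lines. This is a hypothetical toy, not Praxis's or the NSA's code; the score function, threshold, and feature vectors are all invented for illustration:

```python
# Hypothetical sketch: a biometric matcher that reports a match
# probability instead of collapsing it to a bare yes/no answer.
# The similarity measure and threshold here are invented.

def match_probability(sample: list[float], template: list[float]) -> float:
    """Crude similarity score in [0, 1]: 1.0 means identical feature vectors."""
    if len(sample) != len(template):
        raise ValueError("feature vectors must be the same length")
    distance = sum((s - t) ** 2 for s, t in zip(sample, template)) ** 0.5
    return 1.0 / (1.0 + distance)

def identify(sample, template, threshold=0.8):
    """Return both the decision AND the confidence behind it."""
    p = match_probability(sample, template)
    return {"match": p >= threshold, "confidence": p}

print(identify([0.9, 0.1, 0.4], [0.9, 0.1, 0.4]))  # identical -> match, confidence 1.0
print(identify([1.0, 0.0], [0.0, 1.0]))            # dissimilar -> no match
```

A matcher that exposes `confidence` lets the operator treat a 0.79 differently from a 0.99; a bare boolean throws that information away, and no amount of defect-free coding will put it back.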
Re:Not unlimited funding (Score:2, Insightful)
Re:economics (Score:3, Insightful)
No, that's not it either. Bugs happen because the people who buy the software do not demand bug-free code. I write software for a living. When the customer demands bug-free software, he gets it.
I've been around the building business too. When I see poor work there, say badly set tile, I don't blame the tile setter so much as the fool who paid the tile setter after looking at his poor work.
Re:nearly unlimited funding (Score:3, Insightful)
The key point is this: small teams. It's a lot easier to find the people who can produce 10x better (in terms of rate of writing, clean/bug free code, whichever metrics you care for) when you need to find 3 or 5 or 10 people. You can't staff a whole large application development project with the best gurus: there aren't enough out there in the world.
Re:Paper doesn't mention open source model (Score:1, Insightful)
You're wrong. Open source doesn't mean "let the world see". It means "let the people using the code see".
Military people are particularly fond of open (to them) source (or binary objects so simple that a disassembly is readable), just as they're fond of having complete design specs for their artillery. It doesn't mean they tell "Teh Enemy" the source, just as I am under no obligation to disclose the source of modifications I've made to the linux kernel to anyone other than those I give copies of the modified kernel to.
Thinking "Open Source" means "openly downloadable by everyone on the planet" is the #2 mistake I see closed source weenies making. (#1 is thinking open source means anyone on the planet can openly UPload to open source CVS repositories. That is such an idiotic notion, I don't know where to begin with them.)
Anyway the military will completely ignore I"P" laws if it suits them (But hey, I"P" is really bullshit...).
Paper doesn't mention the irrelevancy of OSS model (Score:2, Insightful)
The Praxis approach starts with the mindset that bugs are bad and shouldn't leave the room. Open source has much looser requirements: bugs can be released with the attitude that "a thousand eyes" will catch the mistake. Also unmentioned by OSS advocates are the problems caused while a bug remains unfixed, until those eyes catch it (which isn't guaranteed). And OSS comes with no guarantee (read the disclaimer, and remember the "/." story a while back about programmers being held liable for problems with their code; read the responses), while Praxis's work does. Lastly, Praxis plays in a field that OSS doesn't: mission- and life-critical software.
Re:Well Duh! (Score:3, Insightful)
That's pretty much what I was thinking - that this company's results are not especially due to their methods, but due to hiring highly-skilled developers who know what they're doing and care about doing it right.
Half of the "enterprise" applications I've worked with were built on a foundation of absolute shit - too many elements of their core design were based on flawed thinking. No amount of money and time would have let their developers make them work properly.
My current opinion is that that kind of software is made by people who capitalize on the bureaucracy of corporations. I don't control what product is purchased here, so the salespeople for OverhypedFlashInThePanSoft only have to make a businessperson - rather than an engineer - think that their software is what they need to buy.
Re:Flamebait this! (Score:3, Insightful)
When you are talking about an air traffic control system, you can set a very specific set of requirements. The air traffic control system will never have to open an Excel worksheet, run Quake 4, or be compatible with hundreds of other vendors' tools. It will never have to deal with someone swapping graphics cards and updating drivers. It doesn't have to worry about spyware and rootkits. It doesn't have to worry about internet access.
If you want to rag on MS, go for it, but don't think CbyC is the answer. It would only result in an OS that you wouldn't want to use. (As a consumer product it would be worthless, but it could be great for embedded systems.)
-Rick
perception sells, and failure is seldom mentioned (Score:2, Insightful)
It's like all the suits who love to say "failure is not an option," but then we see the occasional failure, but people still say "failure is not an option" because it's the attitude they're trying to convey, not the reality. The right attitude will bring about the right reality, or so management would have you believe.
Re:PRODUCTIVITY? (Score:2, Insightful)
Measuring lines of code added per day causes deletions and modifications to be considered *bad*. If I add 10 lines today and those 10 lines allow me to delete 20 lines from last year then I have a net productivity of -10 LOC and I have been unproductive.
The argument *could* be advanced that if I'd done it right in the first place the original 20 LOC that I'm deleting should have been the 10 LOC I added today so I *should* be penalized for doing poor work last year. I consider this argument to be shortsighted. What if we're interfacing with a new library? Is that a bad thing from a productivity perspective? What if adding a new feature involves rewriting 10 existing lines and interspersing 10 new ones? Is that less productive than tacking on 20 lines today?
Counting lines added, lines deleted, and lines modified as a metric is nearly as bad. The reason these metrics fail, IMHO, is that lines of code have no "average" value. Some lines are more valuable, some are less valuable, some have no meaning in the context of others, some have so much value that they make others obsolete, etc. Grouping them together is like measuring the intelligence of a room of people by adding up the number of people present: meaningless.
Even your point, that a single line of code implies a specific amount of background work ("heavy addition non-programming work") is fallacious, IMHO. Does each feature have equal merit? Equal difficulty? Does each line of code imply exactly the same (or even *about* the same) amount of "heavy addition non-programming work"?
This is certainly not true for me - the difficulty of the code I write varies wildly, both within and between applications. I can assure you that I would likely expend much more effort on each line of code for an optimized backend search engine than I would on the CGI for the web interface that drives it. Is it therefore less productive for me to work on the backend because I expend more effort per line of code?
IMHO, LOC is only useful for publishing statistics, not for measuring meaningful changes in productivity.
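The perverse incentive is easy to demonstrate with a toy calculation (the numbers mirror the 10-lines-added, 20-lines-deleted example above):

```python
# Toy illustration of why net lines-of-code is a poor productivity
# metric: a refactor that deletes more than it adds scores negative,
# while padding the codebase with redundant lines scores positive.

def net_loc(added: int, deleted: int) -> int:
    """Naive 'productivity' measured as net lines of code."""
    return added - deleted

# Today's work: 10 new lines that let us delete 20 old ones.
refactor = net_loc(added=10, deleted=20)
print(refactor)  # -10: a clear improvement looks like negative work

# Meanwhile, dumping 50 redundant lines into the codebase scores "better":
padding = net_loc(added=50, deleted=0)
print(padding)  # 50
```

Any metric that ranks the second change above the first is measuring typing, not engineering.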
Re:Paper doesn't mention open source model (Score:3, Insightful)
There's another factor though (Score:3, Insightful)
It's much harder if you are making something that has to run on a massive set of arbitrary hardware, that can have any number of other, quite possibly buggy, apps running, and that can receive all kinds of bad input through all kinds of different channels.
Part of the problem, as I see it, is that people look at systems that are engineered and controlled by one company, and then think that software running on commodity hardware should be as reliable as something where everything is carefully controlled.
I have to wonder.... (Score:5, Insightful)
1. Why aren't schools teaching this methodology thoroughly? Why aren't this toolset and programming language taught in school by default? I learned a bit of Ada at school, but only as part of a comparison of programming language designs. So, are schools to blame? Or do those profs not know better? Why aren't proper engineering methodologies emphasized?
2. Someone developed a nice methodology, with a nice toolset and programming language, got greedy, and made it too expensive to acquire. Other tools are good enough, and the resulting software is acceptable to the market, so this nice thing never got widespread use.
3. Programmers are asked to do the impossible. We (I include myself here) have had to work with customers who don't know what they want and only give very fuzzy requirements (Praxis's customers, judging from their list, are a different kind of animal, and they probably know better than most of the customers we work with). Even if we lay out the whole detailed plan in front of them, they still don't know what they want. They will agree to the plan, sign and approve it, and then, once you have completed the whole system according to the plan, ask you to redo the whole thing. If a customer dared to ask a civil engineer to add 2 more stories between the 3rd and 4th floor after the custom-built building was finished, guess what the civil engineer would say? Programmers are asked to do this all the time (I know I have been), so are customers to blame? You can't get the system done properly if requirements are shifting all the time.
4. Programmers are a bunch of bozos who know shit about proper engineering. Yeah, I can take the blame, I've been programming for over a decade, and I know how programmers work: methodologies are for pimps! If a bridge engineer can't tell or prove how much load the bridge can take, I'm sure people would tell him/her that s/he has no business in building bridge.
5. Customers of packaged software will buy buggy software to save one buck anyway, so why would vendors put in extra effort and cost to make it better? Look at the market: a lot of good software didn't survive, and sometimes the worst of the line prospered (no naming here!). So people get what they asked for.
6. Customers (even custom-project customers) are a bunch of cheap folks; they will go with the lowest price, no matter what. Praxis's customers are willing to pay 50% more for quality work; how many of your customers are? We would be willing to fix our bugs, free of charge, for the first 10 years too, if our customers were willing to pay 50% more than the market rate for quality work. But so far, I've never met one such customer. Granted, I don't work in the defense industry. So don't blame us for lousy work if customers try to squeeze every single buck out of it. And in China (and some other countries too), you have to pay a huge amount in kickbacks, sometimes as high as 80% of the project's budget.
7. Software vendors are a bunch of greedy bastards: they put buggy software on the market without having to accept any responsibility (just read your EULA!). Industry problem or government problem? Not enough regulation (for safety, for usability, etc.)? Other industries seem to do OK, e.g. medical, civil.
8. The industry is developing too fast; people are chasing the coolest, hippest, most buzzword-sounding technologies. No one gives a shit about "real engineering." And there is simply too much to learn, too; in how many industries are people required to master that much technology?
Re:nearly unlimited funding (Score:3, Insightful)
But if companies are going to just throw up their hands and say "we can't hire competent people, there aren't enough of them in the world," they only doom themselves to a continued shortfall in talent, and an increase in buggy software.
What they really meant was "we can't hire competent people at the same price as Jimmy the mouth-breather." Take that as you will.
Rules of thumb (Score:4, Insightful)
In the end, software companies are in it for the profits. They have no lemon laws to respect, they have no trades description act to obey, no ombudsmen to answer to, no consumer rights groups to speak of, no Government-imposed standards certification and virtually no significant competition. Customers are often infinitely patient and completely ignorant of what they should be getting - the machines are like Gods and the software salesmen are their High Priests. To question is to be smote.
Were standards to be mandated - perhaps formal methods for design, OR quality certification of the end result, you would see no real impact on net software costs. Starting costs would go up, but long-term costs would go down.
Nor would you see any serious impact on variety: if anything, there is a greater range of car manufacturers and designs today than there was in the 50s and 60s, when cars had the unnerving habit of exploding for no apparent reason.
What you'd see is a decline in stupid bugs, a decline in bloat, an increase in modularity, possibly a reduction in latency, and a shift from upgrades that fix things that SHOULD have worked in the first place to upgrades that enhance things that can be relied upon to CONTINUE working after the patches.
Money would not be made by selling the same product with a different set of defects to the same market, money would be made by always going beyond last year's horizons. The same way most manufacturers, from cars to camping gear to remote control aircraft to air conditioning units to microwave ovens to home stereo manufacturers have all been doing - very successfully - for a very long time.
The IT industry isn't going to change in the foreseeable future; the only way we'll see change in our lifetimes is if it is imposed on the Pointy-Haired Bosses. We could easily see 99.9% reliable software, at no additional cost, in our homes within a year, with the lack of constant fixes actually saving money. And that's why it won't happen. Not because the IT corporations are mean, thuggish, and ogreish -- they are; that just isn't why it won't happen.
It won't happen because they're geared both towards the profit motive and towards the outdated notion that the market is tiny. (That last part was true - in the 1950s, when entire countries might have three or four computers in total, operating in two, maybe three different capacities. You can understand a desire to go after the after-sales service, when there simply isn't anything else left to do.)
Today, Microsoft's Windows resides on 98% of desktop computers, but because of the support system needed to run the damn things, 98% of the world's population still doesn't have significant access to one. OK, putrid green is a lousy colour, but the idea of clockwork, near-indestructible laptops that -- in theory -- could be built to weigh 5 lbs or less and run high-end, intensive applications is beginning to filter through to the brain-dead we call politicians.
You think someone in the middle of Ethiopia who is fluent only in their native tongue is going to want to pay for telephone technical support from someone in India in order to figure out why their machine keeps locking up?
When computing is truly available to the masses (ie: when even a long-forgotten South American tribe can reasonably gain access to one), the ONLY way it can be remotely practical is if said South American can look forward to a reliable, usable, practical experience where all usage can be inferred from first principles, and where NO software service calls are required to get things to work, ONLY required to get more things for working with.
Related reference page about tests (Score:3, Insightful)
Re: Not unlimited funding (Score:3, Insightful)
Do I get to strap them down, put bamboo splinters under their fingernails, and inject them with truth serum?
There isn't much that you can do when the customer is uncooperative and doesn't want to get involved or admit their ignorance.
Re:productivity around 30 LOC per day (Score:5, Insightful)
In as much as a civil engineer depends on control factors via refusing customers who demand that the building have 6 stories not 4 just one month before construction is due to finish, yes. Real world engineering makes certain demands of the client. Someone who wants to build a treehouse for their kids doesn't consult an architect and a civil engineer, and civil engineers don't take contracts from people who refuse to set out some limits on what they want built, and what they expect of it.
Praxis uses solid engineering. Their "Correct by Construction" approach is solidly grounded in axiomatic mathematics and uses the same sorts of formal calculations and logical and mathematical proofs as you might expect to see from civil, electrical, aerospace, or any other kind of engineers. Take the time to read the sample chapters [praxis-his.com] from the SPARK book to get an idea of exactly what they are doing. There is very definitely quite solid engineering going on.
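For a feel of what that contract style looks like, here is a loose sketch in Python. The real thing is written in SPARK (an Ada subset), where the contracts are discharged statically by a prover before the program ever runs; this toy merely checks an invented example's pre- and postconditions at runtime:

```python
# Runtime sketch of the precondition/postcondition style that SPARK
# proves statically. The function and its contracts are invented for
# illustration; in SPARK these would be 'Pre' and 'Post' aspects the
# prover must discharge at analysis time, not asserts checked at runtime.

def saturating_add(a: int, b: int, limit: int = 255) -> int:
    # Precondition: inputs are non-negative, limit is positive.
    assert a >= 0 and b >= 0 and limit > 0, "precondition violated"
    result = min(a + b, limit)
    # Postcondition: result is bounded, and is either the true sum
    # or the saturation limit.
    assert 0 <= result <= limit
    assert result == a + b or result == limit
    return result

print(saturating_add(200, 100))  # clamped at 255
print(saturating_add(3, 4))      # 7
```

The difference in assurance is the whole point: an assert tells you about a violated contract after you ship; a prover tells you before you compile.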
Re:the old axiom applies (Score:3, Insightful)
Re:nearly unlimited funding (Score:2, Insightful)
If there was demand for programmers of the caliber you mention and companies willing to pay salaries deserving of such abilities there would be more people studying towards such a position.
Although you can teach a body all the skills necessary to program, you still need a certain level of competence that doesn't come from education. I could easily (with time) teach plenty of people I know how to program, but only a relative few will ever be able to actually invent on their own... Invention and innovation both being qualities I require the presence of when dubbing someone a guru.
On top of that, most of the people who have such intuition and innovative qualities are generally located in fields that don't offer them much pay. Mathematics, physics, and logic -- to name a few -- historically don't offer too much in the way of funds. These people love their jobs. It's not about the new car and the trophy wife; it's about discovering a new flag protein or integrating a difficult formula. It would be bad business for a company to pay more... they just don't have to.
But what does the customer need? (Score:4, Insightful)
Screw specifications. Nobody has them anyways.
Give me a clear, predefined spec, and I'll meet it. I'll guarantee bug fixes,too.
But that's not how software evolves.
Despite careful attention, despite voluminous meetings, emails, and specifications, I never get a clear idea what the client needs me to develop until AFTER a prototype has been built.
In fact, I'd wager that there's a quasi-quantum principle at work: you can either work towards the customer's actual needs, or towards the predefined, agreed-upon specification and costs. Addressing either means ignoring the other.
Consider it Schrödinger's cat applied to software: the software is half-dead, half-alive. Either it meets the needs of the customer (with the associated scope creep, bugs, etc.) or the originally defined specification. Releasing the software determines whether the cat is dead or alive.
It seems that:
1) People will commit, in aggressive fashion, that they need something until they get it, at which point, they'll angrily point out all the flaws in it.
2) People don't actually know what they need until they see that what they have isn't it.
3) When you take anything produced because of (1), and then compare that to the feedback produced by (2), you end up with cases where the code is producing a result unexpected in the original design.
These are called bugs.
4) The only intelligent way to proceed with (1) and (2) is to consider software an iterative process, where (1) and (2) combine with (3) and lots of debugging to result in a usable product.
Re:Here, here... (Score:2, Insightful)
This is the complete opposite of the current state of affairs, where people assume perfect hardware and buggy software. It should be "hardware that can and does fail" and "software that expects this and deals with it," because software doesn't age or have different faults depending on which copy it is.
Re:Bugs and Beta testing. (Score:2, Insightful)
A lot of these errors arise because software is designed by programmers and not by the users.
Part of my job is specifying new functions in my company's web applications. The normal flow of information is from me, as the business user, to the software designer to the coder, and then I give my comments on the finished product.
More often than not, after the first round we have to scrap the implementation of the new function and start over again. Not because the coder didn't do a good job, but because the designer didn't have a clue what my specifications meant or how they should be implemented.
How can I explain what I really want to somebody who doesn't know anything about my field of work? Whenever I speak with him, or her, I catch myself going into "stupid mode": I dumb everything down and leave out the exceptions and finer points, as I would with a new trainee.
The only times we were able to get something working from the first go was when I could speak directly to the coders with the designer acting as a moderator.
Personally I found it a lot easier to talk to them too. I would just tell them where I wanted things to be and how the program should react and the coders could give me immediate feedback if it was possible or not.
When I had to talk to the designer, I would never know what I would get back. I would tell him something like "in this table, exchange the shipment number column with the delivery number column," and he would come back saying that it was impossible, so they replaced the delivery number with the transport order number because, according to him, that would solve my problem. It didn't.
Re:nearly unlimited funding (Score:5, Insightful)
People differ in ability in every field; the bell curve is real, and only the people at the high end of the curve can be considered among "the best gurus". They will never constitute a large percentage of the group. Ever. Furthermore, there is usually a huge difference in performance between people in the top 10% of their field and those in the top 0.1%. Most people would consider those in the top 10% "the best gurus", but really it's only that tiny segment at the very top who deserve the appellation. Even then, you can expect a marked difference between those in the top 0.1% and those in the top 0.01%. Fact of life, folks.
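If you grant, for the sake of argument, that ability is normally distributed, the gap between those cutoffs is easy to quantify with the standard normal distribution:

```python
# Where the "top X%" cutoffs sit on a normal curve, assuming
# (debatably) that ability is normally distributed.
from statistics import NormalDist

ability = NormalDist(mu=0, sigma=1)

for top in (0.10, 0.01, 0.001, 0.0001):
    cutoff = ability.inv_cdf(1 - top)
    print(f"top {top:>7.2%} starts at {cutoff:+.2f} standard deviations")
```

The top 10% begins around +1.28 standard deviations and the top 0.1% around +3.09; each step up the curve thins the candidate pool far faster than the last, which is exactly the staffing problem the grandparent describes.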
Anyone old enough to remember AD/Cycle? (Score:2, Insightful)
IBM had a major initiative back in the mid-1980s called AD/Cycle, tied to SAA (System Application Architecture), which was based on these and similar ideas prevalent back then. This was the old "holy grail": an attempt to fix the waterfall methods of development, which had been in use since the early 1950s with mixed success at delivering software on time and on budget.
AD/Cycle involved not only IBM but a number of "AD/Cycle partner" companies like Bachman and KnowledgeWare. KnowledgeWare's CEO was the former scrambling NFL quarterback Fran Tarkenton. A Google search for "fran tarkenton knowledgeware" will turn up references to Jim Martin, as well as some interesting things about how the company ended up.
An incredible amount of development money went down the rat-hole chasing the AD/Cycle dream.
The problem turns out to be the difficulty, if not impossibility, of creating rigorous specifications which produce useful results in the face of problems which aren't very well understood at the outset. The less the requirement is for a "black box" with well-defined inputs and outputs the more this is likely to be the case.
Many problems turn out to be wicked, in the sense that there is a feedback loop between the implementation and the requirements. A classic book from the era was Wicked Problems, Righteous Solutions (http://www.amazon.com/gp/product/013590126X/102-5477977-4320940?v=glance&n=283155 [amazon.com]), which might be considered one of the Old Testament texts pointing toward today's "agile development" movement.
A non-software example of a wicked problem is city planning in which implementing changes in the road network, housing developments, shopping center location etc. all change the requirements for the same aspects.
Many wicked problems come from "requirements" which often do (or should or must) come from users. Often, the real requirements aren't known until an implementation is given to the users, who then might say, "yup, you implemented exactly what I asked for, but now that I see it, here's what I really want." Or, "Now that we've added this other thing (application, system, business division) why, doesn't this (work more like/interface with/replace/...) that."
Faced with this, a methodology based on "correct" construction from "rigorous" specifications simply moves the problem to debugging the requirements.
Until we do away with the need to adapt systems to changing and evolving requirements, which would likely involve eliminating users, this approach will have limited applicability and will need to stand beside other, more widely used incremental development models.
Re:nearly unlimited funding (Score:3, Insightful)
As you can see, even the simple 10 PRINT "HELLO WORLD" isn't bug free. To make bug free software you don't need to catch most errors, you need to catch every possible error.
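The point survives translation out of BASIC. Even a one-line "hello world" has failure modes; in the hypothetical Python sketch below, printing can fail when stdout turns out to be a closed pipe:

```python
# Even "hello world" isn't trivially bug-free: print() can raise
# OSError (e.g. BrokenPipeError if stdout is a pipe that was closed,
# as with `python hello.py | head -0`). Bug-free software has to
# account for every such path, not just the happy one.
import sys

def hello() -> int:
    """Return 0 on success, 1 if stdout was unwritable."""
    try:
        print("HELLO WORLD")
        sys.stdout.flush()
        return 0
    except OSError:  # BrokenPipeError is a subclass of OSError
        # Reporting the failure could itself fail; signal via exit status.
        return 1

status = hello()
```

Catching "most" of these is what ordinary software does; catching all of them, and proving you have, is what high-integrity software demands.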
Re:nearly unlimited funding (Score:3, Insightful)
Or on the other hand, "We could hire three college graduates, who will then do three times as much useful work in three times the amount of time required by the old guy with total experience greater than the three graduates put together!"
Just because guys get old, doesn't mean they stop learning. Good ones are always updating their skill set.
Re:nearly unlimited funding (Score:3, Insightful)
Maybe we could start by not assuming that CS grads should be going into industry. CS programmes should be teaching Computer Science - you know, the stuff that prepares you for a career in research. Industrial coders should be going through a software engineering program, in which they learn how to apply the results of scientific research to practical real world problems.
Just as with other sciences and engineering disciplines, there will likely be a lot of overlap in subject matter. But the fundamental focus is entirely different: there will be material covered in one programme that isn't covered in another, and the perspective taken on the overlap material will be quite different.
Sadly, many developers are graduates from CS programmes instead of engineering programmes, and too many of the "software engineering" programmes out there have little resemblance to the engineering training found in other disciplines - which is, IMHO, one of the reasons the world of industrial software development is so screwed up.
Re:nearly unlimited funding (Score:3, Insightful)
But if companies are going to just throw up their hands and say "we can't hire competent people, there aren't enough of them in the world," they only doom themselves to a continued shortfall in talent, and an increase in buggy software.
Part of the problem is simply that the higher you draw the threshold, the more you get into areas of natural talent, which seem to be more difficult to train. But an even bigger problem is identifying the people with the highest skills. When you're staring at a resume, the best and brightest with the skills to write the most reliable code don't have resumes that look much different from everyone else's. Programmers can usually spot which other programmers have abnormally high skill in this area after working with them, so you can find these people by word of mouth, but that doesn't automatically translate into resume content that lets you pick the correct employee out of the crowd.
My best advice to anyone reading is, if you think you can code like this, and you want to seek out a high salary job based on your unique skill, then you can try proving your skill by releasing open source code which demonstrates this fact. This could give you a boost, but not always one comprehended by the people responsible for hiring at all locations.
Re:nearly unlimited funding (Score:4, Insightful)
What's the quickest way to paint the ceiling of the Sistine Chapel? Are you going to be able to hire 30 artists with enough talent, or should you stick with the one who is qualified, plus a couple of assistants, and just wait a few years?
Can you train 30 artists to be good enough to do the work? How about 300?
After a point, being a super-coder is just as much of an art. You won't be able to produce these people, it's kinda in their soul. Great musicians pick up their first instrument and know it's what they are going to do--what they are made for. My guess would be that if you have had access to a computer for over a year and you aren't coding yet, you'll probably never be a really great coder--a real computer artist couldn't have resisted.
Hmm, maybe a better word than Guru or Architect would be Computer Artist or Code Artist? It would convey the relative rarity much better.
This should be obvious. Every other art has its gurus, and they are usually the top 1%; 99% of the others in the field will simply never be able to do what the gurus do, regardless of training or experience. I'll never play the piano like a savant who started at age 3, period.