Can Software Schedules Be Estimated? 480
"A recent academic paper, Large Limits to Software Estimation (ACM Software Engineering Notes 26, no. 4, 2001), shows how software estimation can be interpreted in algorithmic (Kolmogorov) complexity terms. An algorithmic complexity variant of mathematical (Gödel) incompleteness can then easily be interpreted as showing that all claims of purely objective estimation of project complexity, development time, and programmer productivity are incorrect. Software development is like physics: there is no objective way to know how long a program will take to develop."
Lewis also provides a link to this "introduction to incompleteness (a fun subject in itself) and other background material for the paper."
Sure they can... (Score:3, Interesting)
And the remaining 5% of the project takes another 95% of the time.
Re:Software Schedules (Score:2, Interesting)
As long as it takes to get it right. To a point, this is a barrier that open-source software doesn't hit to the same extent, since development is continual until the software is no longer required.
An interesting question is when Linux/*BSD development will stop. Will it be surpassed by another project (or projects), or evolve toward perfection?
Right place to Ask (Score:2, Interesting)
Anyway, my personal theory, based on blind idealism, is that it is extremely difficult to get an estimate for completion right; short-term goals are fairly easy to predict, because you have most of the information you require to make those predictions, but longer-term estimates are much more of a wild guess. I personally think it's a consequence of chaos theory - a butterfly flutters its wings in Brazil and your software project instantly takes another two years! More seriously, small errors in estimating components of a large project can induce large errors in estimating the time and resources needed to complete the whole project.
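That "small errors compound" point can be put in a toy simulation (every distribution and parameter here is made up purely for illustration): if per-task errors share a common bias -- shared optimism, requirements churn -- the project-level error doesn't average away, no matter how many tasks you sum.

```python
import random

def overrun_percentiles(n_tasks, trials=10_000, seed=1):
    """Toy model: every task is estimated at 10 days.  Each trial draws
    one project-wide bias (shared optimism, requirements churn) plus
    independent per-task noise.  All parameters are illustrative."""
    random.seed(seed)
    ratios = []
    for _ in range(trials):
        bias = random.lognormvariate(0, 0.3)        # correlated error
        actual = sum(10 * bias * random.lognormvariate(0, 0.4)
                     for _ in range(n_tasks))
        ratios.append(actual / (10 * n_tasks))      # actual vs. estimate
    ratios.sort()
    return ratios[trials // 2], ratios[int(trials * 0.9)]

median, p90 = overrun_percentiles(40)
print(f"median overrun x{median:.2f}, 90th percentile x{p90:.2f}")
```

The per-task noise averages out across 40 tasks, but the shared bias doesn't, so the 90th-percentile overrun stays large however big the project gets.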
Linux is right with its "release when ready" motto. Since it is impossible to tell when it will be ready over such a wide range of groups and interests, you have to pick your release moments when they happen, not try and force them to happen.
Slashdot readers are students? (Score:3, Interesting)
i've always thought most
-c
Re:Estimates based on motivation (Score:3, Interesting)
A key to fixed-cost is that it takes practice. Try it on a small scale before you commit to it on a larger scale, to avoid large-scale failure...
Re:from a Consulting viewpoint.. (Score:5, Interesting)
The bulk of the work of programming consists of getting all the complexities and nuances down cold. Once you really and completely understand what is required, coding is trivial.
This leads to a thoroughly unrealistic method of estimating software costs:
1) Work for months on the specs.
2) Get the customer to sign on to those incredibly detailed specs, even though he doesn't understand them.
3) Go and code it, no spec changes allowed.
8-)
The article mainly talks about the mathematics of estimating complexity. This is a lot like the proof that you cannot determine when or whether a computer program will end -- it's true for pathological programs, but it has little relevance for the real world. You try to write the code so the conditions for the program to end are clear. If it gets into an endless loop, you probably got a conditional expression backwards and you'll recognize it immediately once you figure out which loop is running endlessly... Likewise, there may be well-defined specifications for which it is impossible to estimate the coding time, but the usual problem is poorly-defined specs, which obviously makes any estimate a guess.
Re:In all seriousness, this is the wrong place to (Score:4, Interesting)
Slashdot Reader != Slashdot Poster (Score:3, Interesting)
That aside, in my experience in software development (only 3 years), ballparking (1-3 days, 1 week-3 weeks, 1 month-3 months) is usually possible, but tends to become wildly inaccurate beyond a few months. Regardless of what method we use to determine timelines, some things always seem to slip, while others take a fraction of the expected time.
Re:Of course they can be estimated. (Score:3, Interesting)
The software industry doesn't hire anyone. Software companies hire people, and a company that behaves like you described won't be around for long if software is their main source of revenue.
Also, management != good software engineering. Planning != good software engineering. These are all factors that go into a good software project but people shouldn't think that if they draw class diagrams before they start coding, they're suddenly software engineering.
On the other hand, you need to look at what's best for the project - it isn't always a large, formal approach to software, especially for small projects. Being too rigid can be as bad as being too loose with your design. I've seen projects design themselves into a corner before the first line of code is even written.
Software like a factory (Score:3, Interesting)
1. Discipline. Your average programmer will have read about various programming methodologies, but skipped past the parts which would make their code an easy-to-reuse template, in favor of fast development time. As with any gamble, you should know at exactly what point you want to quit, have an A-line for version 1.0's feature set, all that jazz.
2. A big code base. Because of step 1, or maybe just a lack of previous projects, one's code base is typically limited to what you can find in a computer science textbook. Having a good database of classes and patterns that have turned out to be useful, and having easy access to this database for the information you need is the difference between a library and a code base.
3. Incremental development. Throwing together a large software project, all at once, and then testing the whole thing is very tempting, and happens more often than most people like to admit. What should be happening is a series of incremental integrations into the final product, with unit tests of each part. Otherwise your large project can become a giant, complex nightmare. Making complex software shouldn't be made quite so complicated.
While making a "software assembly line" takes slightly more work and trouble than your average car assembly line, it has incredible cost savings in the long run.
Politics (Score:2, Interesting)
After the October date was missed there was a meeting of all the Project Managers, Program Managers, Subject Matter Experts, and other people involved in the project. They worked round the clock for nearly 3 days to come up with a revised project plan and estimate of how long it would take to finish the system, test it, and bring it online. The number they came up with moved the date into late January. Executive management didn't like this and decided that the new date would be mid-November. Their plan for squeezing out these extra two months? Just make everyone work harder. Needless to say we've got a group that is completely burnt out and getting less done in more time. Nifty.
As long as "suits" continue to make schedules based on business needs (read "corporate politics") and not based on the complexity of the problem this is going to continue to happen.
--john
When constants are constant and when they aren't (Score:3, Interesting)
There is nothing wrong in principle with measuring what has happened in the past, and using that to predict what will happen in the future, before you discover why it works like that.
For instance, you can measure that, throughout the year, the average time between sunrises is 24 hours. You can use that number even though the only explanation you might have for it is "it seems to work".
Of course, when you apply this to software development time estimation, it falls down for a number of reasons. It's not constant across technologies. It's not constant across types of project. It doesn't take into account the variation in technological risks (i.e. if you have done something like this before, you will spend less time finding ways to do stuff). It doesn't scale linearly with the size of the project. It varies across individuals. Etc., etc.
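As a toy sketch of that "measure first, explain later" approach (the project names and numbers below are hypothetical), here's the simplest possible historical-rate predictor, which inherits every one of the caveats just listed:

```python
def predict_days(historical, new_size):
    """Naive predictor: days-per-feature measured on past projects,
    applied to a new one.  It only works to the extent the 'constant'
    really is constant: same team, same technology, similar scope."""
    total_days = sum(days for days, _ in historical)
    total_size = sum(size for _, size in historical)
    return total_days / total_size * new_size

# (days_taken, size_in_features) from hypothetical past projects:
past = [(30, 10), (45, 18), (25, 8)]
print(predict_days(past, 12))   # -> about 33 days
```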
Unknown? (Score:2, Interesting)
I thought that's where GoF patterns could help. When I've been asked to explain design patterns to PHBs, the analogy I've always used is structural engineering - e.g. for a bridge, we could have box girder, suspension, cantilever, etc. Design patterns are just like that.
Of course, in the real world, this is only a partial solution. Over 90% of software project failures are down to requirements. If we could get that right, then software development could indeed be a "proper" engineering discipline. The only place it is, though, is where people are prepared to pay what it takes to get it right - flight control systems, etc. IIRC, one of the few groups to have achieved SEI CMM [cmu.edu] level 5 is the one that develops the space shuttle software. At the last count, their code was costing them over $1m a line. How many people would put up with what that would do to the cost of their text editor?
Components (Score:4, Interesting)
Yes, but. The important components of a skyscraper are steel beams. Put them up correctly, after calculating loads and stresses, and it doesn't matter what the twenty tons of stuff you have sitting on the 27th floor is. It doesn't matter if the beams come from different foundries, either, because the specs are clear enough (dimensions, strength, where the bolt holes are).
Now try putting together a typically complex business software solution, meshing a bunch of different, reasonably good, existing programs and components with some custom code and configuration. Even where there are reasonably good standards spec'd in some areas of the project, if you're not solving new problems it shouldn't be a software engineering project at all - it should just be system administration using the available solutions. That it's real software engineering means you're running into unpredictable surprises where the components at hand don't fit without a great deal of extra labor.
A parallel can be found in work on the portions of the New York City infrastructure that are under the streets: we still have wooden water mains in some places from the mid-1800s, mixed with gas, electric, steam, and sewer lines, plus subways... most of which was not documented to current standards on either installation or subsequent changes, despite most of it being reasonably well done by the standards of its time (pretty amazing, those wooden water mains still working, right?).
So what happens when we finally go in to improve one of the services - say, lay new water mains? Other stuff is found that's in the way where you didn't expect it, or that needs fixing on examination when you didn't expect it. Meanwhile you've got the street ripped up, but you have to cap it again quickly or traffic is snarled for too long. So a single block's 4-week project can stretch out over a year - dig up the street, fix one problem, discover more, recap while designing and provisioning the next stage, repeat - because it's all stuff that needs to be done once you get into it, and that can't be properly assessed until you get into it.
Well, software in the real world isn't as old as New York, but if anything it's more complex, and the layers of crufty stuff that have to be accommodated in current projects are as considerable, and often as poorly documented by current standards (which will always advance so as to obsolete whatever we do now). Building a skyscraper, by contrast, is just a sysadmin job. Put the beams and bolts in the normal places, and it stands.
Theory vs Reality (Score:2, Interesting)
In Theory:
- All your resources are available to you when you need them for the length of time you need them.
- The client is with you all the time so that they are available to comment on the direction development is going.
- An enormous amount of time is spent in analysis to make sure the project goes in the right direction.
- Every task is estimated and ranked and put into a timed, development iteration schedule. If time runs short for a specific iteration then lower ranked features are "descoped."
The idea is that you have a fixed budget and a fixed end date and that based on these the one degree of freedom is the scope of the project. Therefore if anything changes it is the number of features.
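The "scope is the one degree of freedom" idea can be sketched in a few lines (the feature names and numbers are invented): fill a fixed-length iteration highest-ranked first, and whatever doesn't fit gets descoped.

```python
def plan_iteration(features, capacity_days):
    """Given (name, rank, estimate_days) tuples, fill a fixed-length
    iteration highest-rank (lowest number) first; whatever doesn't fit
    is descoped.  A toy illustration of scope as the free variable."""
    scheduled, descoped = [], []
    remaining = capacity_days
    for name, rank, days in sorted(features, key=lambda f: f[1]):
        if days <= remaining:
            scheduled.append(name)
            remaining -= days
        else:
            descoped.append(name)
    return scheduled, descoped

features = [("login", 1, 5), ("reports", 3, 8), ("search", 2, 6), ("theming", 4, 4)]
print(plan_iteration(features, 12))   # -> (['login', 'search'], ['reports', 'theming'])
```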
In practice, the theory is adhered to closely, but other factors enter into the project, like:
- Scope Creep. This involves features that were ranked lower in the requirements and were descoped but become necessary for the end product to be useful or features that weren't caught by the requirements process but are necessary for the end product to be useful.
- Requirements Interpretation. They were nailed down, or so we thought.
- Budget. If the estimate comes in for 4 developers and a lead for 3 months but the budget only allows for 2 developers and a lead then there's an issue.
- Resources. If the client can't or won't provide the resources you need to extract the inputs you need from other systems then your schedule will be thrown for a loop.
- Client Participation. Asking for 100% of your client's time on the project is an enormous request. And not always doable.
How could it have been improved?
- The client could have provided the resources we needed. We were extracting information from some host databases and had a hard time figuring out what fields, rows and tables we needed.
- Our BA's could have done a more thorough job on the requirements. There were things that were missed or weren't defined accurately enough. We developed integer benchmark times when two decimal places were required.
- Our client could have sat with us to make sure what we were doing was what he wanted (which was what was originally agreed to). Nothing quite like having the client say that a particular feature was not quite what he wanted.
- Us developers? Well, there are always things that could have been done quicker in hindsight. I did some JavaScript that - in retrospect - could have been a hell of a lot more efficient. I aim to correct that when I get a moment.
- The function estimates were off and that caused some late nights and freaking out. It really is an art form.
Overall, the model was nice, but our lack of adherence to it caused us unnecessary grief. While the client got a product he could use, the process would have been more satisfactory and less painful if we hadn't strayed.
The lesson is that theory is all fine and dandy but it doesn't work if you don't follow it.
IMHO, as per
J:)
Software can be scheduled... (Score:4, Interesting)
Painless Software Schedules [joelonsoftware.com] is a great one and you will get sucked in just following the links from this one essay.
Reductio (Score:3, Interesting)
Next, get together a team of programmers. Set them to work on a program which proves {insert your favorite unsolved mathematical conjecture here}. It turns out you actually don't need the team at all: just run your software project estimator, and if it comes out with a finite amount of time to complete the program, you know that the conjecture is true.
In other words your software estimator can be used to solve the halting problem.
OK, this is a joke, but it points out something about the question. I once had a CS professor who required that we write requirements statements for all of our assignments. She forbade us to include halting times, because "you can't predict whether a program will halt or not." To which I wanted to reply, "About that 'hello world' assignment..."
The lesson is that there are some cases to which a rule like this applies and others to which it does not. There are some projects that can be estimated with simple tools, some that can be estimated with complex tools, and some that are not practical to estimate at all. Even fairly seat-of-the-pants estimates work pretty well on relatively simple problems, provided you break things down a bit and do an honest estimate of the costs of the individual deliverables and the individual functions you know you'll need to make them work. About the only methods that never work are pulling a number out of the air based on how much the project scares you, or using wishful thinking (whether the source is your boss or you). Nobody can give good estimates when you spring the question on them with no time to prepare. My boss's favorite (and my least favorite) questions start with "how hard would it be..." and my favorite (and his least favorite) answers start with "It depends..."
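A minimal sketch of that break-it-down-and-sum approach (the deliverable names and day ranges are invented): give each deliverable an honest low/high range and roll them up. Summing ranges naively overstates the spread if the errors are independent, but it's the honest seat-of-the-pants version.

```python
def bottom_up(tasks):
    """Sum per-deliverable (name, low, high) day ranges into a
    project-level (low, high) range."""
    low = sum(lo for _, lo, _ in tasks)
    high = sum(hi for _, _, hi in tasks)
    return low, high

tasks = [("parser", 3, 6), ("storage", 5, 10), ("UI", 8, 20), ("tests", 4, 8)]
print(bottom_up(tasks))   # -> (20, 44)
```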
Nonetheless, my experience with past projects of the kind that I do means I can do a pretty good job with relatively unscientific tools, provided the problem is like one I've solved before. However if you are writing software for space flight or some other kind of highly complex mission, I could estimate until I was blue in the face and it wouldn't be worth a damn. You want to hire somebody with experience in such projects and who has methods of estimation well calibrated from similar past projects.
I think the particularly difficult cases are ones involving software maintenance -- extending software to perform things that weren't originally factored into the design, or adapting the software to run when the systems it depends upon change in some unpredictable way. These are cases where surprises can throw the best-laid estimates well off.
Estimates should include debugging (Score:3, Interesting)
* A tester or test suite exhibiting the bug
* Someone recognizing that it is a bug
* Enough data being gathered to define the bug ("It hangs sometimes" or "I don't think the results are always correct" doesn't cut it).
* Enough eyeball hours to find the bug (this in itself makes the process equivalent to solving a crime. Do we ask the cops to schedule crime solving?)
* About two minutes (average) to devise and implement a fix
This has to be done for N bugs, where N is unknown. People who think you can estimate software development schedules with any accuracy are either dreaming or assuming that they just have to estimate how long it will take to get it coded, not how long it will take to get it working correctly.
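The point about the unknown N can be put in code (the bug-count and find-time distributions here are pure invention, chosen only to make the shape of the problem visible): even with a known coding estimate, the debugging tail dominates and varies wildly.

```python
import random

def debug_phase_days(coding_days, trials=10_000, seed=7):
    """Toy model: total time = coding + finding N bugs, where N is
    unknown up front.  Bug counts and find times are made-up
    distributions; the two-minute fix is negligible next to the search."""
    random.seed(seed)
    totals = []
    for _ in range(trials):
        n_bugs = random.randint(coding_days // 2, coding_days * 3)
        find_hours = sum(random.expovariate(1 / 4) for _ in range(n_bugs))
        totals.append(coding_days + find_hours / 8)   # 8-hour days
    totals.sort()
    return totals[trials // 2], totals[int(trials * 0.9)]

median, p90 = debug_phase_days(20)
print(f"20 coding days -> median {median:.0f} days, 90th percentile {p90:.0f} days")
```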
-- MarkusQ
You can Estimate a Software Engineering Project (Score:2, Interesting)
OK, I'm going to dive into the classic analogy to traditional engineering: the bridge project. Nobody ever answers the question "How long will it take to build a bridge?" right off the bat. Every aspect of the project is scrutinized and estimated separately. In other words, to build a bridge we need to do A, B, C, and so on.
Now, back to the software arena. There is a big difference between a software developer and a software engineer. Software developers "hack" or piece together code that works, but there's been no real analysis done to support it (my definition - feel free to argue). Software developers are comparable to general construction contractors. For example, a contractor may build a deck without much analysis (i.e. how will it behave in an earthquake; what is its failure temperature; etc.), but a major structure (like a bridge) requires an in-depth analysis.
A software engineer, on the other hand, follows a much more rigorous analysis and design technique that can be used to estimate the overall time a project will take. To do this, one doesn't estimate how long it's going to take to build the entire project. Rather, one should divide the task into sub-tasks, and continue to do that until one ends up with tasks that are estimable within a defined region of uncertainty.
To do this, a certain amount of design needs to occur. Admittedly, the estimate for the design can sometimes be a shot in the dark. But a good design not only gives a good estimate of the time required to complete a project; heuristics about the end product can also be derived from it. IMHO, the coding becomes an afterthought, a footnote to a good design.
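The recursive decompose-then-roll-up idea can be sketched like this (the tree and day ranges are hypothetical): leaves carry their own low/high estimates, inner nodes sum their children.

```python
def estimate(task):
    """Walk a work-breakdown tree: a leaf carries its own (low, high)
    day estimate; an inner node's range is the sum of its children's.
    Decompose until every leaf feels estimable, then roll up."""
    if "days" in task:                      # leaf: (low, high) in days
        return task["days"]
    lows, highs = zip(*(estimate(t) for t in task["subtasks"]))
    return sum(lows), sum(highs)

project = {"subtasks": [
    {"days": (2, 4)},                                    # spec review
    {"subtasks": [{"days": (3, 7)}, {"days": (1, 2)}]},  # backend
    {"days": (5, 9)},                                    # frontend
]}
print(estimate(project))   # -> (11, 22)
```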
OK, I'm done ranting. Start the flames.
Human insight is noncomputable --Penrose (Score:2, Interesting)
The Large Limits paper uses pretty much the same proof, but doesn't add Penrose's assertion that human thought isn't computable, and that algorithmic limitations therefore don't apply to it.
well, yes, but it depends. (Score:2, Interesting)
I am in the process of completing a research report on this very issue. The background is the engineering development project modelling software SimVision [vite.com], which we [dnv.com] have undertaken to modify for use with software development projects.
The answer is yes, but it depends on a lot of things, because programmers are not like other kinds of engineers and software engineering is not like other kinds of engineering. To wit:
It seems that managers improve their estimating skills with experience, so using experienced managers is a good tip.
There's a lot more to it than this, of course. Unfortunately, our report is confidential just now.
Software Development==Engineering? (Score:3, Interesting)
I agree with you up to a point. I am an engineer. I have worked in process engineering, at AMEC, and now work in design engineering. I have not done much coding, but I think that software development probably relates most closely to design. As I said, I now work in design. In design you can estimate a schedule, but that schedule is dependent on everything going perfectly the first time, which we all know doesn't happen. Nor does it include problems with parts we have to design around, which we then have to wait on, or a change in the requirements of our part. (Sound familiar yet?)
This is all in the conceptual, design phase. It doesn't include the actual production of a physical part. That all happens later, after our 3D model has been packaged correctly. Once the physical part has been made, then there are the joys of testing and testing and testing...
What I'm trying to get at, is that I've experienced several forms of Engineering (Yes there are many), and I think that Software development relates most closely to Design. In design, there is no reasonable way to schedule out how long things will take. We just make an estimate based on what's happened in the past, and change things as we go along.
Re:Of course they can be estimated. (Score:2, Interesting)
But software development, which the other poster was talking about, isn't necessarily software engineering.
I've been titled "software engineer" (with appropriate prefixes) most of my salaried career, but when I made up my own title as an independent consultant, I went with "software craftsperson", because engineering, itself, isn't the major focus of the sort of software I am usually called upon to develop (operating systems, compilers, generally software-development toolchains).
Of course, I try to improve the engineering-to-black-art ratio of software I work on compared to the "norm", because I believe the engineering approach, when usable, is superior.
But actually calling myself an "engineer" seemed, and still seems, a case of calling myself by a title I respect while not being willing to insist on meeting the standards normally associated with that title.
Generally, I find the industry -- including clients and customers -- prefer "good enough yet modestly expensive and time-consuming" to "well-engineered, way too expensive and never accomplished", which is what estimates produced in the early stages of a project tend to look like for "development/hacking/coding/programming" vs. "engineering" as respective approaches.
And since most of my clients view the software I am to develop for them as merely one component in a large scheme of software, man-power, and so on, it really is up to them to best determine and evaluate their own optimization function and then decide how they want me to approach my work.
Naturally, if I was asked to develop software that controlled life-or-death machinery, I'd demand a higher standard. But the real issue would be, would the client demand such a high standard that they wouldn't even consider me for the work, given my history of working on the sort of software that is widely known to be critically buggy despite decades of industry-wide experience developing it -- operating systems, compilers, text editors, assemblers, linkers, and similar utilities?
Fortunately for me, the free market highly values someone like myself who can churn out productivity-enhancing tools (say, a 5% improvement optimizing a code generator), helping hundreds or even thousands of others make better use of their time and computing resources.
So, whether I could actually engineer something like a FORTRAN 77 compiler for a specific '80s-era computer, I can't exactly say. I'd like to think I could. But nobody ever asks me for that. Instead, they ask for new features, better performance, debugging help, and the like, always involving software that has been (or will be) developed with only a modicum of "engineering" used.
Within that context, my use of "engineering" boils down to using proven software-development and coding techniques in usually-small, specific instances -- in the nitty gritty details of a project -- such as avoiding situations where variations of the same original data are separately entered and maintained, yet not consistency-checked as part of a product validation process (such as during a build). That sort of thing is mainly a matter of saving me some embarrassment when I screw up, plus helping others who'll maintain the code down the road from making easy mistakes that end up being hard to track down.
(And on most projects on which I work, I'm treated as if I'm "going overboard" by most of my fellow developers, who seem to believe that it's okay to spend hours debugging vast, intricate code only to discover the problem is a mere typo that a simple sanity-check could have found in a few milliseconds. Sigh.)
Re:Software Schedules (Score:2, Interesting)
The biggest problem of all is people with experience in small projects trying to apply it to large ones. Working on a million-LOC program and a 20k-LOC program simply cannot be compared. We can speak of orders of magnitude of difference, but unless you've worked on very large systems, you really don't have a clue.
And of course, the vast majority of people who have time to write books and papers work on the small systems.
my time estimation method... (Score:2, Interesting)
Take how long you think it'll take, add one, and go up to the next denomination of time.
Example: 3 days will take 4 weeks, 4 weeks will take 5 months, and so on.
it's scary how accurate this is
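For what it's worth, the rule fits in a few lines (the unit list is the obvious one; going past "years" is left as an exercise):

```python
UNITS = ["days", "weeks", "months", "years"]

def buddys_estimate(n, unit):
    """The tongue-in-cheek rule above: add one, then bump the unit."""
    return n + 1, UNITS[UNITS.index(unit) + 1]

print(buddys_estimate(3, "days"))    # -> (4, 'weeks')
print(buddys_estimate(4, "weeks"))   # -> (5, 'months')
```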
--buddy
Re:Of course they can be estimated. (Score:3, Interesting)
When looked at in the context of practical experience, this is quite false. We have been building buildings for at least several thousand years, with some tremendous successes and some spectacular failures. I live in Toronto, where we were lucky (I think) enough to have the first major league baseball stadium with a retractable roof. IIRC, the original cost estimates were in the vicinity of $100 million (CDN). When the stadium opened (pretty close to on time), the cost was actually around $480 million (CDN).
I guess this somewhat proves you can estimate either cost or time accurately, but not always both. My experience in the IT industry has shown that most problems can be overcome with enough resources. Unfortunately, resources are not limitless, and therefore concessions must be made. This generally means the completion date slips, or functionality is reduced, or a combination of both.
Matching XP and fixed requirements (Score:3, Interesting)
A couple of posters asked this question above: How do we reconcile XP short develop/test cycles with a fixed project plan + bid?
The answer is simple: During the planning and estimate parts we focus on defining the problem domain and a set of solutions for it. We don't focus on too many implementation details.
XP techniques are applied to solving each specific problem found in the requirements. For example, the problem may be something like "how do we decode this math-intensive file the fastest?". There usually are two or more answers to such a problem. First we define an interface, then we try two different solutions in parallel. The one that best meets the criteria wins, and we move on to the next problem.
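A toy version of that tournament (the problem here -- summing squares -- is just a stand-in for the real math-intensive decoding; nothing in this sketch is from the original post): agree on one interface, check the candidates give the same answers, then let the clock pick the winner.

```python
import timeit

# Two candidate implementations behind one agreed interface.
def sum_squares_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_squares_builtin(n):
    return sum(i * i for i in range(n))

candidates = [sum_squares_loop, sum_squares_builtin]
# Correctness first: both must agree before speed matters.
assert len({f(1000) for f in candidates}) == 1
# Then the faster one wins and we move on to the next problem.
winner = min(candidates, key=lambda f: timeit.timeit(lambda: f(1000), number=200))
print(winner.__name__)
```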
The thirst for features suffered by some people is often the result of poor design choices in the beginning of the project. If additional features are required, and the analysis was done correctly, you'll find that these new features simply extend solutions you were already working on (or solved). Thus, XP comes to the rescue again by letting you add the new feature without throwing the schedule out the window. Think about it: If a new feature forces someone to re-write a whole system then something must've been overlooked during the requirements analysis phase.
The most important part of this process is not to start coding and testing until the business requirements are clearly defined. We've been guilty in the past of coding before understanding the problem completely; we try to avoid that trap now. That is probably the single most relevant cause of software project delays.
Cheers!
EPaper may be correct, but is irrelevant (Score:3, Interesting)
Now, this paper makes a hell of a lot more sense to anyone who's read Hofstadter's Gödel, Escher, Bach, but I suspect that many, even most, Slashdotters haven't read it.
What makes the paper irrelevant is that we don't use formal systems to estimate software. We use our own heads. We use hunches. We use intuition. These things are informal systems, capable of forms of reasoning that no formal system can achieve. That's what Gödel proved.
The paper is saying that you can't take a spec, give it to an estimator program, and have the program write the estimate. You can give the spec to humans who write estimates for parts of it, feed that into an estimator program (like a spreadsheet), and you can get an estimate, but you simply cannot remove the human from the loop.
Several points to be raised -- is it all academic? (Score:4, Interesting)
But herein lies the rub -- exactly how many software systems are "completely new?"
Damn few!!
The average software project in an average industry will be primarily a repackaging of previously solved problems. The majority of integration tasks will be sufficiently similar to previous integration tasks as to be known quantities.
You will be left with a small number of "sub problems" which are unique and new. But now we have a situation where the caveats of the article are very important. Specifically, if we have decomposed the programming tasks to a sufficient degree, it should be the case that the estimation is tractable.
Also, it should be noted that the author assumes a good estimate is one obtained through formal methods that is objectively defensible. However, in project management, a good estimate is one that is believable and acceptable to all stakeholders in the process. The method for obtaining the estimate is not important.
Moreover, good project management will include some significant up-front analysis. One common technique (at least at companies with good PM track records) is to run "Monte Carlo" simulations of project work with large variances in scheduled-vs-actual work. With a run of a few thousand simulations, the processes that are most important to the time and budget performance of the project can be identified.
These "key" work packages are often non-obvious without this type of simulation work. However, with a good work breakdown structure and a good simulator, it is possible to generate a reasonably accurate picture of project performance based on what is not known.
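A crude sketch of that kind of simulation (the work packages and ranges are invented, and real PM tools report fancier sensitivity measures): draw each package's duration from its range many times, and see which package's draw tracks the project total most closely.

```python
import random

def key_task(tasks, trials=5_000, seed=3):
    """Monte-Carlo the schedule: tasks are (name, low, high) day
    ranges.  The package whose duration correlates most strongly with
    the project total is the schedule driver."""
    random.seed(seed)
    draws = {name: [] for name, _, _ in tasks}
    totals = []
    for _ in range(trials):
        total = 0.0
        for name, lo, hi in tasks:
            d = random.triangular(lo, hi)
            draws[name].append(d)
            total += d
        totals.append(total)
    mean_t = sum(totals) / trials
    def corr(xs):                      # Pearson correlation with the total
        mean_x = sum(xs) / trials
        cov = sum((x - mean_x) * (t - mean_t) for x, t in zip(xs, totals))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_t = sum((t - mean_t) ** 2 for t in totals)
        return cov / (var_x * var_t) ** 0.5
    return max(tasks, key=lambda t: corr(draws[t[0]]))[0]

tasks = [("spec", 4, 6), ("integration", 5, 40), ("ui", 8, 12), ("testing", 6, 14)]
print(key_task(tasks))   # the wide-variance package dominates the schedule
```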
This means that in the "real world" of business, the article's claim is irrelevant!!
We don't NEED objectively defined and defensible estimates. Instead we need estimates that the project stakeholders (which includes the people doing the work) can agree to.
We don't NEED our estimates to be generated by formal methodologies. Subjective estimates backed up by years of experience are just as good, and often better, from a planning perspective.
This whole article strikes me as another programmer trying to show how dumb the business people are. Hey folks, good business people KNOW that estimating is hard and that it isn't objective. But just because something isn't objective doesn't mean it can't be done well. It is possible to build models that compensate for unknowns if you can do enough decomposing of the problem to limit the unknowns to a small, well-defined, manageable few.
So, in the view of this PM, this is all just academic and has no bearing on the real world.
Optimism and ego as a source of underestimation (Score:2, Interesting)
Another reason that developers tend to underestimate development time is that they tend to have very healthy egos when it comes to technological issues. Again, when facing the complexity of modern code and systems, this is probably a healthy defense mechanism.
But when you couple all of this with a management that wants to believe deflated time estimates, it's no wonder that most projects end up taking more time than initially thought.
Re:Estimates based on motivation (Score:3, Interesting)
I work with the customer to divide the project up into phases / steps / iterations / releases / whatever. Group the most vital core pieces together and do them first, at a fixed cost. As requirements change, these changes either go into future fixed-cost releases, or they are done hourly if requested. Thus, the overall project is not fixed, but at each stage the customer knows what they are buying at what price, and does not have the worry of the "meter running".
There is some related explanation (not a sales pitch) about it on my web site:
http://kylecordes.com/story-182-shared-risk-prici
Re:Optimism and ego as a source of underestimation (Score:3, Interesting)
Carleton Sheets, a man who talked about how to buy real estate on his instructional tapes, said something useful which I decided I can use in estimating time requirements for various fixes:
We need to learn to ask for the proper amount of resources, and to point out that less than the minimum makes it impossible to respond within the requirements, no matter how much someone wants it to happen. (As Brooks points out, it doesn't matter how many women you throw at the task, it still takes 9 months to produce a baby. Demand the baby be brought forth in less time and you either get a dead fetus (and possibly mother) or a sickly premature baby.)
We need to learn this because, if you are consistently wrong on your estimates, eventually you get the "boy who cried wolf" syndrome: nobody believes you any more, and all of the estimating systems become what everyone knows they are: a joke. It's actually no wonder "most" projects end up being cancelled. They take too long (because the people who were supposed to implement them were too aggressive in what they would deliver) and cost too much (because they routinely run over because the estimate was wrong in the first place).
Paul Robinson <Postmaster@paul.washington.dc.us>
Two Cents From a Project Management Lifer (Score:3, Interesting)
If the business requirements have been properly defined and management discipline exercised to keep within the original scope, every estimate I've developed -- using a variety of methods over the years -- has been successful. But those instances where the specs continually change, the business requirements are "discovered" along the way, and/or new requirements are added to the mix have all been failures. This has been true whether I've led teams doing something "no one's done before" or the "same old thing" again.
Kudos to everyone here who has posted information on the REAL solutions, in the form of risk management, scope containment, good old-fashioned discipline, and the like.