Can Software Schedules Be Estimated?
"A recent academic paper, Large Limits to Software Estimation (ACM Software Engineering Notes 26, no. 4, 2001), shows how software estimation can be interpreted in algorithmic (Kolmogorov) complexity terms. An algorithmic-complexity variant of mathematical (Gödel) incompleteness can then easily be interpreted as showing that all claims of purely objective estimation of project complexity, development time, and programmer productivity are incorrect. Software development is like physics: there is no objective way to know how long a program will take to develop."
Lewis also provides a link to this "introduction to incompleteness (a fun subject in itself) and other background material for the paper."
Software Schedules (Score:4, Funny)
1) As long as it takes.
2) Take your best estimate, double it, and add 5 or something....
I prefer "as long as it takes." Otherwise you end up with something like Windows Me.
Re:Software Schedules (Score:2, Interesting)
As long as it takes to get it right. To a point, this is a barrier that open-source software largely avoids, since development continues until the software is no longer needed.
An interesting question is: when will Linux/*BSD development stop? Will it be surpassed by another project (or projects), or evolve toward perfection?
Re:Software Schedules (Score:2)
EFGearman
Re:Software Schedules (Score:3, Insightful)
The MIT Method is to take as long as needed to get a task done "right," regardless of cost and schedules.
The New Jersey method calls for solving 80% of the problem, and putting off 20% until later.
The MIT method results in more project failures than the New Jersey method. Microsoft epitomizes the New Jersey method, as does open source. Multics followed the MIT method, and was never actually finished, just killed off years later...
If anyone has a reference for the "MIT vs NJ" in its original form, please post it.
Re:Software Schedules (Score:3, Informative)
The standard multiplier used is PI.
There are also some interesting results on programming speed in Prechelt's comparison of different programming languages: an article [ira.uka.de], a tech report [ira.uka.de].
One of the conclusions is that scripting languages such as Python or Perl are about two to three times as fast to program in as Java or C/C++, at least in small projects. The script programs were also about half as long in lines of code. There were also some differences in the reliability of the solutions - Python and Tcl scored very well compared to C, although the small sample size for C may make the results misleading.
I'd personally be very interested to see better data on the differences between C and C++. I've recently been working in C again after a long pause, and it seems like an awfully risky language to program in. However, it may be faster than C++, on average, and Prechelt's results agree with this impression.
Re:Software Schedules (Score:3, Funny)
Re:Software Schedules (Score:2, Funny)
Everything takes longer than you expect, even when you take into account Hofstadter's Law.
Re:Software Schedules (Score:4, Insightful)
The thing is, you must get entirely through the design stages first. The design should include every screen, every possible error message, sub-screen, or pop-up, as well as an outline of how the program flow will go. This takes a lot of time, but not quite as much as it sounds.
Once you have done the complete design, you can accurately make schedules. The problem is, most programmers put all error handling and messaging off as something that doesn't need to be designed. That's where the extra time comes in. If you know _exactly_ how the program flow is supposed to work, estimating time is easy. However, if you haven't finished the design stage, YOU DON'T KNOW WHAT YOU'RE PROGRAMMING, so, obviously, you can't estimate the time. So, with a _complete_ design, including all possible error conditions and actions to be taken, scheduling is not that hard.
Re:Software Schedules (Score:3, Insightful)
Others have mentioned "creeping featurism".
There is also the "event horizon": when faced with a project of effectively unbounded size, people will tend toward an estimate based on their idea of how long it takes to solve an infinite problem. For a salesman, this is a couple of days. For a typical manager, a couple of weeks. For an engineer, a couple of months.
For estimates to be meaningful, the work has to be divided into units which you can guarantee will never exceed your event horizon.
I have managed many successful estimates on large projects (over one year, more than 5 people), based on the rule that it takes an average of two weeks to implement, document, and test any feature of the project you can identify before the project starts.
By "feature" I mean an explicit bit of behaviour by the code, e.g. "ack an inbound packet" or "echo the character on the serial line". I know any number of people who can code this in 3 minutes in Perl or whatever. That is not the same as developing supportable code. All loops have to be unwound, all nesting flattened. Every level of the hierarchy has to be accounted for serially.
Let me introduce Dr Spin's 2:1 Law: supportable code needs 2 kg of paperwork per byte of executable code. That includes minutes of meetings and sketches on envelopes. (Most of it is binned, but it still has to be created.)
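The two-weeks-per-feature rule described above is trivially mechanical; a minimal sketch follows. The feature names and the helper are invented for illustration, and the two-week constant is just the poster's stated empirical average:

```python
# Sketch of the "two weeks per identified feature" estimation rule.
# WEEKS_PER_FEATURE is the poster's empirical average, not a universal law.

WEEKS_PER_FEATURE = 2

def estimate_weeks(features):
    """Total schedule in weeks for a flat list of identified features."""
    return WEEKS_PER_FEATURE * len(features)

features = [
    "ack an inbound packet",
    "echo the character on the serial line",
    "log dropped frames",          # hypothetical extra feature
]
print(estimate_weeks(features))    # 3 features -> 6 weeks
```

The hard part, as the post makes clear, is not the arithmetic but enumerating the features before the project starts.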
In all seriousness, this is the wrong place to ask (Score:2, Flamebait)
Right place to Ask (Score:2, Interesting)
Anyway, my personal theory, based on blind idealism, is that it is extremely difficult to get an estimate for completion right; short-term goals are fairly easy to predict, because you have most of the information you require to make those predictions, but longer-term estimates are much more of a wild guess. I personally think it's a consequence of chaos theory - a butterfly flutters its wings in Brazil and your software project instantly takes another two years! More seriously, small errors in estimating components of a large project can induce large errors in estimating the time and resources needed to complete the whole project.
Linux is right with its "release when ready" motto. Since it is impossible to tell when it will be ready across such a wide range of groups and interests, you have to pick your release moments when they happen, not try to force them to happen.
Re:Right place to Ask (Score:3, Insightful)
Likewise, I've been developing (C++) for a living for about 12 years now and I've come to some conclusions:
There are estimating techniques/metrics which will work. They depend on a few rounds of "calibration" and on consistent application. "Task Points" was a good one - basically, break your use cases down and down until you have a series of one-line statements about the system. Multiply these by your magic number and that's the estimate. This, like all estimating techniques, is built on sand because:
It depends upon a development team sticking around long enough to do a few projects to calibrate your method.
It depends upon the exact functions of the system being known at the time you do the estimate. This is the killer.
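Mechanically, the "Task Points" method above is simple; the caveats are what make it hard. A sketch, where the magic number and the task statements are invented placeholders (a real magic number comes only from calibrating against your own past projects):

```python
# Hedged sketch of the "Task Points" estimate: count one-line task
# statements derived from use cases, multiply by a calibrated constant.
# HOURS_PER_TASK_POINT is a made-up illustration, not a recommended value.

HOURS_PER_TASK_POINT = 6.0

def task_point_estimate(task_statements):
    """Estimated hours for a list of one-line task statements."""
    return len(task_statements) * HOURS_PER_TASK_POINT

tasks = [
    "user enters login credentials",      # hypothetical tasks
    "system validates password",
    "system displays account summary",
]
print(task_point_estimate(tasks))         # 3 points * 6.0 h = 18.0 hours
```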
I have never worked on a project where the exact functioning was known at the time coding started. I have, however, observed that the more analysis/design you do before estimating, the more accurate the estimate is. The problem is that people always want the answer (the estimate) before they've given you the problem (the spec).
FWIW, on small projects (which are generally better defined), I run through the spec, do a rough-and-ready count of the number of classes, multiply by a factor (decided by the complexity of each class and who I think is going to code it), add a QA-and-debugging allowance, and come up with figures which aren't too wide of the mark.
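That small-project rule of thumb can be sketched as follows. All the numbers here are illustrative assumptions, not values from the post - in particular the 30% QA allowance and the per-class day counts:

```python
# Sketch of the class-count estimate: weight each class by complexity
# (and by who will code it), sum, then add a QA/debugging allowance.

QA_ALLOWANCE = 0.3  # 30% on top for testing and debugging (assumed)

def estimate_days(class_days):
    """class_days: estimated coding days per class, already weighted
    by complexity and by the assigned coder's speed."""
    coding = sum(class_days)
    return coding * (1 + QA_ALLOWANCE)

# e.g. four simple classes at 2 days each, one hairy one at 5 days
print(estimate_days([2, 2, 2, 2, 5]))  # -> roughly 16.9 days
```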
Oh yeah, and the "who's coding it" is important. Lots of studies show that the difference between "good" and "bad" coders can be a factor of ten. I've been slammed by PMs after estimating how long something would take me, only for the PM to put some "cross-trained" ex-VB dork on it.
To summarise: it is possible if you know who is coding what. Recommendations: 1) read Brooks, 2) keep it small, 3) ignore any of the "latest methodologies" that Project Managers try to sell you.
Slashdot readers are students? (Score:3, Interesting)
i've always thought most
-c
Re:Slashdot readers are students? (Score:3, Funny)
Re:In all seriousness, this is the wrong place to (Score:5, Informative)
Fixed specs are much easier to engineer than those that continually change. You wouldn't easily engineer a bridge if the river banks kept moving.
I think experienced project managers know how to control the spec rather than the project. (I could be wrong - It's just what I've seen).
Re:In all seriousness, this is the wrong place to (Score:4, Interesting)
Re:In all seriousness, this is the wrong place to (Score:3, Informative)
There also seems to be a professionalism problem in software development - programmers often deviate from the project spec to add things that they want to add, just because it's fun for them, with no regard to the impact on the deadline or whether the feature is required or even useful for the project. Project deadlines for bridges would also often slip if some of the engineers kept deciding halfway through that it "would be cool" if the bridge pillars "looked like giant penguins" or something. "Real" engineers have the professionalism to realise that they need to stick to the spec. With software it's not quite so clear that you absolutely have to, so (unprofessional) software developers spend too much time near the beginning of the project adding fun, cool, useless things instead of concentrating on what needs to be done. Then, for the last two weeks before the deadline, SOMEBODY ELSE (usually me) ends up picking up the slack and working 16-hour shifts to get the program ready for delivery.
I keep having fights with one of the developers here, who is a good programmer, but he has *no* concept of deadlines, time, or priorities. Even the *management* have started multiplying his development-time estimates by a factor of three (it's usually the other way round!). He's always like "I'd like to add this", or "it would be really cool if we had this feature", or "but we're going to need this eventually anyway" (for future projects that don't exist yet). And it's always "it'll take less than a day", or "it'll only take a day or two". And it ALWAYS takes several times longer than "a day or two". And these things add up; he just doesn't see it - a few days here and there soon add up to a month or two. I can't get it into his head that even if it "only takes a day", as he insists, that's one day that we don't have to spare; we're already running late as it is. It's simply not possible to add features without pushing your deadline further back, and he just doesn't get that. It's unprofessional, and it's frustrating.
My biggest problem as project manager just seems to be getting people to work on what they're supposed to be doing. It doesn't help either that my manager keeps finding other things for the programmers to do. Some of the developers are professional, and will just focus on doing their jobs without requiring nanny assistance, but some of them you seem to need to check up on several times a day to make sure they're not doing the things they *want* to be doing. I shouldn't have to do that.
Actually.... (Score:3, Funny)
Slashdot Reader != Slashdot Poster (Score:3, Interesting)
That aside, in my experience in software development (only 3 years), ballparking (1-3 days, 1-3 weeks, 1-3 months) is usually possible, but estimates tend to become wildly inaccurate beyond a few months. Regardless of what method we use to determine timelines, some things always seem to slip, while others take a fraction of the expected time.
Re:In all seriousness, this is the wrong place to (Score:2, Insightful)
If you need that stuff, design it in from the start. Too many programmers worry about general design to make future expansion easier, while leaving out consideration for real, hard requirements that won't be implemented until later in the project.
And to avoid the problem of really bad bugs that are responsible for the "double it and add 5" estimation, take a little extra time to write exhaustive tests (as far as possible) for each module, indeed each function, to make sure it doesn't do something wrong when given values outside the "happy path" input range.
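A small sketch of that kind of per-function, out-of-range testing: probe a function with values outside its happy path and check that it fails loudly instead of silently doing the wrong thing. The function under test here is a made-up example, not from the discussion:

```python
# Hypothetical function under test: accepts 0..100, rejects everything
# else rather than clamping or passing bad values through.

def parse_percentage(value):
    if not isinstance(value, (int, float)) or not (0 <= value <= 100):
        raise ValueError(f"percentage out of range: {value!r}")
    return value / 100.0

# Happy path behaves as documented.
assert parse_percentage(50) == 0.5

# Out-of-range and wrong-type inputs must raise, not slip through.
for bad in (-1, 101, float("nan"), "50"):
    try:
        parse_percentage(bad)
    except ValueError:
        pass  # expected: the function rejected the bad input
    else:
        raise AssertionError(f"accepted bad input: {bad!r}")
```

Note that `float("nan")` fails the range check because comparisons with NaN are false, so it is rejected like any other bad value.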
Of course they can be estimated. (Score:5, Insightful)
Software development is not a science in the normal sense. Designing large software systems is an art. It cannot be pigeonholed. Stroustrup has a lot to say about this when he describes the 'interchangeable morons' concept in the 2nd edition of his C++ book.
Anyway, read Death March by Ed Yourdon, The Mythical Man-Month by Fred Brooks, and AntiPatterns; any time someone asks you for an estimate, say 'two weeks' and then bullshit from there on.
That is how it works in the real world. The numbers are essentially meaningless, but the bean counters and suits have to justify their existence somehow :-)
Can you imagine asking Linus when 2.5 will be ready ?
Re:Of course they can be estimated. (Score:4, Insightful)
That's exactly the sort of attitude that has caused the spectacular failures of software projects to be accepted as the norm. Software engineering is *not* "hacking" or "coding" or "programming"; it's *engineering*, like building a bridge or a skyscraper. Yes, those projects sometimes go over time and budget too, but they are the exception rather than the rule.
That is how it works in the real world. The numbers are essentially meaningless, but the bean counters and suits have to justify their existence somehow
The problem is endemic in the industry. The other engineering professions require rigorous accreditation before they let practitioners loose on the world, like the PE (in the US) or the Charter (in the UK). But the software industry hires anyone and lets them get on with whatever they do, with no real management, oversight, or planning.
In a well analyzed and properly planned project, the actual coding stage is little more than data entry.
Re:Of course they can be estimated. (Score:5, Insightful)
If you're building your 57th e-commerce web site, which works roughly like the 56 you built before, you can estimate very, very well, and you can reduce coding to nearly data entry.
If you're solving a problem of unknown scope, which your team has not solved before, for which the solution is not clear, and for which analysis has revealed some but not all of the details, then you are not going to be very accurate.
Re:Of course they can be estimated. (Score:3, Interesting)
When looked at in the context of practical experience, this is quite false. We have been building buildings for at least several thousand years, with some tremendous successes and some spectacular failures. I live in Toronto, where we were lucky (I think) enough to have the first major-league baseball stadium with a retractable roof. IIRC, the original cost estimates were in the vicinity of $100 million (CDN). When the stadium opened (pretty close to on time), the cost was actually around $480 million (CDN).
I guess this somewhat proves you can estimate either cost or time accurately, but not always both. My experience in the IT industry has shown that most problems can be overcome with enough resources. Unfortunately, resources are not limitless, and therefore concessions must be made. This generally means the completion date slips, or functionality is reduced, or a combination of both.
Re:Of course they can be estimated. (Score:3, Insightful)
It all comes down to experience with similar things. Like any other project, if a software project is very like something you've done hundreds of times before, you'll know pretty well how long it will take. If it's unlike anything you've done before, there isn't even much point in guessing.
Thing is, in the real world development happens once and then the "project" is duplication (i.e., for a "So-and-So Homes" place: design the house once, build it hundreds of times), but with software, duplication is instant - just copy - so the project is the original design (i.e., design the software once, burn 10,000 CDs).
The fact that many companies design their own software, even when they're not software design companies, is the problem. If you were a real-estate firm you wouldn't build your own cars or photocopiers, so why do you design your own software? Moreover, why are you surprised when it takes longer than you estimated?
Re:Of course they can be estimated. (Score:2, Insightful)
If the software industry were saddled with the same level of process that exists in other engineering professions, we'd still be using character-based software, the web and the internet as we know it today wouldn't exist and most business would still be conducted on paper.
Re:Of course they can be estimated. (Score:5, Funny)
Re:Of course they can be estimated. (Score:5, Insightful)
"That's exactly the sort of attitude that has caused the spectacular failures of software projects to be accepted as the norm. Software Engineering is *not* "hacking" or "coding" or "programming", it's *engineering*, like building a bridge or a skyscraper. Yes, those projects go over time and budget too sometimes, but they are the exception rather than the rule."
But that's simply not true. Writing any non-trivial software is not the same as straightforward engineering. For a start, there is the rate of progress: how many people have 30+ years of experience building 50-story buildings? How many people have 30+ years of experience dealing with terabyte-sized datasets?
When building software, previous code can be reused with a very small amount of effort; when building skyscrapers, reusing the previous design takes only marginally less effort than the original did.
Compare the difference between building a C compiler from the gcc source and the World Trade Center from its blueprints.
Essentially the estimate is
Time = [time to do the parts we know how to do (accurate)] + [guess for the parts we don't know how to do (inaccurate)]
With software, the first term tends toward zero, since for most things we know how to do we can reuse code, whereas with buildings it remains a large, accurate term.
The error here will be of the form
Error = [variance of the inaccurate terms] / [total]
For a skyscraper, whose construction is mostly a known method, this tends to a small number, since the inaccurate term is much smaller than the accurate term. But for software, with all the known methods of coding reused, it tends to 1 - i.e. a 100% error in the estimate - hence the conclusion that it's worthless to even bother estimating.
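The argument above reduces to a one-line ratio, which a quick numeric sketch makes concrete. The hour figures are invented for illustration:

```python
# Relative error of a total estimate, driven by the uncertain part.

def relative_error(known, unknown, unknown_spread):
    """known: hours of well-understood work (accurately estimated);
    unknown: guessed hours for the novel work;
    unknown_spread: plausible +/- variation on that guess."""
    return unknown_spread / (known + unknown)

# Skyscraper-like project: mostly known work, small uncertain fringe.
print(relative_error(known=900, unknown=100, unknown_spread=80))   # 0.08

# Software project: the known work is already reused as code, so the
# accurate term is near zero and the guess dominates.
print(relative_error(known=0, unknown=1000, unknown_spread=900))   # 0.9
```

With the accurate term gone, the relative error approaches the spread of the guess itself, which is the 100%-error conclusion drawn above.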
In my company we can accurately estimate how long projects will take, provided the projects are mostly identical to ones we have done before - and if this is the case, it generally costs the client more in programmer time spent in meetings discussing the cost of the job than it does to write it.
Re:Of course they can be estimated. (Score:2, Insightful)
We'd like it to be so, but it ain't.
The behavior of bridges and skyscrapers is determined by classical physics, which allows us to make precise predictions.
The behavior of computer programs is governed by complexity theory, which tells us that any reasonably complex program has non-predictable behavior. And the manageability of software development depends on human understanding and appreciation of code - there's an aesthetic factor.
Certainly things could be better...the fact that something has a large component of art doesn't mean that there aren't areas of mastery for a practitioner to study. But at its heart, the creation of complex software requires a creativity and intuition that cannot be set to a timetable.
(Yes, one can "engineer" art to some degree - popular music being an example, where teams of marketers follow formulas to construct the next boy band. But that does not result in a quality product that stands the test of time.)
But the problem still applies to the design phase.
Re:Of course they can be estimated. (Score:3, Interesting)
The software industry doesn't hire anyone. Software companies hire people, and a company that behaves like you described won't be around for long if software is their main source of revenue.
Also, management != good software engineering. Planning != good software engineering. These are all factors that go into a good software project but people shouldn't think that if they draw class diagrams before they start coding, they're suddenly software engineering.
On the other hand, you need to look at what's best for the project - it isn't always a large, formal approach to software, especially for small projects. Being too rigid can be as bad as being too loose with your design. I've seen projects design themselves into a corner before the first line of code is even written.
Software Development==Engineering? (Score:3, Interesting)
I agree with you up to a point. I am an engineer. I have worked in process engineering at AMEC, and now work in design engineering. I have not done much coding, but I think that software development probably relates most closely to design. In design you can estimate a schedule, but that schedule depends on everything going perfectly the first time, which we all know doesn't happen. Nor does it account for problems with parts we have to design around, which we then have to wait on, or for a change in the requirements of our part. (Sound familiar yet?)
This is all in the conceptual, design phase. It doesn't include the actual production of a physical part. That all happens later, after our 3D model has been packaged correctly. Once the physical part has been made, then there are the joys of testing and testing and testing...
What I'm trying to get at is that I've experienced several forms of engineering (yes, there are many), and I think that software development relates most closely to design. In design, there is no reasonable way to schedule out how long things will take. We just make an estimate based on what's happened in the past, and change things as we go along.
Just like building a skyscraper (Score:4, Insightful)
Re:Of course they can be estimated. (Score:5, Insightful)
After ~15 years in the industry, I've found that one thing that makes a huge difference is the experience of the team, and the familiarity between the actual engineers and the project management.
As you have experience solving a variety of classes of problems, you can predict with increasing accuracy the time it'll take you to solve later problems. And as your management sees you getting increasingly accurate in your estimates (based on past projects) they can create better and better schedules and estimates for the project as a whole, and have a better intuition for the gray areas of development, or the greener developers.
Projects that tend to go off into the weeds have included (in my experience) wholly green teams, wholly green management, or areas of development that are outside the areas of expertise of one or both.
Re:Of course they can be estimated. (Score:2, Insightful)
An experienced software project manager can usually be quite accurate in estimating the effort for a well-analyzed software project.
This, however, highlights a few problems in The Real World:
Am I too pessimistic? I don't believe so.
Re:Of course they can be estimated. (Score:5, Insightful)
However, the biggest thing to remember is that no matter what estimation method is used, a methodical approach to analyzing the problem will almost always yield a reasonable estimate.
The main reasons projects go over schedule and budget are:
Risk management is indeed the key. As the project manager or lead engineer, it is your job to predict what the potential risks might be and to attempt to mitigate them on a cost-effectiveness basis. You can still be bitten by bad luck, but you can minimize the chances that it will strike.
Sure they can... (Score:3, Interesting)
And the remaining 5% of the project takes another 95% of the time.
from a Consulting viewpoint.. (Score:3, Informative)
To accurately plan a software release you must have the project, and all its complexities and nuances, down COLD. Otherwise you are not giving an estimate; you are giving a guess based upon incomplete knowledge.
The question becomes, do or, can you, know the complete details of the project? In this, software development is NOT like manufacturing, but more like home construction.
Think about it.
Re:from a Consulting viewpoint.. (Score:5, Interesting)
The bulk of the work of programming consists of getting all the complexities and nuances down cold. Once you really and completely understand what is required, coding is trivial.
This leads to a thoroughly unrealistic method of estimating software costs:
1) Work for months on the specs.
2) Get the customer to sign on to those incredibly detailed specs, even though he doesn't understand them.
3) Go and code it, no spec changes allowed.
8-)
The article mainly talks about the mathematics of estimating complexity. This is a lot like the proof that you cannot determine when or whether a computer program will end -- it's true for pathological programs, but it has little relevance for the real world. You try to write the code so the conditions for the program to end are clear. If it gets into an endless loop, you probably got a conditional expression backwards and you'll recognize it immediately once you figure out which loop is running endlessly... Likewise, there may be well-defined specifications for which it is impossible to estimate the coding time, but the usual problem is poorly-defined specs, which obviously makes any estimate a guess.
Fixing the endpoint? (Score:3, Insightful)
So the easy response is that bad management in general, and bad project management in particular, is responsible for software project failures. While this is no doubt true, the next question has to be, why do software projects have such bad project management?
I don't have a good answer, but one thing that occurs to me is the lack of a fixed endpoint. When an oil refinery ships its first load of POL, it is complete. When an aircraft carrier launches its first plane, it is complete. But the amorphous and malleable nature of software means that it is hard to define an exact endpoint, and very hard to avoid changing the definition of the endpoint as the project proceeds. So things keep "creeping" along until disaster occurs.
sPh
Re:Fixing the endpoint? (Score:2)
Re:Fixing the endpoint? (Score:3, Insightful)
Note also the enormous difference between building the first 747 / skyscraper / nuclear submarine and the 15th or 1500th of each.
This is why software projects fail (Score:2, Insightful)
In software, however, most projects do not rely on known procedures. It is fairly easy to estimate the cost of creating 1000 different window layouts - a known procedure - but very difficult to estimate the cost of implementing those layouts.
If software projects spent as much energy estimating each new task as construction projects do, developing software would be extremely expensive. Just imagine that you had to write one while-loop according to one ISO standard and another while-loop according to a different ISO standard, because the two loops were in functions that were categorized differently by a third ISO standard. Instead we hire a bunch of programmers and let them program on their own. Sometimes we dress it up a little, as in Open Source, Extreme Programming, etc., but it's still a bunch of programmers hacking around.
The trick is to manage it anyway - and that's why managing software projects will always be risk management, and not very predictable.
Lars.
Re:Fixing the endpoint? (Score:2)
I don't have a good answer, but one thing that occurs to me is the lack of a fixed endpoint.
That's also a failure of management. All projects should have a requirements spec that describes exactly what the system is supposed to do.
I think the fundamental problem is that people don't want to spend money on all the "non-coding" documentation. Good documentation can take half the time of a project. It seems so much more "efficient" just to put a horde of programmers on the project and crank out code, but it ends up costing a lot more.
Components (Score:4, Interesting)
Yes, but. The important components of a skyscraper are steel beams. Put them up correctly, after calculating loads and stresses, and it doesn't matter what the twenty tons of stuff sitting on the 27th floor is. It doesn't matter if the beams come from different foundries, either, because the specs are clear enough (dimensions, strength, where the bolt holes are).
Now try putting together a typically complex business software solution, meshing a bunch of different, reasonably good, existing programs and components with some custom code and configuration. Even where reasonably good standards are spec'd in some areas of the project - if you're not solving new problems, it shouldn't be a software engineering project at all; it should just be system administration using the available solutions. The fact that it's real software engineering means you're running into unpredictable surprises where the components at hand don't fit without a great deal of extra labor.
A parallel can be found in work on the portions of the New York City infrastructure that are under the streets: we still have wooden water mains in some places from the mid-1800s, mixed with gas, electric, and steam pipes, sewers, and subways... most of which was not documented to current standards at installation or during subsequent changes, despite most of it being reasonably well done by the standards of its time (pretty amazing, those wooden water mains still working, right?).
So what happens when we finally go in to improve one of the services - say, lay new water mains? Other stuff is found that's in the way where you didn't expect it, or that needs fixing on examination when you didn't expect it. Meanwhile you've got the street ripped up, but you have to cap it again quickly or traffic stays snarled for too long. So a single block's 4-week project can stretch out over a year - dig up the street, fix one problem, discover more, recap while designing and provisioning the next stage, repeat - because it's all stuff that needs to be done once you get into it, and that can't be properly assessed until you get into it.
Well, software in the real world isn't as old as New York, but if anything it's more complex, and the layers of crufty stuff that have to be accommodated in current projects are as considerable, and often as poorly documented by current standards (which will always advance so as to obsolete whatever we do now). Building a skyscraper, by contrast, is just a sysadmin job. Put the beams and bolts in the normal places, and it stands.
The reality steps in (Score:2)
So is it all a loss? Nope, but you have to remember that it's not an exact science. It involves replanning, knowing your work force, letting the work force plan on their own, more replanning, experience, guesses, and whatever it takes. Honesty is also high up on the list, along with not trying to do huge amounts of work in one go. Heck, there is so much to this subject that it would take ages to describe it all. My suggestion: go out into the real world, work, and learn.
Projects != R&D (Score:2, Insightful)
Spam alert! (Score:2)
Be afraid of the unknown (Score:2)
So, never say "How hard can that be?" before having coded up a small working prototype.
Re:Be afraid of the unknown (Score:3, Insightful)
Figure the time to do the parts I understand.
Count the parts I don't understand. Allow a very long time for each of them.
Add it all up, then multiply by 3
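The three steps above can be written down directly. A minimal sketch, with the caveat that the 10-day padding per unknown and the final multiplier are just the post's "very long time" and "multiply by 3" made concrete - treat both numbers as illustrative, not prescriptive:

```python
def estimate_days(known_part_days, unknown_count, days_per_unknown=10, fudge=3):
    """Estimation heuristic from the post: time the parts you understand,
    pad heavily for each part you don't, then multiply the total.
    days_per_unknown and fudge are illustrative stand-ins for the post's
    "very long time" and "multiply by 3"."""
    understood = sum(known_part_days)            # days for parts you understand
    unknown = unknown_count * days_per_unknown   # heavy padding per unknown
    return (understood + unknown) * fudge

# e.g. three understood tasks plus two genuine unknowns:
total = estimate_days([2, 3, 5], unknown_count=2)   # (10 + 20) * 3 = 90 days
```

Making the fudge factors explicit function parameters at least lets you tune them against your own track record instead of rediscovering them per project.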
Incompleteness (Score:3, Funny)
Lewis also provides a link to this "introduction to incompleteness" (a fun subject in itself)
I started writing a paper about this topic once, but I never finished it.
-WetDog
Re:Incompleteness (Score:2, Funny)
> started writing a paper about this topic once, but I never finished it.
Me t
Estimates based on motivation (Score:4, Insightful)
My company develops turn-key systems. Sometimes we also develop custom solutions for our customers. Our customer base has increased steadily since the dotcom crash, when we switched from products to services. One of the reasons our customers like us is that we don't bill projects by the hour. We bill the project on a fixed-price, not-to-exceed basis.
The programmers who work with us on a contract basis don't bill us by the hour either. After we have the design and we distribute tasks and prior to submitting the final estimate, we ask contractors to place a fixed bid.
We've done six major projects like this since March, and in all cases we finished within budget and on-schedule, and the systems are currently in production. They are all mission-critical systems running in either robotics environments or high-availability networks.
Our economic motivation is then to do things well and quickly in order to increase our profits. That also enables us to move on to the next project faster than slaving over some customer in order to bill the maximum hours.
As far as development techniques go, we adopted XP earlier on and it's working for us.
Cheers!
Re:Estimates based on motivation (Score:3, Interesting)
A key to fixed-cost is that it takes practice. Try it on a small scale before you commit to it on a larger scale, to avoid large-scale failure...
Re:Estimates based on motivation (Score:2, Insightful)
Re:Estimates based on motivation (Score:3, Insightful)
How do you reconcile this with the tendency of XP projects to deliver something noticeably different from the customer's original conception of their need (but that in fact fits very well the need as learned over the course of the project)?
I'm seriously interested to hear from folks who have figured out how to marry an agile development process to fixed-cost contracts.
Re:Estimates based on motivation (Score:3, Interesting)
I work with the customer to divide the project up into phases / steps / iterations / releases / whatever. Group the most vital core pieces together and do them first, at a fixed cost. As requirements change, the changes either go into future fixed-cost releases, or they are done hourly if requested. Thus, the overall project is not fixed, but at each stage the customer knows what they are buying at what price, and does not have the worry of the "meter running".
There is some related explanation (not a sales pitch) about it on my web site:
http://kylecordes.com/story-182-shared-risk-prici
Matching XP and fixed requirements (Score:3, Interesting)
A couple of posters asked this question above: How do we reconcile XP short develop/test cycles with a fixed project plan + bid?
The answer is simple: During the planning and estimate parts we focus on defining the problem domain and a set of solutions for it. We don't focus on too many implementation details.
XP techniques are applied to solving each specific problem found in the requirements. For example, the problem may be something like "how do we decode this math-intensive file the fastest?" There are usually two or more answers to such a problem. First we define an interface, then we try two different solutions in parallel. The one that meets the criteria best wins, and we move on to the next problem.
The thirst for features some people suffer from is often the result of poor design choices at the beginning of the project. If additional features are required, and the analysis was done correctly, you'll find that the new features simply extend solutions you were already working on (or had already solved). Thus, XP comes to the rescue again by letting you add the new feature without throwing the schedule out the window. Think about it: if a new feature forces someone to rewrite a whole system, then something must have been overlooked during the requirements analysis phase.
The most important part of this process is not to start coding and testing until the business requirements are clearly defined. We've been guilty in the past of coding before understanding the problem completely; we try to avoid that trap now. That is probably the single most relevant cause of software project delays.
Cheers!
Much more like manufacturing than physics. Mostly (Score:3, Informative)
The reasons that estimates are always wrong are *1* unclear requirements, *2* changing requirements, *3* complicated user interfaces, and *4* weak focus on testing.
I find *1* to be the biggest difficulty. The principals of a software project like to say things like "Automate timeclock operations," but as a developer, you need *A LOT* of information to do that. When you ask questions like "I understand that you do not want to allow any changes to a pay period after the checks have been cut, but then what do we do when traveling workers report their hours late?", management thinks you are being a pain in the ass - but if you don't get it right, your project will fail.
I agree with taking a realistic estimate and doubling both the development and the testing estimates.
There are four parameters (Score:3, Insightful)
- Quality
- Quantity
- Deadline
- Costs
In a competitive environment with humans involved, up to three can be specified. Not four. Good examples are:
- Many guidelines for managing software projects tell you to reduce quantity when you get near deadline.
- Some customers have a specified budget but really don't know how much software they can get for that money. They prefer to have costs fixed than to have quantity or deadline fixed.
- Sometimes the deadline is so important that costs may increase tenfold in order to reach it, and quality and quantity may be reduced a lot in order to finish the project.
It is extremely important to realize the meaning of all four parameters before you can talk about estimating project schedules.
Lars.
Analysis, Design, and Project Management (Score:2)
That's only half the battle. Once a project is underway, keeping scope in check is critical so you need good project management. If you build a great estimate through analysis and design and then throw it out the window when you start writing code, you'll never have a good estimate.
Where do major providers like Microsoft and even Mozilla go wrong? Simple, they either jump in and start coding before they've completely settled on what they're building or they change their mind in development about what they're building. Either way, it screws up delivery dates.
Re:Analysis, Design, and Project Management (Score:2)
> Once a project is underway, keeping scope in check is critical so you need good project management.
Yes, requirements creep seems to be the main problem. Since software is "intangible", management seems to think that they can change the requirements when it's half done, with no adverse consequences. The same management wouldn't dream of doing the same thing with their new office building. (E.g, doubling the space requirements after the foundation and half the floors have already been built.)
In general, it's a management problem from top to bottom. Start with vague requirements, disallow sufficient time and money even for a minimal implementation of those vague requirements, put underqualified and undisciplined staff on the project, change the vague requirements while the project is in progress, and go on death marches when the inevitable happens.
Software projects aren't going to behave well until management imposes an engineering discipline on them. And the biggest issues for management are (a) deciding what they are going to do before they start, and (b) deciding before they start whether the project is worth the time and money it's going to take, and cancelling it up front if they don't like those times/costs, rather than just trimming the time/cost projections down to something that their organization finds politically acceptable and then going ahead with the project under falsified time/cost projections.
The short answer: no (Score:3, Insightful)
The biggest problem I've seen is requirements creep. Most often, you don't have a firm set of requirements to start with. Management and programmers both have a tendency to view requirements documents and other formal software engineering practices as superfluous. The problem is that without a firm set of fixed requirements, you are always trying to hit a moving target.
Another problem is attitude, mostly on the part of management, but programmers are guilty too. One faulty attitude is that we are conditioned to expect immediate results. There's also a prevailing attitude that there is never enough time to do it right, but there's always enough time to do it over. This leads to undocumented, unmaintainable masses of code that get thrown away after a while.
Even worse, you wind up with garbage code that SHOULD be thrown away and rewritten from scratch, but winds up getting patched and modified for years. I can't tell you how many times I've had a manager say "there isn't time to rewrite it, just patch it". That would be OK if you were only going to patch it once - but you wind up patching the same program a half dozen times, and it winds up taking twice as long to do all the patches as it would have taken to rewrite it from scratch.
SW Schedules (Score:2)
1) Lack of up front planning - too many projects fail to do proper initial planning - specifically defining the problem to be solved, producing detailed product requirements, and a detailed project plan (and then sticking to it).
2) Late (or incomplete) requirements - if you went to an architect halfway through home construction and wanted to change the design of the house, you wouldn't be surprised if it fell behind schedule and went over cost.
3) Poor risk management - failure to track dependencies, too many high risk dependencies ("we'll build it on the next OS release, with the new compiler, and that SW package that our start-up partner will finish next month"), failure to make and execute contingency plans.
4) Failure to heed Brooks's Law ("Adding manpower to a late software project makes it later.")
5) Failure to have read Deming ("You cannot test quality into a product").
6) General design failures - not assuring that product is scalable, reliable, testable, etc.
7) Failure to place a senior developer on the team that knows about the previous issues.
Doesn't work for anything non-trivial (Score:2)
The best defense I've heard is that "Yes, everyone's estimate will be way off, but they are independent estimates of different pieces of code and when aggregated the standard deviation drops to a reasonable value". IOW, the estimate I pulled out of my butt will be way optimistic, but your estimate will be pessimistic, and it will all cancel out.
There are a few problems with this rather nice and neat statistical trick:
1) As Michael Milken found out, the observations are not independent -- there are too many interactions between the components being estimated. In Milken's case, he argued that a diversified portfolio of junk bonds would have high-yield, low-risk characteristics. Unfortunately, the performance of shaky companies in a market downturn is rather strongly correlated.
2) You need something objective to estimate. In our case, we measured the number of easy, medium, and hard member functions of classes that had to be implemented. See the problem? You need to cast your interfaces in stone, external and internal, right at the start. On simple projects this is easy, but not on hard ones, as much as we all agree it is desirable. There is such a thing as learning from one's mistakes, and it will happen with anything novel.
3) This presumes that the design is sound. To ensure this we reviewed and analyzed and studied, and "damnit, you indented 3 spaces instead of 4..." -- well, you get the idea. The closest scrutiny will find the obvious bugs, but not the really tricky ones.
4) This technique does not encourage the one thing that saves you in the face of change -- adaptive and modular design. You make things modular so change affects as little as possible, and you make things adaptive so change is as painless as possible. IOW, you plan for making changes because of mistakes. Naturally, this violates (1) above, so it is not permitted. The mantra is "Design it right the first time!" We know that we can get 95% or 99% or maybe even 99.5% of it right, but never 100%.
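The correlation point in 1) is easy to demonstrate with a quick Monte Carlo (the task counts and correlation strength below are my own toy numbers, not from the post): when per-task estimation errors are independent, the aggregate error's spread grows like sqrt(N) and relative accuracy improves; when they share a common cause, it grows like N and nothing cancels.

```python
import random
import statistics

random.seed(0)
N_TASKS, TRIALS = 25, 2000

def total_error_stdev(rho):
    """Std. dev. of the summed per-task estimate errors over many trials.
    rho blends in a shared (correlated) shock: rho=0 means fully
    independent errors, rho near 1 means one common cause drives them all."""
    totals = []
    for _ in range(TRIALS):
        shared = random.gauss(0, 1)   # e.g. "the whole spec changed"
        totals.append(sum(rho * shared + (1 - rho) * random.gauss(0, 1)
                          for _ in range(N_TASKS)))
    return statistics.stdev(totals)

independent = total_error_stdev(0.0)   # ~sqrt(25) = 5: errors mostly cancel
correlated = total_error_stdev(0.9)    # ~22.5: the shared shock dominates
```

This is the Milken point in miniature: diversification only buys you safety if the pieces really are independent, and components of one software project rarely are.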
In the end, sure, we "finished" on time, but, er what was finished didn't work very well, and had to be rescued by the few who knew what was going on. To be fair, the design efforts and documentation helped provide a somewhat modular system, but the really important parts weren't documented -- we had reams of paper describing the "trees", but not nearly enough describing the "forest" as it were.
So, I'm skeptical.
I've heard that these techniques encourage "discipline" and help mediocre programmers contribute acceptable code. Well, where I work now, we have a policy of not hiring "mediocre" programmers. I can dump a suspicious log on someone and be assured that they WILL fix the problem -- I don't have to argue that there IS a problem ("but, the process, the process says this WON'T happen... your log must be a lie...")
Of Course They Can Be Estimated... (Score:2, Insightful)
Home of
It's all a bunch of bull.. (Score:2, Insightful)
He very carefully laid out the algorithm - I don't have my textbook handy, but it involved elementary mathematical operations on estimated man-hours, estimated lines of code, estimated overhead, etc., and then at the end -- and I am not making this up -- you multiply the result by a "magic number".
Where did you get the magic number, oh sage of the ivory tower? Well, we just made it up -- it seems to work.
It hit me then that the whole discipline of estimating cost to completion is bullshit. You might as well estimate with a crystal ball or divine the future with chicken bones. Since I've been working, the best advice I've gotten so far has been "take how long you think it'll take and double it".
second that opinion (Score:2)
When constants are constant and when they aren't (Score:3, Interesting)
There is nothing wrong in principle with measuring what has happened in the past, and using that to predict what will happen in the future, before you discover why it works like that.
For instance, you might measure that throughout the year, the average time between sunrises is 24 hours. You can use that number even though the only explanation you might have for it is "it seems to work."
Of course, when you apply this to software development time estimation, it falls down for a number of reasons. It's not constant across technologies. It's not constant across types of project. It doesn't take into account the variation in technological risk (i.e., if you have done something like this before, you will spend less time finding ways to do stuff). It doesn't scale linearly with the size of the project. It varies across individuals. Etc.
Our formulas (Score:2)
Of course that doesn't stop the managers from asking every day, starting on the first day, whether or not the 3 month project you're working on is complete.
I've done it (Score:2)
Try common sense (Score:2)
The problem is that the term "software schedule" is too wide a field to say anything meaningful about it. If you want to estimate how long it will take to put together a customized ecommerce web site, and the organisation has already built 5 of them, there is no problem. If you want to solve some problem that hasn't been solved before, it could take a week or a hundred years. Recognising the difference between these two cases is less simple than one might expect. And, if there's genuinely no novelty in the problem one should not be writing software at all. Someone should just write an application to solve that general class of problems.
People get unstuck when they break the problem down into small chunks and then guess a number for each chunk. Often the initial decomposition misses crucial interactions and needs to be refactored later on. This is a bit like answering the question of how long a piece of string is by saying: well, the string has a beginning, a middle, and an end; I estimate each piece is 5 inches long, so the string is probably about 15 inches long. Unless the breakdown has brought genuine insight into the unknown aspects of the project, the estimate it provides is worse than useless. However, since one can then stick things into MS Project, print out pretty Gantt charts, etc., this estimate is given more credence than a number generated by just reading the spec and making an educated guess.
Part of the problem is that it's described as software engineering. Then we get all sorts of morons saying: civil engineers can tell us how long it will take to build a bridge, so the problem must be that software engineers are unprofessional or that the subject is in its infancy - things will improve. No, they won't, for the same reason that mathematicians couldn't tell you how long it would take to solve Fermat's Last Theorem.
More Design Work = More accurate Estimates (Score:2)
Q: It has to do *this*, how long?
A: X days (Not very accurate)
Scenario 2:
Q: Find out what it has to do, spend TIME specifying it, then tell me how long.
A: X days (Can be very accurate)
The problem I (a 10+ year professional developer) keep running into is that you figure out what it is to do, specify it very well, and then as you start developing it and delivering pieces for review, the specification is changed and you are plopped solidly back into Scenario 1. Worse is when you think you're done, begin QA, and get SLAPPED back into Scenario 1... or even Scenario -1, where you are trying to hack your guess at how it works into how it really should work.
M@
2 weeks (Score:3, Insightful)
Gung-ho programmers are optimists. Couple optimism with the innumerable factors involved in programming a non-trivial application and you will get what we have today.
By the way, I am a programmer and I have little to no confidence in my time-estimation abilities, or anyone else's. It has taken me 14 years to come to grips with that.
Software like a factory (Score:3, Interesting)
1. Discipline. Your average programmer will have read about various programming methodologies, but skipped past the parts that would make their code an easy-to-reuse template in favor of faster development. As with any gamble, you should know at exactly what point you want to quit, have a hard line on version 1.0's feature set, all that jazz.
2. A big code base. Because of step 1, or maybe just a lack of previous projects, one's code base is typically limited to what you can find in a computer science textbook. Having a good database of classes and patterns that have turned out to be useful, and having easy access to this database for the information you need is the difference between a library and a code base.
3. Incremental development. Throwing together a large software project all at once and then testing the whole thing is very tempting, and happens more often than most people like to admit. What should happen is a series of incremental integrations into the final product, with unit tests of each part. Otherwise your large project can become a giant, complex nightmare. Complex software shouldn't be made quite so complicated.
While making a "software assembly line" takes slightly more work and trouble than your average car assembly line, it has incredible cost savings in the long run.
Estimation is very possible. (Score:5, Insightful)
If we built homes like software we would all be living in the street, penniless...
The major issues I have seen revolve around a lack of scope and cost control. In many cases it is because there is little penalty for being late or over budget. In cases where penalties exist, it is often beneficial to over-estimate the effort or cost required. Then, once the money is approved, using it becomes easy.
Going back to the analogy consider the following:
Scope
If you were building a house, each piece would have a specified cost, known in advance to a very large degree. In addition, altering the scope itself often incurs a penalty, because the work is not done by the owner. You plan a three-bedroom, 1.5-bath home. Midway through planning you decide to make it a two-bath home instead. The architect will charge a "re-scoping" fee and the builder will add the material fee. Now do the same after construction has begun. The architect gets their fee, and the builder adds the material and resource costs, plus a "revision" fee for changing your mind after construction begins.
During a software project, it is common for individuals to approach the developers and ask to expand the scope. This would be analogous to approaching one of the work crew and asking them to just add the extra half a bath. The difference is the work crew would get fired, and the developer gets bonus points for adding the feature, either directly or indirectly.
If the developer chooses not to do it, or pushes them to the project manager, the client may label them uncooperative or difficult to work with. The project manager not wanting to be labeled either may coerce, cajole, or beg the developer to accomplish it, without a scope revision. Failure to do so by the developer results in real financial impact at some point, and offers little incentive to hold the line.
Cost
I call this the "Porsche syndrome".
I go into the Porsche dealership and see a new 911 Carrera Coupe. Smiling, the dealer offers to sell it at a deep discount: with options and accessories, $84,000 (U.S.). Whewwww baby!!! I cannot afford that. "Look," I tell him, "my wife will never approve that; you need to get it down to $28,500 tops." Would any of us expect to have the price cut down? By half or more?
Okay, how about: "Look, what will it take to get it under $30,000? Seriously now, what do I have to give up?" As the dealer escorts me to the door, he explains that the only way I will get this car under $30k is with a mask and a gun, or from a scrap metal dealer.
Yet daily we go to developers and tell them to do the same. We ask for an estimate and then come back with "This is too much; it needs to be smaller or it won't get approved!" --Insert blank stare here-- The idea that if something cannot be cost-justified it should not be done is often lost in the "request" itself.
To nearly guarantee a project is on budget and on time requires things many companies are unwilling to provide: strict scope control procedures, with oversight by the person responsible for the money. That means each change, regardless of how trivial, must be approved by someone above the project management team, with business justification. It also means that requests for scope changes cannot be made to developers directly, by anyone.
I was very happy with the people who built my home. When speaking to many of my friends and coworkers who built their homes, they describe it as a process akin to having their flesh removed. Everything required such effort and detail that many would not do it again.
Most of them were looking for the relationship to be like one at the office, where we all want to get along and help each other out. But building a house is a commercial arrangement, and when we put a commercial context around it, we see that many offices lack structure.
Internal organizations can be setup like commercial ones, but it is usually unwelcome as the perception is everyone should be working for the greater good of the company and this has the appearance of bureaucracy. Even if inaccurate, everyone "wanting to get along" prevents it from being implemented.
Re:Estimation is very possible. (Score:3)
The same can be done in software. The solution can almost every time be a used or older product. The only thing you gain with a shiny new product is maybe performance, and that is a big maybe. And trouble-free operation... well, in software that is not the case.
The simple fact is you can have what you want if you widen your scope.
I drive a 911 Turbo and I have a Testarossa in my garage. I also make only $40K a year and live in a 980 sq. ft. house in the nicest neighborhood (not rich-snob land) in my city.
I also have more spending cash than my $180K-a-year friends; they will never own a Testarossa (unless I sell them mine, HA!) and will probably never drive a 911 Turbo as their daily driver (except in winter).
Why do I succeed and they fail? I have what I want; I got my toys. I also have only a $700-a-month mortgage... they have $2000 a month. My cars are paid for and older (the Porsche is a 1989; the Testarossa is a 1986 and needs a transmission and interior... I need to install the rebuilt tranny and have the leather seats redone). My boat is from the early '90s; they pay $600.00 a month for their boat and $900.00 a month on their Lexus, and I have to take them out once a month because they can't afford to have fun.
Only a very rich man or a moron demands to have the NEW items when a used item or older item is a perfect substitute.
Work asked me to find out how much it would cost to replace our SQL 6.5 servers with SQL 2000: 50 user licenses for 10 servers...
I asked why, their response was because it's new.
They wanted to spend $20,000.00 for no reason whatsoever... and in fact it would have caused downtime, as the software that relies on the SQL server is not yet compatible with SQL 2000.
That is the Porsche syndrome: spending money foolishly and for no reason whatsoever. Unfortunately, I.T. and I.S. are rife with stupid spending.
Upgrading when needed is important. I fully support spending money when it increases reliability and productivity, thereby positively affecting cash flow. Spending it only for bragging rights, or because there's a new one available?
That's the failure of many people and companies today.
Software can be scheduled... (Score:4, Interesting)
Painless Software Schedules [joelonsoftware.com] is a great one and you will get sucked in just following the links from this one essay.
Reductio (Score:3, Interesting)
Next, get together a team of programmers. Set them to work on a program which proves {insert your favorite unsolved mathematical conjecture here}. It turns out you don't actually need the team at all: just run your software project estimator, and if it comes out with a finite amount of time to complete the program, you know that the conjecture is true.
In other words your software estimator can be used to solve the halting problem.
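That reduction can be sketched in a few lines (hypothetical names throughout - `perfect_estimator` is the thing being claimed, and the whole point is that no such total function can exist):

```python
def perfect_estimator(program_description):
    """Hypothetical: an objective estimator that returns a finite time
    bound for completing and running any program. Turing's halting
    result says no such always-terminating function exists."""
    raise NotImplementedError("no objective, total estimator exists")

def settle_conjecture(counterexample_search):
    """If the estimator gave a finite bound for a counterexample-search
    program, running the search that long would settle the conjecture --
    i.e. the estimator would be deciding the halting problem."""
    try:
        bound = perfect_estimator(counterexample_search)
        return f"run the search for {bound} steps and read off the answer"
    except NotImplementedError:
        return None   # undecidable in general, as expected

result = settle_conjecture("search for a counterexample to the conjecture")
```

`result` here is `None`, which is the joke made executable: the happy path of the function is unreachable.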
OK, this is a joke, but it points to something about the question. I once had a CS professor who required that we write requirements statements for all of our assignments. She forbade us to include halting times, because "you can't predict whether a program will halt or not." To which I wanted to reply: "About that 'hello world' program..."
The lesson is that there are some cases to which a rule like this applies and others to which it does not. There are some projects that can be estimated with simple tools, some that can be estimated with complex tools, and some that are not practical to estimate at all. Even fairly seat-of-the-pants estimates work pretty well on relatively simple problems, provided you break things down a bit and do an honest estimate of the costs of the individual deliverables and the individual functions you know you'll need to make them work. About the only methods that never work are pulling a number out of the air based on how much the project scares you, or wishful thinking (whether the source is your boss or you). Nobody can give good estimates when you spring the question on them with no time to prepare. My boss's favorite (and my least favorite) questions start with "How hard would it be..." and my favorite (and his least favorite) answers start with "It depends..."
Nonetheless, my experience with past projects of the kind that I do means I can do a pretty good job with relatively unscientific tools, provided the problem is like one I've solved before. However, if you are writing software for space flight or some other highly complex mission, I could estimate until I was blue in the face and it wouldn't be worth a damn. You want to hire somebody with experience in such projects, whose methods of estimation are well calibrated from similar past work.
I think the particularly difficult cases are ones involving software maintenance -- extending software to perform things that weren't originally factored into the design, or adapting the software to run when the systems it depends upon change in some unpredictable way. These are cases where surprises can throw the best-laid estimates well off.
Estimates should include debugging (Score:3, Interesting)
* A tester or test suite exhibiting the bug
* Someone recognizing that it is a bug
* Enough data being gathered to define the bug ("It hangs sometimes" or "I don't think the results are always correct" doesn't cut it).
* Enough eyeball hours to find the bug (this in itself makes the process equivalent to solving a crime. Do we ask the cops to schedule crime solving?)
* About two minutes (average) to devise and implement a fix
This has to be done for N bugs, where N is unknown. People who think you can estimate software development schedules with any accuracy are either dreaming or assuming that they just have to estimate how long it will take to get it coded, not how long it will take to get it working correctly.
-- MarkusQ
The problem with software development. (Score:3, Insightful)
The customer claimed it was because I could not seem to fully complete a component of the project. What they really meant was that I could not fully complete a component before they would request a change to it that, in some cases, required a complete rewrite. They didn't think it was a big deal to add a button here or there in the application; after all, it was only a button. Never mind the fact that each of those buttons required stored procedures to be written and existing stored procedures to be altered. They would get upset that I could not make their requested changes in a day when they wanted to completely alter the way the interface to the application worked.
The bottom line is that most people who don't know anything about software development don't think it is a big deal to add a feature here and there at the end of the development cycle. I try to equate software development to carpentry: sure, I can add another door in the center of those cabinets, but don't expect it not to affect the other doors and their spacing.
Not just experience counts, so does Humility (Score:3, Insightful)
Personally, I tend to do the best hourly breakdown I can and then double it before submission. This is normally not too far wrong (say, a week off on a 3-month project). The doubling allows for inaccuracies, meetings (which really do take time!), and spec changes. I may add more "fudge factor" depending on my feel for how well the spec is sorted out and the quality of management (i.e., weak management will allow spec changes every week; good management will filter well).
ANdy
PM Estimates (Score:3, Insightful)
ME: How about a spec ?
PM: You're kidding
ME: Roughly 6 weeks.
PM: Nah, too long, we'll never get that past the customers. Let's call it 4 weeks.
ME: Not again! Remember what happened last time you chopped my estimate?
PM: Don't worry, I won't hold you to it, this time!
PM: That work finished?
ME: No, two more weeks.
PM: You said 4 weeks; look, here it is in the plan.
ME: I said 6. You said 4 weeks, and that you wouldn't hold me to it.
PM: The only thing I can fault you on is your estimates, they aren't very good.
ME: You £$%&* git !!!
And practically every project manager does the same thing.
Why engineer failure into the plan ?
More mundane reasons for underestimations (Score:5, Insightful)
Underestimation as a Marketing Tactic
AKA "Vaporware". Even if marketing knew when a product would be shippable, a particularly cynical marketing department may announce an earlier date, thus freezing competitors' development.
Lack of Feedback (Moving Targets)
Software engineers are particularly bad at estimating because they have often never done what they estimated: they are given a large project, give a large estimate, start working, and then the project changes in a major way in the middle. The target has moved, and the estimate no longer applies. A major law of software development: you cannot change the spec or the development team without impacting the real ship date. If you don't re-assess the estimated ship date, you are simply fooling yourself, and you never learn whether you would have hit the original estimate.
One way to defend against this is to break the project down into bite-sized pieces and estimate those; a small piece gives you a chance to do precisely what you estimated. Once you have that, you can have somebody track your estimates and come back saying something like, "On average, you go one third over your estimates. Add a third to your estimates from now on, and we'll be accurate."
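The tracking idea at the end of this post amounts to a tiny feedback loop; a sketch, with invented history numbers:

```python
# Sketch of the feedback loop described above: track estimated vs. actual
# time per piece and derive a correction multiplier for future estimates.
# The history data below is invented.

def correction_factor(history):
    """Average ratio of actual to estimated time across past pieces."""
    return sum(actual / est for est, actual in history) / len(history)

history = [(10, 13), (8, 11), (20, 27)]   # (estimated, actual) hours per piece
print(f"multiply future estimates by {correction_factor(history):.2f}")
```

With the sample history above, the multiplier comes out near 1.34, i.e., "you go about a third over; add a third from now on."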
Management Estimates
Often, engineers don't do the estimate at all. The management or marketing people tell you what must be done and how long you have. Sometimes this is done explicitly; other times, management may have a number in mind and shame a software team into agreeing with it by laughing off any number that doesn't match theirs. Business people often negotiate the ship date with the geeks the way they negotiate with any other vendor. To a suit, vendor negotiations are how you determine the "margin", or how much the vendor is making (as when you buy a car, and you and the dealer converge on a number that determines the dealer's margin). This doesn't work in in-house software development because geeks hold back precious little "slack" or "margin" (they aren't paid profits, they're paid salaries); in a decent shop, geeks program at flank speed all the time and always give the project 100%.
See Ed Yourdon's Death March or Kent Beck's Extreme Programming books for more details, and for ways to avoid the above traps. Yourdon suggests that the head geek has to take a hard stand in scheduling to prevent business interests from setting both the project spec and the ship date. He especially tells you never to negotiate the schedule, and to help the suits understand why you never do. Whatever number you estimate doesn't affect the actual ship date, so playing with that number is simply fooling yourself.
Extreme Programming actually has a "planning game" (sort of a ritual dance) which places business interests and geeks on the same side of the table. Two big rules are "The geeks may not reject any part of the spec" and "The suits may not reject any part of the estimate". Once the suits set the spec, both teams break it down into pieces-parts, line them up in order of what gets done first and the geeks give their estimates. From there, the suits can choose the ship date (and can instantly see how much product will be ready by then), or can choose a certain amount of project completion (and can instantly see the ship date). The fun part about this method is that the suits can change their minds at any time by changing, adding, or removing pieces-parts, and can instantly see how that affects the ship date. The other fun part is that breaking up the project into pieces-parts allows developers to do a (small) project they estimated. This allows people to track estimated versus real time, and to give developers feedback that lets them make better estimates. Such a team will start off with bad estimates like everybody else, but they will be able to improve rapidly.
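The scope/date trade-off in the planning game can be sketched roughly like this; the piece names and estimates (in ideal days) are invented, and no real XP tooling is assumed:

```python
# A rough sketch of the planning game's two views: pick a ship date and see
# which pieces fit, or pick a scope and see the date. Pieces are listed in
# priority order; names and estimates are hypothetical.

pieces = [("login", 5), ("reports", 8), ("export", 3), ("admin", 6)]

def scope_by_date(pieces, days_available):
    """Which pieces (in order) fit before the chosen ship date?"""
    done, total = [], 0
    for name, est in pieces:
        if total + est > days_available:
            break
        done.append(name)
        total += est
    return done

def date_by_scope(pieces, wanted):
    """Total days of work for a chosen subset of pieces."""
    return sum(est for name, est in pieces if name in wanted)

print(scope_by_date(pieces, 14))                    # ['login', 'reports']
print(date_by_scope(pieces, {"login", "export"}))   # 8
```

The "fun part" the poster mentions falls out naturally: change the pieces list and both views update instantly.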
Something that screws up time... (Score:3, Insightful)
1. Programmer comes up with new system in spare time while learning a language. New system, if polished, would actually make a nice application to sell to current clients. Programmer is excited, and shows "product" to highers-ups.
2. Higher-ups are excited, can see it may take a bit more work, and look into what it would take to get it to market. They tell sales and marketing to go see the programmer to have him demo it to them.
3. Programmer is excited, shows it to sales and marketing. Sales and marketing love it.
4. Months pass. Unbeknownst to the programmer, sales and marketing have sold it to a client, as part of the contract, to be a finished package by the end of the year - OR ELSE.
5. More months pass - higher-ups finally tell the programmer, and others, that this new system is wanted - and oh, BTW, it is wanted in Java - not in the VB it was shown in.
6. Three months are left to complete the project. Original programmer knows little Java. Other Java coders know little Swing. Architecture of app is changed from a simple app to a three-tier client-server system. Only two other coders have sufficient Java experience to code on it. The lead of the project knows no Java, and only takes notes at meetings.
7. Twenty-one days until deadline (ie, it has to be in QA in 21 days) - everyone sweating bullets knowing it can't be done. Oh, and BTW, at every meeting it seems like a new section not planned for is realized...
It was an ad-hoc system, and it is progressing as an ad-hoc system - a system that should NEVER have been shown to marketing and sales. I am not the programmer who originated it, but suffice it to say it is a system that will be nice for our clients once it is completed. Fortunately, it sounds like things can be smoothed over if we miss the deadline...
So remember, all you budding coders out there - if you create something in your "learning" time - don't show it to anyone BUT other coders. If marketing and sales come around, have them sign an NDA promising not to sell it or something - you don't want to release a product to market before it is done - quit "selling" vaporware!!!
Paper may be correct, but is irrelevant (Score:3, Interesting)
Now, this paper makes a hell of a lot more sense to anyone who's read Hofstadter's Godel, Escher, Bach, but I suspect that many, even most, Slashdotters have read this one.
What makes the paper irrelevant is that we don't use formal systems to estimate software. We use our own head. We use hunches. We use intuition. These things are informal systems, capable of forms of reasoning that no formal system can achieve. That's what Godel proved.
The paper is saying that you can't take a spec, give it to an estimator program, and have the program write the estimate. You can give the spec to humans who write estimates for parts of it, feed that into an estimator program (like a spreadsheet), and you can get an estimate, but you simply cannot remove the human from the loop.
Several points to be raised -- is it all academic? (Score:4, Interesting)
But herein lies the rub -- exactly how many software systems are "completely new?"
Damn few!!
The average software project in an average industry will be primarily a repackaging of previously solved problems. The majority of integration tasks will be sufficiently similar to previous integration tasks as to be known quantities.
You will be left with a small number of "sub problems" which are unique and new. But now we have a situation where the caveats of the article are very important. Specifically, if we have decomposed the programming tasks to a sufficient degree, it should be the case that the estimation is tractable.
Also, it should be noted that the author assumes a good estimate is one obtained through formal methods and objectively defensible. However, in project management, a good estimate is one that is believable and acceptable to all stakeholders in the process. The method used to obtain it is not important.
Moreover, good project management includes significant up-front analysis. One common practice (at least at companies with good PM track records) is to run "Monte Carlo" simulations of the project work with large variances between scheduled and actual effort. A run of a few thousand simulations identifies the processes that matter most to the time and budget performance of the project.
These "key" work packages are often non-obvious without this type of simulation work. However, with a good work breakdown structure and a good simulator, it is possible to generate a reasonably accurate picture of project performance based on what is not known.
This means that in the "real world" of business, the article's claim is irrelevant!!
We don't NEED objectively defined and defensible estimates. Instead we need estimates that the project stakeholders (which includes the people doing the work) can agree to.
We don't NEED our estimates to be generated by formal methodologies. Subjective estimates backed up by years of experience are just as good, and often better, from a planning perspective.
This whole article strikes me as another programmer trying to show how dumb the business people are. Hey folks, good business people KNOW that estimating is hard and that it isn't objective. But just because something isn't objective doesn't mean it can't be done well. It is possible to build models that compensate for unknowns if you can decompose the problem enough to limit the unknowns to a well-defined, small, manageable few.
So, in the view of this PM, this is all just academic and has no bearing on the real world.
Re:Several points to be raised -- is it all academ (Score:3, Insightful)
Paul Robinson <Postmaster@paul.washington.dc.us [mailto]>
Why almost all the posts are off topic. (Score:4, Insightful)
Standard estimate (Score:4, Funny)
The deeper problem: why things really fail (Score:3, Insightful)
Yep, there's a deeper problem, and it's very simple. Suppose your manager asks you for an estimate, and you say "six months" because that's how long you think it will take. Your manager works out that the project will not succeed if it takes six months, and asks you if you can do it in four. If you say "Yes", you have just become a statistic.
Saying yes does not mean that you can do it if you couldn't before, it just means that you have lied to management, prevented them from doing their job properly. If your project would take six months, but it will not make money if it takes six months, then you simply should not start that project. Failing to realise that simple fact is the major cause of late/failed projects, IME.
If you have certain preconditions, then yes. (Score:3, Informative)
Two Cents From a Project Management Lifer (Score:3, Interesting)
If the business requirements have been properly defined and management discipline exercised to keep within the original scope, every estimate I've developed -- using a variety of methods over the years -- has been successful. But those instances where the specs continually change, where business requirements are "discovered" along the way, and/or where new requirements are added to the mix have all been failures. This has been true whether I've led teams doing something "no one's done before" or the "same old thing" again.
Kudos to everyone here who has posted information on the REAL solutions in the form of risk management, scope containment, good old-fashioned discipline, and the like.
With 25+ Years of Experience, Hubris and Humility (Score:3, Insightful)
To take a reductio ad absurdum:
You are given the task of duplicating the functionality of Windows NT. Furthermore, you are given the source code for Windows NT in a .tgz file and the associated development environment within which that source code can be tested. The question now degenerates into "How long does it take me to copy the tgz file?" That can be accurately predicted by measuring how long it takes to copy files on that environment in general, and the estimated schedule can be predicted to absurdly high degrees of accuracy with enough benchmarks of the system's file copying performance.
Here's another reduced complexity angle:
Take a program written in Visual Basic and convert it to C++ (readably).
You actually can sit down and convert a sampling of the program and get a measure of how long it will take you to do the whole thing -- the more you sample, the more accurate the measure right up to the point where you have converted the whole thing.
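The sampling approach amounts to a simple rate extrapolation; a sketch, with all of the line counts and hours invented:

```python
# Sketch of extrapolating from a converted sample: time a small slice of the
# conversion and scale by the total size. All numbers here are invented.

def extrapolate(sample_units, sample_hours, total_units):
    """Estimate total time from the measured rate on a sample."""
    return total_units * (sample_hours / sample_units)

# Converted 500 of 12,000 lines in 6 hours:
print(extrapolate(500, 6, 12_000))   # 144.0 hours
```

As the post says, the bigger the sample, the better the rate estimate, right up until the sample is the whole job.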
Here's another example with a bit less reduction in complexity:
You are given a working program but no source code, and some expert users of that program. Here we are getting into what might be thought of as "function point analysis," but really it is much easier and more accurate than that: since the program exists and works as it is "supposed" to work, you can bang away on it, and the expert users can bang away on your version of it to ensure it meets their needs - perhaps discovering that some of the features in the old program were not really used, thereby simplifying the task.
Each step has been away from the "absurd" position of simply copying a program which was, in a sense, a "spec" for itself.
At the other extreme, we get to the problem of "write a program that will make me as rich as Bill Gates". Note that this specification is not very specific.... it is very far from being source code for a program you can simply copy, isn't it? Guess what that says about the accuracy of the schedule?
So a lot of this hubbub about estimating software schedules is really hubbub about the nature of the program specification process.
comment on the posts (Score:4, Insightful)
It looks like several people (well, more than several) posted responses without reading beyond the lead-in. If you're one of them, yes, the argument here is in the general ballpark of "software estimation is hard or impossible", but it actually says something more specific than that.
The article does NOT say the following:
The article DOES say
From this, it does NOT conclude either of the points 1,2 above. Instead, it concludes:
Now some of the response posts, paraphrased:
No, it does not say this.
It also does not say this.
No, the article distinguishes subjective and objective estimates, and specifically discusses the case of an objective estimate with bounds in detail.
Ok, but slightly off topic: the article is specifically talking about those who claim objective estimates.
And where did you get an objective estimate of the complexity of a new project? Read the article...
Yes, you are. Your boss is monitoring you, get back to work.
Certainly. The 'manufacturing' camp of software estimators (Humphrey quote in the supplementary material [idiom.com]) say or hint that software construction can be made into a repeatable, fairly boring process where projects are always on time and programmers are like factory workers. This may or may not be true (I don't think it is), but regardless: to make this view seem more science than philosophy some of these people have fallen into the trap of cloaking their estimating process with formal notation and claiming or hinting objectivity. This part is wrong.
On the contrary, [conclusions to the article and the supplementary material]:
Re:Double the number, add one and raise the unit! (Score:4, Funny)
Rule of Threes (was Re:Double the number) (Score:2)
I learned this about 25 years ago, while at a startup that was trying to build a computer out of the then-hot 6502-class microprocessor. The company tanked, never fully delivering. The smarter folks there (alas, there were not enough common-sense smarts where needed, just comp-sci-smarts always looking for another feature) knew the real Rule of Three:
Take the amount of time you think it should take. Triple it. Then increment the unit of time.
So three days is nine weeks, two months is six quarters.
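For fun, the rule as a function; the unit ladder here is an assumption:

```python
# The Rule of Three as a (tongue-in-cheek) function: triple the number, then
# increment the unit of time. The unit ladder is an assumption.

UNITS = ["hours", "days", "weeks", "months", "quarters", "years"]

def rule_of_three(amount, unit):
    """Triple it, then bump the unit one step up."""
    return 3 * amount, UNITS[UNITS.index(unit) + 1]

print(rule_of_three(3, "days"))     # (9, 'weeks')
print(rule_of_three(2, "months"))   # (6, 'quarters')
```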
Double plus one is, well, just too optimistic. Of course there are a lot of people who understand a "rule of three" that forgets to increment the unit, so the rest is just quibbling.
But hey, Microsoft did finally deliver something labeled Cairo (XP)! Lessee, that was due in what, 1995?
And Linux, while ten years old, still manages its desktop (rendering, fonts, etc.) somewhat worse than the Win95 GDI did. Nobody's immune.
Re:Type of project (Score:2)
I once had a person stand up and tell me that his organization could estimate sizable projects with great accuracy. After some questions, it turned out that the projects were essentially the same thing again and again. Duh.
On the other hand, I usually do a pretty good job of estimating costs and schedules, even when there are some significant unknowns. So perhaps the situation is not as bad as some people are making it sound, either.
Strongly agree (Score:2)
You have a shippable system every 2-4 week cycle.
Each new cycle nets more features.
Re:Well organised software projects.... (Score:2)
--john
Re:Optimism and ego as a source of underestimation (Score:3, Interesting)
Carleton Sheets, a man who talked about how to buy real estate on his instructional tapes, said something useful that I decided I can apply to estimating time requirements for various fixes:
We need to learn to ask for the proper amount of resources, and to point out that less than the minimum makes it impossible to respond within the requirements, no matter how much someone wants it to happen. (As Brooks points out, it doesn't matter how many women you throw at the task; it still takes 9 months to produce a baby. Demand the baby be brought forth in less time and you either get a dead fetus (and possibly mother) or a sickly premature baby.)
We need to learn this because if you are consistently wrong in your estimates, you eventually get the "boy who cried wolf" syndrome: nobody believes you any more, and all of the estimating systems become what everyone knows they are: a joke. It's actually no wonder "most" projects end up being cancelled. They take too long (because the people who were supposed to implement them were too aggressive in what they would deliver) and cost too much (because they routinely run over, the estimate having been wrong in the first place).
Paul Robinson <Postmaster@paul.washington.dc.us [mailto]>