Are There Limits to Software Estimation?

Charles Connell submitted this analysis on software estimation, a topic which keeps coming up because it affects so many programmers. Read this post about J.P. Lewis's earlier piece as well, if you'd like more background information.
  • Take a look back (Score:2, Informative)

    by BaltoAaron ( 242546 ) on Friday January 11, 2002 @12:13PM (#2823536) Homepage
    This has been posted here before. [slashdot.org]

  • by squaretorus ( 459130 ) on Friday January 11, 2002 @12:16PM (#2823553) Homepage Journal
    A technique I think would work well for medium-sized projects is "build and burn."

    You build the project once through, cobbling things together to get as close as you can to what you want in a given time frame. You then start again from scratch.

    Version two of any software is always better, so get straight on with it. Involve the user towards the end of the initial build.

    You then spend time assessing how you would do it properly, hopefully having had a majority of 'niggles' highlighted during the initial sloppy build.

    I often do this for smaller projects, but think it could scale pretty well. If the initial build costs 20% of the total build time but lets you estimate the total time much more accurately, there are great business benefits to be had.
  • by jpbelang ( 79439 ) on Friday January 11, 2002 @12:40PM (#2823692) Journal
    The danger with constantly doubling estimates is that it leads to falsely large numbers for small projects.
    A project estimated at one day should NEVER take four days. A project estimated at three months could take a year.

    In my opinion, everything is about risk, and you seem to agree (the reason you double your time is generally to cover unforeseen events).

    So if risk is the problem, we have to reduce risk. How should this be done? The simple solution is to shorten your horizon.

    Instead of saying "this project of size X will be delivered in three months", deliver smaller increments more often ("this project of size X/12 will be delivered in one week"); a toy sketch at the end of this comment shows what the weekly re-planning looks like.

    This is extreme planning.

    So I'm an XP evangelist. sue me :)
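
    To make the shorter-horizon idea concrete, here is a toy Python sketch of the XP-style arithmetic (the point sizes and numbers are invented): deliver weekly increments, measure how much actually got done, and project the remaining work from that measured velocity instead of from a single up-front three-month guess.

        # Toy sketch (invented numbers): re-forecast the remaining work from
        # measured weekly velocity rather than one big up-front estimate.
        def forecast_weeks_remaining(remaining_points, completed_per_week):
            """Average the increments delivered so far and project the rest."""
            if not completed_per_week:
                return None  # no history yet -- any number is just a guess
            velocity = sum(completed_per_week) / len(completed_per_week)
            return remaining_points / velocity

        backlog = 120                 # hypothetical project: 120 "points" of work
        history = []
        for week, actually_done in enumerate([8, 11, 9, 10], start=1):
            backlog -= actually_done
            history.append(actually_done)
            print(f"week {week}: {backlog} points left, "
                  f"~{forecast_weeks_remaining(backlog, history):.1f} weeks to go")

    The arithmetic is trivial on purpose; the value is in the horizon. Every week the forecast gets corrected by real data, so an unforeseen event costs one short iteration of surprise rather than three months of it.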
  • by uqbar ( 102695 ) on Friday January 11, 2002 @12:48PM (#2823733)
    Alistair Cockburn [aol.com] has a number of excellent papers on this point.

    The net-net is that human factors are far more important - and it's really hard to plug these into an estimate. One of Cockburn's contentions is that people aren't linear or predictable. But he also identifies items that can help a project run more efficiently. An excellent read at any rate.

  • by mccalli ( 323026 ) on Friday January 11, 2002 @02:00PM (#2824305) Homepage
    "Quick tip: If most of your coding is utter drudgery, you're doing the wrong coding."

    I vastly (but politely) disagree. Most of absolutely everything work-related is utter drudgery, not just code.

    "...instead of wasting your life writing report after report, write report-generating tools."

    But the users don't want to write reports - they have other things to do. Report writing is my job.

    Now, if you're saying that I should be writing report-generating tools that I can make use of - you're right. I try to do that - most of my reports output to XML and are then formatted into csv/HTML/whatever by a set of XSL rules (a rough sketch of that step appears at the end of this comment). But if you're saying that I should be writing software that runs reports, I have to disagree.

    Point 2 I entirely agree with, and have no quibble with at all. Well, except to note, as a matter of miffed pride, that I dedicate a large chunk of my development time to wiping out other people's use of the copy-and-paste keys... :-)

    Cheers,
    Ian
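
    P.S. For what it's worth, here is a rough sketch of that XML-to-CSV step, assuming a report layout and field names I've made up; it uses Python with the third-party lxml package, which bundles an XSLT 1.0 processor (any XSLT engine would do the same job).

        # Rough sketch: the report engine emits XML, and a small XSL
        # stylesheet (hypothetical field names) flattens it into CSV.
        from lxml import etree

        REPORT_XML = b"""<report>
          <row><id>1001</id><owner>ops</owner><total>1250.50</total></row>
          <row><id>1002</id><owner>risk</owner><total>-310.00</total></row>
        </report>"""

        TO_CSV_XSL = b"""<xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="text"/>
          <xsl:template match="/report">
            <xsl:text>id,owner,total&#10;</xsl:text>
            <xsl:for-each select="row">
              <xsl:value-of select="id"/><xsl:text>,</xsl:text>
              <xsl:value-of select="owner"/><xsl:text>,</xsl:text>
              <xsl:value-of select="total"/><xsl:text>&#10;</xsl:text>
            </xsl:for-each>
          </xsl:template>
        </xsl:stylesheet>"""

        to_csv = etree.XSLT(etree.XML(TO_CSV_XSL))
        print(str(to_csv(etree.XML(REPORT_XML))))  # header line, then one CSV row per <row>

    Swap the stylesheet and you get HTML instead of CSV, which is exactly the appeal of keeping the report output in XML.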

  • It's real simple (Score:-1, Informative)

    by Anonymous Coward on Friday January 11, 2002 @02:14PM (#2824416)
    There are lies, damn lies, and project schedules. It doesn't matter if you're developing hardware or software. If you are a technology company it is your job to make something new that no one else has done before. How can you know how long it will take to do something no one has done?

    Take your software manager's estimate, and trust that a) he's telling you the truth to the best of his abilities and b) his team told him the truth to the best of their abilities. Take this estimate to your marketing staff, and send them out to get customers. Trust that they are a) getting customers interested and b) finding out what the customer REALLY needs and when they need it.

    Keep your development team informed about customers and deadlines, but don't take more than an hour a week for it. Make sure they feel incentivized to make the schedule; remember: people expect salaries for an 8-hour workday, but bonuses are what they work hard for.

    In the end tech workers are better paid because you're paying them to mitigate risk. You pay a factory worker minimum wage because you know exactly what he can and cannot do. Treat your developers well, hire the best you can, and that's all you can do.
  • by epine ( 68316 ) on Friday January 11, 2002 @04:54PM (#2825635)

    I've been playing around with the BitKeeper source control system for the last week. After reading this article I suddenly recalled that BitKeeper treats 2-way merge and 3-way merge as entirely separate features. N-way merge is not even discussed.

    In some ways N-way merge is merely a simple generalization of 2-way. The algorithmic complexity is not much different. The problem here is human scale. Humans cope well with 2-way merge as a daily activity, cope with 3-way merge at the level of focus required by air-traffic control, and don't cope with 4-way merge under any sane circumstance (a toy sketch at the end of this comment shows where the human cost comes from).

    BitKeeper solves the problem by designing the architecture so that merges can be performed hierarchically. This is a feature that CVS sorely lacks.

    Everyone knows that the success of projects is in large measure determined by whether the architecture obviates the need to delve into N-way hell.

    I also recall a project where a database supported two processes which concurrently updated the same dataset. During the design process we found a way to define the system so that each process was permitted to update a distinct set of columns, with maybe a column or two where one process was allowed to set a value and the other process allowed to clear it. Months of potential development effort were slashed at the stroke of a pen. The first design dealt with the concurrency problem in a different way; getting everyone to respect the subtle rules required by that design, everywhere, just doesn't happen on most projects.

    The best book on the subject is The Psychology of Everyday Things [jnd.org].

    What people tend to forget is that nuances of a software design create affordances with respect to the coding effort. When the pressure is on, people tend to grab onto the nearest handle. The handles hidden in the design have a momentous impact.

    Some of the most important affordances are second order effects [slashdot.org].

    The C++ language is often criticized for having a model of class protection which is easily violated. Yes, that's true as a first order criticism. However, C++ makes it fairly easy to figure out a way to manipulate the source code to find all the violations if you decide to look. These manipulations might be a temporary modification for the sole purpose of determining whether a certain kind of integrity exists. The C++ community doesn't lose any sleep over the first order weakness of the class protection model. We all know that violators are playing a dangerous game.

    On the other hand, there are certain kinds of abuse in the C language where it's practically impossible to turn up the smoking gun short of a complete source code audit.

    The difference is not that C++ prevents programmers from abusing abstractions, but that it provides the necessary affordances to catch the people who do. The importance of these second order effects is vastly underestimated by those who plan.

    You can see the extent of the problem wherever mouthy mights thrive: you know, the people who always shout "it might happen" when the downside of anything they oppose is mentioned, as if might were an adverb of quantity. The implicit logic is that only a first order guarantee is sufficient, yet the recent study shows what everyone already knows: second order affordances generally suffice.

    My experience is that projects are a morass of non-quantifiable psychology, experience, and intuition. The second order effects are never discussed on paper. It's left up to the cohesion of the team to impose the second order effects that make the first order effects possible.

    It would be far more useful for the estimation people to spend their time figuring out the conditions under which a project becomes non-viable. Offer the programmers some kind of handle for coming back to management with their concerns about faulty second order effects, in language a whole lot less vague than what I'm using.

    Wouldn't it be a fine start just to be able to limit ourselves to projects where the outcome is somewhat proportional to the effort expended? If we had proportionality already, the kind of estimation we have now would be a second order concern in its own right, rather than a masturbatory mission impossible.
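
    To put the 2-way/3-way/N-way point in concrete terms, here is a toy line-by-line sketch in Python. It has nothing to do with BitKeeper's actual algorithm, and it assumes the simplest possible case where no lines were inserted or deleted, so the versions can be compared position by position; real tools align the lines with a diff first.

        # Toy 3-way merge: two people edited `ours` and `theirs` from a common
        # `base`. A human is only dragged in when both changed the same line.
        def three_way_merge(base, ours, theirs):
            merged, conflicts = [], []
            for i, (b, o, t) in enumerate(zip(base, ours, theirs)):
                if o == t:           # both sides agree (or neither touched it)
                    merged.append(o)
                elif o == b:         # only "theirs" changed this line
                    merged.append(t)
                elif t == b:         # only "ours" changed this line
                    merged.append(o)
                else:                # both changed it differently: conflict
                    merged.append(f"<<< {o} ||| {t} >>>")
                    conflicts.append(i)
            return merged, conflicts

        base   = ["a", "b", "c"]
        ours   = ["a", "B", "c"]
        theirs = ["a", "b", "C"]
        print(three_way_merge(base, ours, theirs))   # (['a', 'B', 'C'], [])

    With N concurrent versions, every line needs agreement among N parties, and the number of disagreeing pairs a human might have to reconcile grows roughly as N*(N-1)/2. Merging hierarchically, two versions at a time, keeps each individual decision at the 2-way or 3-way level humans can actually handle, which is the architectural point above.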

"Gravitation cannot be held responsible for people falling in love." -- Albert Einstein

Working...