
Are There Limits to Software Estimation?

Charles Connell submitted this analysis on software estimation, a topic which keeps coming up because it affects so many programmers. Read this post about J.P. Lewis's earlier piece as well, if you'd like more background information.
  • by laserjet ( 170008 ) on Friday January 11, 2002 @12:05PM (#2823498) Homepage
    We all know that software schedules, etc. can be estimated, but not with a large degree of accuracy. It has always and will always just be a case of risk management, and whether you want to release early to market, or release late and have a better product.

    In the real world, we don't go by some estimation or rigid schedule, and we wouldn't have to if not for the accountants and marketing people who have to prove their usefulness. THEY are the people who want estimates, and incredibly, they are also the people who have the least idea as to what is required.

    • In the real world, we don't go by some estimation or rigid schedule, and we wouldn't have to if not for the accountants and marketing people who have to prove their usefulness. THEY are the people who want estimates, and incredibly, they are also the people who have the least idea as to what is required.

      Not always the case. You're thinking about, essentially, a 'shrinkwrap' environment. Early/late to market and all that kind of stuff. However, a large amount of code is written in-house for in-house use only. Not software houses, but places like banks, government...basically any large institution with a fair few IT systems.

      Your users there aren't developers, and many are more used to work which follows predictable patterns. That's not to say their work isn't hard or technical; it's just that the nature of many tasks tends to be more predictable than that of writing code.

      Most code is utter drudgery. It's predictable in an informal manner to a very high degree. These users get used to that. If you then predict three days on something, hit an actual problem and take three weeks - they will hold you to account. Your explanation at that time had better be good.

      Cheers,
      Ian

      • by dubl-u ( 51156 ) <2523987012&pota,to> on Friday January 11, 2002 @01:49PM (#2824226)
        Most code is utter drudgery. It's predictable in an informal manner to a very high degree.

        Quick tip: If most of your coding is utter drudgery, you're doing the wrong coding.

        The potential drudgery I see tends to come from two sources: 1) users tend to ask for very similar things (e.g., a zillion slightly different reports), and 2) tools that are a poor match to the problem domain.

        For the first case, users with similar requests, you give the user control and tools to support that control. So instead of wasting your life writing report after report, write report-generating tools.

        For the second case, you gotta buy or build better tools. Or, possibly, learn to use the ones you have better. For example, if you're using an OO language, stop using your copy and paste keys. (Why? When you copy and paste chunks of code, you're saying that two things are very similar. Instead of copy-and-paste, abstract the problem using, say, inheritance or containment or delegation. Copy-and-paste yields maintenance nightmares.)

        Computers do drudgery without complaining, and they do it much faster than you. Make them do your donkey work!
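[A minimal sketch of the copy-and-paste point above, with invented function names. The duplicated version must be fixed in two places whenever the format changes; the abstracted version keeps the shared shape in one place:]

```python
# Copy-and-paste version: two near-identical report functions that must
# be fixed in two places every time the format changes.
def sales_report(rows):
    lines = ["SALES"]
    for name, amount in rows:
        lines.append(f"{name}: {amount:.2f}")
    return "\n".join(lines)

def expense_report(rows):
    lines = ["EXPENSES"]
    for name, amount in rows:
        lines.append(f"{name}: {amount:.2f}")
    return "\n".join(lines)

# Abstracted version: the shared shape lives in one place (here plain
# parameterization; inheritance or delegation would serve the same goal).
def report(title, rows):
    lines = [title]
    for name, amount in rows:
        lines.append(f"{name}: {amount:.2f}")
    return "\n".join(lines)

def sales_report_v2(rows):
    return report("SALES", rows)

def expense_report_v2(rows):
    return report("EXPENSES", rows)
```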
        • Quick tip: If most of your coding is utter drudgery, you're doing the wrong coding.

          I vastly (but politely) disagree. Most of absolutely everything work-related is utter drudgery, not just code.

          ...instead of wasting your life writing report after report, write report-generating tools.

          But the users don't want to write reports - they have other things to do. Report writing is my job.

          Now, if you're saying that I should be writing report-generating tools that I can make use of - you're right. I try to do that - most of my reports output to XML and are then formatted into csv/HTML/whatever by a set of XSL rules. But if you're saying that I should be writing software that runs reports, I have to disagree.

          Point 2 I entirely agree with, and have no quibble with at all. Well except to note, as a matter of miffed pride, that I dedicate a large chunk of my development to wiping out other people's use of the copy and paste keys... :-).

          Cheers,
          Ian
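[The XML-then-XSL pipeline described above can be approximated with a stdlib-only sketch; this uses Python standing in for the XSL step, purely for illustration, and the element names are invented:]

```python
import csv
import io
import xml.etree.ElementTree as ET

def report_xml_to_csv(xml_text):
    """Flatten a flat <report><row>...</row></report> document into CSV.

    A stdlib stand-in for the XSL step described above: once the report
    data is XML, any number of output formats can be generated from it.
    """
    root = ET.fromstring(xml_text)
    rows = root.findall("row")
    out = io.StringIO()
    writer = csv.writer(out)
    if rows:
        # Header comes from the child tags of the first row.
        writer.writerow([field.tag for field in rows[0]])
        for row in rows:
            writer.writerow([field.text for field in row])
    return out.getvalue()
```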

          • But the users don't want to write reports - they have other things to do.

            Agreed! But if they can do it themselves in a few minutes or spend a few hours waiting for you to become available, most of them will do it themselves. Note the rise of Wal-Mart; people say they want service even if it costs a little more, but most of them really prefer cheap self-service.

            I'm not saying you can automate everything away and just have them mail you your check. But in my experience, about 80% of what people want from reports is very similar. Giving them the power to do that easily (via, say, simple web-based report generators and data browsers) lets you devote your attention to the 20% of your report-writing that's actually interesting.

            Report writing is my job.

            Nope! Your job is to help people to get the information they need to do their jobs better.

            Your boss may think your title is Senior Report Writer, but there's no need to confuse him by telling him the truth. By focusing on the need rather than a particular solution, you can have more fun and give the business more value for their money.
        • Exactly. At a place I worked, users in every department wanted tons of reports in vastly different ways. Solution? Make simple views of the database (with the complex joins and stuff already done), buy seats of Crystal Reports, and teach one or two users in each department to use it (a two-day class). Crystal is drag and drop. Not hard. They loved it.
    • In the real world, we don't go by some estimation or rigid schedule, and we wouldn't have to if not for the accountants and marketing people that have to prove their usefulness.

      In the real world, companies have to be profitable and keep clients happy by delivering projects on time.
      • I understand your point, but it is like trying to look at a piece of glass 20, 30, 50, or 100 feet away and tell how long that piece of glass is. Sure, you can get pretty close most of the time, and that's all you can do: give your best guess. Therein lies the problem. At different distances it may be even harder to guess the appropriate time, and the further out you go, the harder it is.

        I know in the real world we all have deadlines to meet, but also in the real world there are unexpected problems, weird people, equipment failures, sabotage, and a number of other factors that are just too hard, or don't make sense, to calculate when estimating. You can't calculate random occurrences; you can calculate MTBFs and such, but even that is just a guess.

  • by Nijika ( 525558 ) on Friday January 11, 2002 @12:06PM (#2823502) Homepage Journal
    There are always things you won't consider until something's being developed. If you've done something a thousand times, and have the libraries developed, then you can probably estimate the time required very accurately. If the request is something completely new to your team, you'll never be able to accurately estimate the time required until analysis (which takes its own time as well).
    • A technique I think would work well for medium-sized projects is build and burn.

      You build the project once through, cobbling things together to get as close to what you want in a given time frame. You then start again from scratch.

      Version two of any software is always better, so get straight on with it. Involve the user towards the end of the initial build.

      You then spend time assessing how you would do it properly, hopefully having had a majority of 'niggles' highlighted during the initial sloppy build.

      I often do this for smaller projects, but think it could scale pretty well. If you spend 20% of the total build time on the initial build - but that lets you estimate the total time more accurately, there are great business benefits to be had.
      • If you want to build an application that the user wants/needs and will use, you need to involve them towards the beginning not the end. The cost of fixing a mistake (in terms of man-time) is dependent on when the mistake is made and when it is discovered i.e. if a mistake is made during requirements definition e.g. a feature left out, it is much cheaper if the problem is found out during review of requirements definition than if it is discovered during the final acceptance testing. Steve McConnell recommends building a user interface prototype which you get approved by the actual users and which then becomes part of the requirements definition.

        Fred Brooks said "plan to throw one away," meaning your first attempt at solving the problem may turn out to be a crock of s**t. Somebody else said "if you plan to throw one away, you'll end up throwing two away."
      • You then spend time assessing how you would do it properly, hopefully having had a majority of 'niggles' highlighted during the initial sloppy build.

        I read once about a phenomenon a lot of people have observed, namely 'second-system syndrome'. This is where the second system is over-engineered and excessively grand, due to all the problems that had been encountered in the first version. 'We've done the first version, we can handle this, so let's make a proper one now'.

        I have seen this - it would point to the third version being about right =).

        thenerd.
      • That's not the way CMM works; with a structured process you get things right the first time. I admit it is a huge pain in the butt. But it certainly increased our productivity and scheduled releases. I believe that there are only 2 Level 5 organizations in the US: NASA, and the ALC (Air Logistics Center?, at Ogden). And at Level 5 they don't even need a testing organization to release mission-critical applications.
      • Yep! The "first pancake" approach is a good one. And, as other posters have noted, a venerable one. Why? As you point out, you discover a lot as you go, both about the user needs and about how to do them well.

        But you can go further with this! Any short-cycle iterative model will give you this sort of feedback every couple of weeks, rather than at the end of every version.

        For more info on it, take a look at Rapid Development's [construx.com] discussion of evolutionary delivery. Or try a process that's built around feedback, like Coad's Feature Driven Development or, my personal favorite, Beck's Extreme Programming (XP) [jera.com] (dumb name, but a great process).
  • by FortKnox ( 169099 ) on Friday January 11, 2002 @12:08PM (#2823506) Homepage Journal
    There is only one way to make a good estimate on a software project:

    Experience

    It looks to me like someone just had too much time on their hands, and decided to say that in a very, very complex manner.

    Sheesh.
  • Can be best based on these [sunspotcycle.com].
  • Metrics and processes worry me to some extent on this particular topic, because often times it seems that managers think that anyone can apply a few algorithms to a set of data and come up with an estimate.

    What's truly important is that intuitive feel that people develop over time for what the bottlenecks will be, how their particular organization operates, etc, etc...

    You can teach number-crunching, but you can't develop that intuition without experience.
  • by frankie_guasch ( 164676 ) on Friday January 11, 2002 @12:13PM (#2823535)
    Rapid Development : Taming Wild Software Schedules
    by Steve C McConnell
  • Take a look back (Score:2, Informative)

    by BaltoAaron ( 242546 )
    This has been posted before here. [slashdot.org]

    • This has been posted before here. [slashdot.org]

      Hey... you're right... and I seem to recall having seen a link to that before, too! Let me think...

      Oh, now I remember. The top of the page, second sentence of the submission.

      Thank goodness we have you here to help weed out redundancy.
  • 2x+7 (Score:5, Interesting)

    by Lizard_King ( 149713 ) on Friday January 11, 2002 @12:15PM (#2823544) Journal
    In a software engineering class in college, I remember a professor joking around that the catch-all equation for software estimation is 2x+7, where x (which can be in any units: hours, days, weeks, minutes) is your estimate for how long you think the component will take. So, for example, if one of your developers estimates that developing some component will take 4 hours (so x = 4), in *reality* it will take them 2x+7 = 15 hours to complete.

    After gaining a few years of "real world" experience in software engineering (and I know that the very term real world experience is debatable :-), I'm realizing that this professor wasn't that crazy, and his crude estimation mechanism (which is a joke) isn't any more or any less accurate than a lot of modern techniques I have seen people use in the field.
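[The professor's tongue-in-cheek rule, spelled out as a one-liner for the record:]

```python
def two_x_plus_seven(x):
    """The professor's catch-all rule: whatever you estimated, the real
    figure is 2x + 7 in the same units."""
    return 2 * x + 7
```

So a 4-hour estimate becomes 15 hours, exactly as in the comment above.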
    • Comment removed based on user account deletion
    • I double the initial estimate and go to the next higher unit.

      When you factor in specification meetings, design reviews, actual coding, testing, and finally release (lather, rinse, repeat) it can be remarkably close.


      SuperID

    • You may look at it as four hours, but I look at it as 1/10 of a work week. Does that mean that in reality it will take 7.2 weeks?
    • Heh heh. The "rule" that I learned from my first year engineering professor was that you take the estimate, x, and the actual time (and/or cost) will be kx, where k is some number between e and pi (e ~= 2.7183, and pi ~= 3.1416).

      It's not a bad rule. Engineers (and programmers) tend to think that things cost a lot less and take a lot less time than they actually do.

      Cryptnotic
  • Getting a price tag for software development is like knowing how much you're going to pay to build a new house. Software is incredibly expensive to build. Any professional needs to be able to say: it will take so and so long, and that means such and such a price tag.

    The risk and uncertainty stem IMHO from two factors: the importance and rarity of talent and skill (a really good programmer can work ten times faster and produce a finer result than a 'normal' programmer); secondly, the inventive nature of much software development. When you make something new it's impossible to know what surprises you will get.

    The more one works with standard pieces and the less one depends on extremes of talent and skill, the more predictable software development is.
  • my last company i worked for... we developed a proprietary program for internal use only to catalogue what we had in a "nice, friendly" environment (it was a procurement company).. of course the code would never fully be completed, but it was a very long and tedious procedure due to QA... in my opinion, i usually think QA has probably one of the biggest roles with development.. without them we'd be releasing buggy programs left and right
  • by CDWert ( 450988 ) on Friday January 11, 2002 @12:16PM (#2823554) Homepage
    I have been in this industry for what often seems too long; my father was in it from the beginning, 1962. When I was younger and he asked me how long I thought something would take to write, I blurted out my answer, and he said, "No: X." I said, "Noooo, that's way too long. How did you arrive at that?"

    Here was his answer. I have ALWAYS found it accurate to +/- 10% so far, on hundreds of small to massive projects.

    1. Once you know all, or most, of the foreseeable factors, take that first number. Say 10 hours.

    This number is an instinctual reaction: a perfect environment, a little experience, some ego on your part about what might be accomplishable in a vacuum.

    2. Take that number and double it.
    This takes into account all the real-world distractions, events, etc.

    3. Take that number and double it again. This takes into account unseen variables and events beyond mortal control.

    40 Hours.........

    I use this on EVERY single estimate I provide. WHY?? It works; it's not too high, not too low, just right.

    I tell people this and they laugh; then I tell them that there are MANY legacy applications (SSI, IRS, FBI, you name it) that were quoted by my father in this EXACT manner.

    There is NO practical limit to estimation, as long as you have the information necessary to determine what the job you're actually doing is.
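[The three-step doubling rule above, as a sketch; it amounts to multiplying the gut number by four:]

```python
def four_x_estimate(gut_hours):
    """The three-step rule from the comment above: start from the
    perfect-world gut number, double it for real-world distractions,
    then double it again for events beyond mortal control."""
    with_distractions = gut_hours * 2         # step 2
    with_the_unknown = with_distractions * 2  # step 3
    return with_the_unknown
```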
    • by jpbelang ( 79439 ) on Friday January 11, 2002 @12:40PM (#2823692) Journal
      The danger with constantly doubling is that it leads to falsely large numbers for small projects.
      A project estimated at one day should NEVER take four days. A project estimated at three months could take a year.

      In my opinion, everything is about risk, and you seem to agree (the reason you double your time is generally unforeseen events).

      So if risk is the problem, we have to reduce risk. How should this be done ? The simple solution is shortening your horizon.

      Instead of saying "this project of size X will be delivered in three months", deliver smaller increments more often ("this project of size X/12 will be delivered in one week")

      This is extreme planning.

      So I'm an XP evangelist. sue me :)
      • > A project estimated at one day should NEVER
        > take four days.

        Erm, right. Your programmer gets run over - you have to get someone else in to do it, you have to find them, interview them, get referees - now the project has taken over a week.

        You may estimate one day's worth of work, but in reality you have to try to plan for the unexpected, which is what project planning is all about.

        Write a list of everything that "could" go wrong, and any bets someone, somewhere has had it happen to their project. Just a quick two minutes' thought and I come up with:

        1) Personal problems - birth, death, illness
        2) Transport problems - train strikes in the UK, for example
        3) Electrical problems - power cuts, workmen cutting power lines
        4) Server dies - backups are off-site and will take a day to recover

        They may not all happen to the same project, but if they do you will wish you doubled that estimate again.
    • by pubjames ( 468013 ) on Friday January 11, 2002 @12:45PM (#2823717)
      I use this on EVERY single estimate I provide, WHY ?? It works, its not too high not too low, just right.

      A task will expand to fill the time available (Parkinson's Law)... That's why your method of estimation always seems just right.
    • I prefer the adage: times 2 plus 1,

      where you take a programmer's estimate and double it
      then increment the label by one unit,

      so if you think it will take 2 hours,
      it will really take 4 days.

      or if you think it will take 4 days,
      it will really take 8 weeks.

      If you think it will take 1 month,
      it will really take 2 years.

      Programmers almost always seem to be overly optimistic, poor time managers, procrastinators, or over confident in their own abilities.

      Of course it gets more complicated when you have more than one person programming. For every person on the project beyond 1, add 10% total time.

      (If you really use these figures you're a moron.)
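[The "double and bump the unit" rule sketched below; the full unit ladder is an assumption, since the comment only gives hours-to-days, days-to-weeks, and months-to-years examples:]

```python
# Assumed unit ladder, extrapolated from the examples in the comment.
UNITS = ["hours", "days", "weeks", "months", "years"]

def double_and_bump(value, unit):
    """Double the number and move its label up one unit."""
    i = UNITS.index(unit)
    return value * 2, UNITS[min(i + 1, len(UNITS) - 1)]
```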

    • I typically use something like the following to counteract my instinctual tendency to undercut myself. Because separating work from home life is so difficult, I can't assume I'll always have that time free for the project, and that makes correct estimates hard.


      Let x be the first number in days that pops into your head when you think about how long a project will take.

      If the project requires you to work with someone else at any stage (including QA), let x = x*2.

      If you usually work on weekends or after hours (or before hours, in the case of a flipped engineer's schedule), let x = x*1.5.

      If you are in a bad mood when the number of days pops into your head, you weren't really paying attention to what the project was or you simply aren't in the coding frame of mind, let x = x / 0.9.

      If you haven't felt modest in the last 24 hours, let x = x * 1.2.

      If the project requires any type of documentation, let x = x*1.2.
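[The checklist above translates directly into a small function; the multipliers are taken verbatim from the comment, and the flag names are invented:]

```python
def adjusted_days(x, with_others=False, works_overtime=False,
                  bad_mood=False, feeling_immodest=False, needs_docs=False):
    """Apply the comment's self-correction factors, in order, to a gut
    estimate of x days."""
    if with_others:        # anyone else involved at any stage, including QA
        x *= 2
    if works_overtime:     # weekend / after-hours time assumed available
        x *= 1.5
    if bad_mood:           # wasn't really paying attention to the project
        x /= 0.9
    if feeling_immodest:   # no modesty in the last 24 hours
        x *= 1.2
    if needs_docs:         # any type of documentation required
        x *= 1.2
    return x
```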
  • Gut level formula (Score:1, Interesting)

    by Anonymous Coward
    Look at the initial proposal for at least a few days if it's major.
    Take your gut feeling about the development time required.
    Multiply that by 20.
    Divide the result by the number of years you've been developing similar projects.
  • by foo(foo(foo(bar))) ( 263786 ) on Friday January 11, 2002 @12:17PM (#2823563) Homepage
    I work on a very large software project: 6 million+ lines of active source code, with 400,000 new development hours per year and growing. And we are on our estimates well over 80% of the time (if we don't hit it, we are under).

    How is this possible? It has evolved over time; some of the same people who started this project 9 years ago are still here, and they know the system very well. That knowledge, combined with good project management, leads us to several categories. During a requirements phase, designers assign a complexity to the changes for a module, and based on the type, an hours estimate is generated.

    Now, Lewis is right, no algorithm can be developed to figure out the complexity, but a human can, and the computer can figure out how many hours should be devoted to documentation, coding, and testing.

    My overall point: as a software product matures, estimates are easier to make and project dates are easier to meet. But you already knew that...
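[One way the category-to-hours scheme described above might look; the categories and all figures here are invented for illustration, since the comment gives no numbers:]

```python
# Hypothetical category table: the comment says a human assigns a
# complexity, and hours for documentation, coding, and testing are
# derived from it by the computer.
HOURS_PER_CATEGORY = {
    "low":    {"documentation": 4,  "coding": 16,  "testing": 8},
    "medium": {"documentation": 8,  "coding": 40,  "testing": 24},
    "high":   {"documentation": 16, "coding": 120, "testing": 80},
}

def module_estimate(category):
    """Total hours for one module change, given its assigned complexity."""
    return sum(HOURS_PER_CATEGORY[category].values())
```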
    • I suppose his conclusion is along the same lines as that there is no algorithm that can determine whether another given algorithm is absolutely correct for any given input.

      Basically, you try your best to make it perfect, but you can never be 100% certain.

  • I haven't read the article (I plan to), but from the post, I get the impression it is more for Project Manager and Exec types. Experienced developers know first hand how hard it is to estimate development time.

    What I would like to know is, how is this going to affect expectations from non-technical people in charge of projects who demand "accurate" estimates? I've had good and bad managers; maybe these kinds of articles will help make developers' lives a little less stressful and more flexible.

  • by Anonymous Coward on Friday January 11, 2002 @12:19PM (#2823572)
    Programmers that I've worked with have almost always intuitively known this to be true, and non-programmers (in particular, product managers responsible for scheduling) have almost never understood this. Hence the frustration on both sides.

    I'm glad there's finally a resource to help the folks who insist on accurate estimates understand why my response to the inevitable inane question is always a cynical "two weeks", regardless of the complexity of the problem.

    • Art VS Engineering (Score:3, Interesting)

      by Srin Tuar ( 147269 )

      Programmers that I've worked with have almost always intuitively known this to be true, and non-programmers (in particular, product managers responsible for scheduling) have almost never understood this.


      Those in the "Programming is an Art" camp tend to agree that there is no real way to estimate how long doing something new is going to take.


      Those who think of programming as simply bulk engineering, repetitive, boring, or just "coding" tend to be frustrated by this seeming fact. It is almost irreconcilable with normal business practices not to know how long a job will take until it is actually done. This makes it extremely difficult to make close-ended contracts, and to predict budgets.


      Asking how long a particular software job will take is often equivalent to asking how long a research job will take.
      I'm sure the scientists would be amused if a suit walked down into R&D and asked them when they would be "done" ;)

    • As a programmer-turned-product manager, I know this, but unfortunately, can't do much about it to help my R&D team.

      If I want magazine ads, I need to have those complete 2 months in advance -- no changes. If I have existing or potential customers who are waiting for a new version of our software before buying, I need to tell them when it will be ready so that they can accommodate us into their timelines. "It'll be ready when it's ready" is generally not an acceptable answer.

      We put in buffer time for our releases, but at some point we have to commit to a time frame to someone. If our dev time exceeds estimates, we (at best) look bad or (at worst) lose a lot of sales -- if we can't deliver, customers will go elsewhere.

      I sympathize with our dev team, and (depending on the situation) I'm willing to move the release date or drop features if needed, but sometimes the cost of making those adjustments is huge. At some point, I need an accurate estimate.

      I understand the problem, but it still doesn't help anything. Frustration on both sides still.

  • by (trb001) ( 224998 ) on Friday January 11, 2002 @12:21PM (#2823583) Homepage
    A problem that seems to come up in scheduling and time estimation is that the people producing the estimates aren't the people doing the actual work. Add onto that the customer giving additional requirements, changing requirements mid-project, putting together a team that doesn't have the skills necessary to produce on-time deliverables, etc. ... that's a LOT of variables.

    I don't want to sound like that programmer who makes excuses for why their project isn't delivered on time ("That other guy was a moron", "Management is horrible", "We didn't have solid requirements") but IMHO, if you want a program delivered on time, pick a good team and then try to estimate the amount of time it will take...then reduce that by 20%. It seems like every project is late by about 25% or more, so if you reduce it initially, perhaps it will be delivered closer to when you really expected it.

    --trb
    • trb001 writes:

      A problem that seems to come up in scheduling and time estimation is that the people producing the estimates aren't the people doing the actual work.

      A good project manager will make sure to ask developers for their own estimates and make use of that information.

      Add onto that the customer giving additional requirements, changing requirements mid project,

      If you are estimating from a documented specification, then these changing requirements can be estimated and documented. "It will take us six months to complete this specification, if you want to add this feature it will add two months to the schedule, just how important is this feature again?"

      putting together a team that doesnt have the skills necessary to produce on time deliverables

      Good people are hard to find, but the project manager can be reasonably expected to know who is working on the project and how competent they are, and plan accordingly.

      ...estimate the amount of time it will take...then reduce that by 20%.

      Are you kidding?!? Don't reduce the time estimate. Even if everything goes perfectly, the chances of getting everyone to do what they said in 80% of the time are almost zero. Generally, depending on the complexity of the project, I take my best estimate of how long it would take in a perfect world and increase it by 100-200%. People call in sick, unexpected complications arise; there are all sorts of things that will increase the schedule, and those are the hard-to-predict parts of estimation.
      • A good project manager will make sure to ask developers for their own estimates and make use of that information

        ...and...

        Good people are hard to find, but the project manager can be reasonably expected to know who is working on the project and how competent they are, and plan accordingly.

        ...are DREAM statements in my experience. When contracts are bid, most (many?) are bid before the team has been completely assembled. The project I'm currently on, I came in on as a subcontractor years after the start and 6 months before we delivered a release (which ended up being about 7-8 months late). Our staff has changed a bit even since I came on board. Hirings, firings, quittings, etc., change the way a project not only delivers, but works together. You get an annoying QA guy who rejects everything you want to implement: that's bad. You get a lax QA guy who approves EVERYTHING everybody implements: that's just as bad.

        One ideology that contracts (government contracts especially) seem to embrace is "if we put more people on the project, it looks like we're trying to produce more work." AND THIS ACTUALLY PACIFIES GOV. AGENCIES! My project was a good case in point: we added a ton of people around the same time I came on, and productivity... dropped, because not only did we have to do our work but also the work of the other people (who aren't on the project anymore, I might add).

        ...estimate the amount of time it will take...then reduce that by 20%.

        Let me rephrase/clarify: you bid X amount of time (because, to be honest, the people who place these bids ALWAYS underbid time; they're competing, after all, on who has the lowest cost/time/best design, and extensions are easier to get than contracts). You then have an internal deliverable schedule that is reduced to 80% of that. IME, people will always be late, but they will try to hit the mark, so if you attempt to finish at 80% time, you may actually make it by 100%.

        Maybe if you had a reliable team ahead of time, could guarantee they were all sticking around from beginning to end, had competent managers and especially designers, and a time frame that was reasonable, you could apply an algorithm to find out what kind of schedule you could deliver within. And if you ever find that, please email me... I want in :)

        --trb
        • OK, you're dealing from the world of bidding for contracts, I'm dealing with the world of in-house development. That's where our differences seem to lie.

          In in-house development, you know who is on staff already, and you can ask people for opinions during the specification phase, and estimate according to the actual skill levels of actual people. These not only aren't dreams, but they're expected.

          What you describe as reducing the schedule by 20% I would call increasing the schedule by 25%. Semantics aside, I don't think you are making enough of a jump between the time people say their work would take and the time their work is likely to take in real world conditions.
    • people producing the estimates aren't the people doing the actual work.

      Absolutely true. Not only are the people doing the work more likely to give good estimates, but people also work much harder to meet their own estimates rather than somebody else's numbers.

      estimate the amount of time it will take...then reduce that by 20%

      Absolutely false. This is the worst thing you can do for morale. The programmers will know they are working to bullshit targets. And then when they miss the fake deadlines, they'll be stressed and grumpy for the last 20% of the work, meaning you'll get that last part slower and with poorer quality than you otherwise would have.
  • This reminds me of a paper I came across on the limits of formal methods (http://www.kneuper.de/a-limits.htm).

    You can prove philosophically the limits of mathematical methods, but that doesn't make them useless. A formally proved system, when put in contact with an informal world, may show itself to have limits, but it'll probably perform better than a system that hasn't been formally proven, and if it does fail, the reason for the failure will be glaringly obvious.

    We build systems of ever-increasing complexity with tools that are constantly playing catchup. Does that mean we ignore the tools? I don't think so. Instead, we reflect and improve.
  • Can't all programmers have the same deadline that the guys at Duke Nukem Forever have?


    "It will be done, wheneven it is done"

  • by johnburton ( 21870 ) <johnb@jbmail.com> on Friday January 11, 2002 @12:29PM (#2823626) Homepage
    In real life it's rare to be asked for an estimate of the time required.

    What usually happens is you get told roughly what to build and the final date by which it needs to be ready. A series of negotiations and compromises on the scope of work then takes place until everyone is "happy".

    I suppose that doesn't really invalidate the point of the article at all; it's just an observation for those who think that estimation is the exact science it is sometimes presented as being.
    • Yes, people want dates, not lengths of time, from the IS department. However, within the IS department, estimating the length of time a project will take is a critical piece of information to have when figuring out what date to tell people (or when to tell people that the date they imposed on you just won't happen).
      • He's saying more than that.

        In my experience, most people turn up with both a rough list of features and a rough target date. It is very rare that they will want 100% of the features without regard to the date; instead, people are willing to cut features to bring things in on a schedule that suits them.
  • by Baba Abhui ( 246789 ) on Friday January 11, 2002 @12:33PM (#2823651)
    I'm going to have a great reply to this important story. It's going to have all the latest stuff - it will be broken down into paragraphs and have a high degree of relevancy. My reply will be ready in two weeks, give or take a month or so, if the powers that be decide it also must contain links and be spelled correctly.
  • The kind of development being done is going to have a large part to play in how well the time and budget can be estimated. Projects that build applications automating known systems using strong toolkits should be easier to estimate than leading-edge mathematics- and science-driven projects.

    When this comes up I always think of the evil officer pointing his gun at the scientist and saying, "You will launch the new rocket by midnight or I will kill you." As if somehow stress will make the careful scientific work go faster.

    I also wonder if this is a chaos problem. If someone could make really good estimates, would knowledge of the estimate affect how the project is carried out, causing the estimate to be wrong? Or would being able to make good estimates cause management to underestimate even more often?

    I would like to see results from projects estimated by an independent party that does not tell the primary parties the results until after the project is finished. Would those estimates be correct more often?
  • With all the hype surrounding XP/Agile/name-of-the-week-type development, I've been looking for hard numbers on how much better it is than older development styles. So far, I haven't been able to find anything accurate. It really comes down to multiple projects vs. performance. There is no hard data yet on the speed of XP against all project/component types. My biggest concern is that some manager will read up on XP, see a line that says it cuts your development time/cost by some percent, and then draft a memo and adjust all targets.

    Do these sort of numbers exist out there yet and is it even worth doing them given the theme of the article? Thanks.
  • Someone needs to tell the fine author of this article to RTFA [idiom.com]. Once you've read that, you realize this entire article is a straw man argument. When you know all of the factors involved, and you know FROM EXPERIENCE the likely production times of various parts, you can estimate with some degree of accuracy. But the point remains that whenever you have ANY unknown area in your project, you never know what you're going to find when you overturn that last stone.

    And that's the point of a healthy pessimism in estimates; when the estimates are good, it's a matter of experience, not methodology. As you read through the comments on this article, you'll notice that everyone who has a method that sounds really sensible is relying on experience and the input of programmers, not on a pure methodology.

  • The best way to estimate is to give a shorter time frame; this makes the developers work harder (faster?). If you give a large time frame, they work slowly. No, I'm not management, I'm a developer, I just call it like it is. You always see people scramble and whip out code right before a deadline.
  • The trouble is that people always leave things out of the schedule. For instance, maybe 30% of those reading this post are supposed to be writing software right now, but nowhere in the schedule does it say "time spent pissing about on /.: 2 weeks".

    Stupid topic; it depends on what you're doing, duh: the nth e-commerce site - predictable;
    anything interesting (which by my definition means something that hasn't been done before) - unpredictable.

    The first rule of software schedules is that things always take at least twice as long as you think, even when you allow for the first rule.

    Or to put it another way, the first 90% takes 90% of the time, and the remaining 10% takes the other 90%.

    So, it's actually stupid to try to produce a valid schedule. If you estimate 2 weeks it will take about 4. You might think it smarter to change the estimate to 4 weeks, but then it will take 8, so you may as well estimate 2 weeks and be done with it.
  • Companies get certified in SEI-CMM (the Software Engineering Institute Capability Maturity Model) to get that government contract -- and then they quickly abandon or pay lip service to the CMM principles. The whole point of SEI-CMM is that you have to have a non-dilbert organizational structure in order to achieve "maturity" resulting in the organization's being "capable" of developing more stable code and of achieving more control over project costs and schedules. The irony is, the companies who need SEI-CMM certification the most, the big government and defense contractors, tend to be the same companies who foster immature corporate policies such as frequent mass layoffs, no training, illusory stock option programs, a culture of blame, and lousy HMO plans that don't cover anything.
  • I don't think anyone believed that, in a true, practical sense, any type of project in any industry could be quantified precisely. That said, many programmers/managers don't even take the common-sense approach to scheduling.

    First hint: collect reasonable metrics. Even if you can't estimate a project, at least make sure you have some data to go on for the next one - like defect rates, how much code is being generated or fixed per day, and so on.

    Secondly, get the programmers on the team to provide estimates at a tight resolution: how much they expect to get done, not in a month, but in a day. After a few days they'll start to understand how quickly they actually work, and their estimates will get better.

    Most of all, attach dollar amounts to things you do. Don't spend $1000 in engineering time to save $10 in computer time. Learn what resources are cheap and what resources are expensive.

    There are many other tidbits which are common sense to most working programmers, but it doesn't seem that anyone employs them.

  • Me: "Ah... I couldn't do it in less than 3 months captian."
    Boss: "I need in 2 at the latest!"
    (6 weeks later) Me: "It was rough, but I'm finished!"
    Boss: "You're a genius!"

    Actually, you tell him the real time, and he assumes you are padding and wants it twice as fast!

  • as bizarre as this might sound, we are about to try a new estimation method here involving dice!

    we practise extreme programming [extremeprogramming.org] (XP). To give you some context - we break software development down into 1 or 2 week iterations and break the work down into stories. These stories are written by people like myself, the customers, and are estimated by the developers/engineers. Estimates range from half a day to about 8 days of work. The benefit to the customer in XP is visibility. Doing these small chunks of work, we always have a snapshot of the project and can massage it into place if things are looking bad.

    so now to the point...
    we are still finding that estimates, even though in bite/byte-size chunks, can be inaccurate. what we are looking at making the programmers do is give us a high and a low estimate for the work. we then roll a die: roll a 1 and you take the lower limit; a 6, the upper; and so on in between.

    this may sound totally crazy, but - i dunno - i've seen people spend months analysing a work schedule and still be MILES off the mark in terms of an estimate.
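    (for the curious: the dice scheme above is easy to mechanize. a rough Python sketch - we've assumed the in-between rolls interpolate linearly between the two limits, since the post only pins down the 1 and the 6:)

```python
import random

def dice_estimate(low_days, high_days, roll=None):
    """Pick a point in the [low, high] estimate range from a d6.
    A 1 gives the low estimate, a 6 the high; in-between rolls
    interpolate linearly (our reading of the 'and so on' above)."""
    if roll is None:
        roll = random.randint(1, 6)  # roll the die for real
    fraction = (roll - 1) / 5.0      # 1 -> 0.0, 6 -> 1.0
    return low_days + fraction * (high_days - low_days)

# a story estimated at between 2 and 8 days:
print(dice_estimate(2, 8, roll=1))   # low end: 2.0
print(dice_estimate(2, 8, roll=6))   # high end: 8.0
```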

  • by Markee ( 72201 ) on Friday January 11, 2002 @12:50PM (#2823750)
    In the real world, any effort estimations are irrelevant anyway. I am sure everyone working in the business knows this situation:

    Project manager says: "We have to add line item X to the project. What's the effort estimate for that?"

    Me: "Twelve weeks."

    PM: "But we need it in three weeks."

    Me: "No way."

    PM: "We have to. Shoot for" (names target date in three weeks).

    Me: "Sure."

    The due date is fixed, and the software development effort is determined by the available time afterwards.
    • by dubl-u ( 51156 ) <2523987012&pota,to> on Friday January 11, 2002 @02:38PM (#2824618)
      And generally the way we accomplish something in impossible times is to cut corners. Sure, it works in three weeks, but the code is snarled, there is no documentation, and you took advantage of a security hole to make it go.

      Now of course you tell the manager, "If I spend three weeks on a temporary hack, I'm still going to have to spend another twelve weeks later doing it right."

      And they say, "Sure! As soon as this crisis is past."

      Of course, as soon as crisis A is done, crisis B is looming. And after B, then C, D, and E. So a lot of 'temporary' code gets written. Eventually, the project is just a big heap of steaming turds with some pretty contact paper covering most of the surface. And then the good programmers catch on and leave; the bad ones spend the rest of their lives sticking on more contact paper.

      And the manager, of course, has long since moved on; he met his deadlines, after all, so he must be a good manager. And the person who's now in charge of that group? Well he must be a bad manager, because his team has lots of bugs and never makes deadlines anymore.

      It's enough to make me cry.
  • *Half a year* after this article was published, this guy finally comes around to say "Yes, you *can* meet deadlines"?

    I suppose it'd have been more ironic if he immediately produced an article that agreed with the original.
  • I maintain that good programming is about 2/3 planning and about 1/3 development. So yeah, it's impossible to accurately schedule without a complete plan, because until you've done this, you don't know everything that needs to be written.

    With a bit of experience under your belt, you can approximate up front, but anything claiming to be more accurate than an order of magnitude is somebody blowing smoke.

    That said, an honest and honorable programmer will always do one of two things: (1) swallow his or her pride and give the high end of the above estimate, or (2) knock as much time off the high estimate as he or she is willing to compensate for by putting in the extra hours UP FRONT to deliver in the timeframe promised.

  • In my last shop we spent a lot of our time working on achieving CMM level 2, which was harder than it sounds for a bunch of hackers. As we were approaching our level 2 goal, Carnegie Mellon offered us their Personal Software Process class. It ties in closely with the rules and guidelines set by the CMM, but it focuses on your own abilities. By keeping metrics on time spent coding or time spent documenting, you can learn the rates at which you are able to work, and can apply that to estimates with reasonable accuracy.
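    The arithmetic behind that kind of PSP-style estimating is simple enough to sketch. A minimal illustration in Python (the function name and data shapes here are ours for illustration, not anything from the CMU course):

```python
def psp_estimate(planned_loc, history):
    """Turn a size estimate into a time estimate using your own
    historical productivity rate. `history` is a list of
    (actual_loc, actual_hours) pairs from past, finished work."""
    total_loc = sum(loc for loc, _ in history)
    total_hours = sum(hours for _, hours in history)
    rate = total_loc / total_hours  # personal LOC per hour
    return planned_loc / rate       # estimated hours for the new work

# two past tasks: 300 LOC in 30h and 500 LOC in 50h -> 10 LOC/hour
past_work = [(300, 30), (500, 50)]
print(psp_estimate(250, past_work))  # 250 LOC at 10 LOC/hour -> 25.0 hours
```

The real PSP goes further - separate rates per phase, regression on past estimates - but the core idea is the same: estimates anchored to your own measured rates rather than gut feel.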
  • The paper 'Large Limits to Software Estimation' intermingles program size and minimal program size. The latter cannot be computed within reasonable time.

    So, in general, the time necessary to program a piece of software of minimal size is unknowable, thanks to Kolmogorov complexity.

    As far as I know there are no specific K-complexity proofs about the (time/space) complexity of programs larger than minimal size.

    This invalidates the conclusions in the paper. So keep on creating and testing software metrics!
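    (For readers who want the formal statements being argued over, the standard facts about Kolmogorov complexity K can be written, in textbook notation rather than the paper's, roughly as:)

```latex
% Invariance theorem: for any two universal description languages U, V
% (say, Lisp and C++), complexities agree up to a constant that depends
% on the pair of languages but not on the string x:
\exists c_{U,V} \; \forall x : \quad \lvert K_U(x) - K_V(x) \rvert \le c_{U,V}

% Uncomputability: no algorithm computes minimal program size, i.e.
% there is no total computable f with f(x) = K(x) for all x.
```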

  • Any formal estimation method, if possible, even if only partial, would still require a formal description of the task, which is, in my experience, the first and foremost problem/art/craft in software engineering. Once the task is adequately defined, the remaining work is, by and large, downhill.

    Ok, the definition of "adequate" may kick off a few debates, especially with management... which, in my experience, is the central "problem" in software engineering. Management, that is.

    Hmmm...
  • by Reeses ( 5069 ) on Friday January 11, 2002 @12:53PM (#2823776)
    Every article I've read on this overlooks one thing that every programmer requires a small amount of.

    Creativity.

    It's something that's hard to measure. Sadly, programming is not like assembling a car, where the job can be broken down into infinitesimally smaller chunks, then added back together to get a whole.

    For example: it takes six seconds to put this screw in place, so we'll stop the assembly line for 8 seconds, then the car moves on regardless, under the assumption that the screw was inserted.

    Programming is not like that. I know I've stared for an hour at the screen trying to figure out why one line of code wasn't working.

    Or sat there for a while trying to figure out how to approach a problem before writing another line of code.

    Likening programming to a production line is not good. There's no way to know in advance how many lines of code there are going to be, nor how long each line is going to be. If you knew this, you could add up how long it would take the average person to key in the strokes, and there's your estimate. That doesn't work in software.

    For time usage, software needs to be compared to any other creative process, as opposed to a mechanical one. How long did it take da Vinci to paint the Mona Lisa? An hour? Two? Three days? Could he have guessed from the outset that it was going to take x amount of time? Probably not. He might have been able to give a ballpark based on how fast he'd painted similar things in the past, but he couldn't nail it down exactly.

    Now, granted, as you develop time and experience, your estimations get better. In addition, your time to completion gets better. (How long do you think it would have taken da Vinci to paint a _second_ Mona Lisa? A lot less time than the first one, because he's done one, and he remembers how he solved various problems, like how much of each color to mix to make a certain tone.) This is where talent and experience come in.

    But until software becomes similar to assembling Lego bricks (which it will, one day, and has in some places), then it's going to be hard to quantitatively determine how long a given project will take. And even if it becomes like Lego stacking, there's still going to be some fudge factor because how to solve the problem has to be solved before solving the problem.

    And sometimes you have to tear apart and start over because a brick is out of place, or it's just poorly designed.
  • by corvi42 ( 235814 ) on Friday January 11, 2002 @12:57PM (#2823798) Homepage Journal
    In my experience, the biggest snags in all time estimates have to do with the under-determination of what a project is and what it involves. Given any project F which has only F(x) parts to it, you usually have some rough intuitive estimate that there will be G( F(x) ) bugs to work out. Given that you are familiar with the type of project involved the estimations are generally fairly decent.

    The big problem is that in real-world applications, x is always changing. I have found that the culprit is usually one of several things:

    1) You're not as familiar with the project as you thought you were - or there are some aspects that are familiar, but the unfamiliar ones have ramifications you don't foresee because you're not familiar with them. This adds to both your estimations of F(x) and G(F(x)).

    2) Users are dumber than you thought. The difference in mindset between the user and the engineer is real and very significant. There are things that as an engineer ( especially one who is working closely to a piece of code for months on end ) you would never try to do with a particular application, and yet a user who has never seen it before will do out of ignorance or confusion or both. Just when you think you've made something idiot proof - they invent a bigger idiot. This throws off your estimates of G( F(x) ) because you have whole classes of bugs you never thought of as bugs before. Sometimes this requires reworking core components making estimates of F(x) go wrong.

    3) The client either doesn't know what (s)he wants, or doesn't know how to explain it, or even that it needs to be explained. This is the most frustrating of problems, and can be fatal to entire projects. Often clients don't think of software engineering like real engineering. One cannot ask an architect to redesign a building after it's already 3/4 built. But this has happened to me with software projects, and has even on occasion prompted me to quit a job in frustration. When this happens, all bets on estimates are off.

    Either that or I'm just really lousy at doing time estimates =)
  • by pubjames ( 468013 ) on Friday January 11, 2002 @12:58PM (#2823803)
    As someone who has to provide estimates to different clients for different types of jobs on a frequent basis, I have to say that I don't think it is as difficult as some people make out.

    The secret is to base your estimate on a detailed specification. Specify in detail, break down the big task into smaller ones, estimate for each smaller task, add up, add 10% for contingency.

    I think the problem is that too many estimates are made on the basis of poor specifications, then you get a shock when you discover a problem you haven't anticipated. So, my top tips:

    1) detailed spec agreed with client.
    2) breakdown into smaller tasks.
    3) estimate for smaller tasks.
    4) add up and add 10%.

    All this stuff about doubling etc. - what are you people like? If you have to do things like that then perhaps project estimation isn't something you should be doing...
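    For what it's worth, steps 2-4 of the recipe above reduce to a few lines. A sketch in Python (the task names and numbers are purely illustrative):

```python
def project_estimate(task_days, contingency=0.10):
    """Sum the per-task estimates from a detailed spec and add a
    flat contingency percentage on top (10% by default, as above)."""
    subtotal = sum(task_days.values())
    return round(subtotal * (1 + contingency), 2)

tasks = {
    "agree detailed spec with client": 3.0,
    "data model":                      2.0,
    "screens":                         4.0,
    "testing":                         1.0,
}
print(project_estimate(tasks))  # 10 days of tasks -> 11.0 with contingency
```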
    • Your method is certainly better than just doubling. However, one thing you haven't taken into account is that on large projects the detailed specification is a significant proportion of the work. Also, if the project lasts for many months, the specs invariably change during that time - sometimes a little, sometimes a lot.

      Don't get me wrong I'm not criticizing your method if it works for you. But there's no getting around the fact that for large (tens of person-years of effort or more) software projects - estimation is a tricky task.
    • If this actually works for you, you are very lucky.

      First, detailed specs aren't available in most cases. Clients want a solution, and they don't want to be bothered with gathering requirements, reviewing proposals, etc. They believe that's your job, not theirs. Even if you get a buy-off on a spec, it will be done without much thought and will change once they see your solution.

      Second, the estimates of the smaller tasks will be wrong. You will have to rely on people who are intimately familiar with the code to make the estimates, and some of them aren't good at estimation. If you try to do it all yourself it will take you years to even comprehend the existing source-code base.

      Third, sickness, turnover, etc., will cause unexpected delays in many cases.

      Fourth, unexpected problems---a bug in a vendor tool that has to be worked around, for example--- will cause further delays.
  • is PHBs who want the software done yesterday.

    I develop web applications in a small town. My boss comes to me and gives me the specs for some new project. I look them over and give him a quote, say 40 hours; he then proceeds to laugh and say that the client will never pay that much for the app. So we spend an afternoon looking at what we can cut - trying to reuse code, maybe taking out a feature or two here and there - and come up with half the quote (20 hours), which I tell my boss we can make unless problems arise.

    As with all development, problems arise: the client complains about feature X, stuff gets redone, the code ends up being a huge mess, and it usually takes one and a half times the original quote (60 hours). Yet my boss still doesn't figure it out. Why? Probably because his boss keeps breathing down his neck to cut development times as well.

    What's worse is when a salesperson or my boss talks to a client and gets them to agree on a list of features and the time it takes to develop before even consulting me. Last month a client wanted a content management system for a website, discussion forums, polls, etc. Because of certain features it couldn't just be downloaded, and I ended up just writing it. The client was charged 25 hours; it took closer to 80.

    Anyway, it's the PHBs that cause the problem.
  • by mrroot ( 543673 ) on Friday January 11, 2002 @01:09PM (#2823866)
    I figure out how much time it will take me to just sit down and do it without any interruptions.

    Then I multiply that by the number of DBA's I have to go through to have a table get created for me divided by two.

    Then I add to that the 10 times the number of project branches I need to request the PVCS administrator to create.

    Then I count up the number of consultants sitting within 50 feet of my desk and multiply by that number times 20.

    Then I multiply that number by the number of status reports I have to submit per week.

    Finally, I add to that the number of games of foosball I play per day on average * 10.

    That number is the final number of days it will take to complete the project.
  • By speeding up development, the estimation of the time it takes will become easier to get a grip on.

    I don't claim to be a programming language creator; instead, 3000+ languages in less than 50 or so years should be enough to figure out that the limitations of programming languages are not going to be solved by creating another one, but rather by making use of the various languages where they best fit, through an action set that enables the automation of language use.

    Comments from the LL1 article [slashdot.org]

    USPTO Article specific reference is here. [mindspring.com]
    Three Primary User Interfaces [mindspring.com]

    The need for speed and language barrier to break:" [slashdot.org]

    What's beyond the language barrier: [slashdot.org]

    What I have found odd about the Virtual Interaction Configuration, as I've attempted to explain it to others over the years, is that there is an extremely strong tendency to perceive it in terms of one's individual and specific mindset focus. I.e., if one is focused on Prolog, they perceive it as a Prolog function set, which causes problems in correctly understanding the actual general action set.

    It's possible that communication of the VIC to Carl Sassenrath triggered off the creation of what is now called REBOL [rebol.com]. And it's also very possible that SHEEP has gotten inspiration from the VIC as well.

    Noodle baking...

    SHEEP article [osnews.com]

    Another SHEEP article [amiga.com]
  • Plan for everything to go wrong, then revise against the stuff that goes right.
    You get a project and say, "This will be done in 5 years."
    In 3 months you're 50% done, so you say, "Good news, it should only take 2 years total."
    Then, when you're done in 6 months: "Great news, we came in 4.5 years ahead of schedule, and under budget! Where's my bonus?"

    You cannot solve without all the variables, and as long as there are people writing software, and people requesting software, there will always be unknowns.
    Of course, if there were one global class/function repository where everyone in the world could get a function/class in their language of choice, and it was open, time management would become very easy, and development times would drop.

    of course, this won't happen, or will it...
  • One of the things I've always noticed about the estimation of software projects is that very often there's no formal feedback loop. I've never personally experienced a project 'post-mortem' where the accuracy of the estimates was assessed. I've spoken to others who say, "Well, we had something a bit like that, but no one took it seriously; after all, by then the project is over."

    Surely if estimation is based on experience (and we know it is) then that experience needs to be recorded in some formal manner?
  • Is that while the "software engineering" profession (academics, members of the ACM and IEEE, and large, primarily software-oriented companies) would be ecstatic to find such a method, the non-IT companies and managers which employ *most* software developers would not.

    In other words, they will hear this as "close to all of our software projects will be within estimates if we follow method X."

    However, because of their own perceived business needs (which may even be correct to an extent; remember, just as we're the presumed software experts and should be given the benefit of the doubt as far as understanding software engineering principles, they are the presumed business experts and should be presumed to understand *their* business and markets), the likelihood of actually *rigorously* following method X gets considerably lower. This goes primarily to time-to-market considerations and changing requirements. Changing requirements are *inevitable*, particularly in initiatives where a non-IT company is trying to use technology to enhance its traditional business. Additionally, if we accept that a good understanding of the problem domain is one of the complexity factors that affect the likelihood of success of software projects, staff turnover and the loss of people within the IT infrastructure of the company who have a good understanding of the problem domain will also tend to have a negative effect on the predictive success of a methodology in such an environment.

    So when the inevitable failure occurs, the method (and by extension the profession) will still be perceived as unreliable. This will especially be the case if this is an early effort in the organization. The reaction of the business people is likely to be (intuitively, even if they realize the illogic of their interpretation of the statistics), "Hey, your method predicted an 80% success rate, but this is our second project, and it FAILED. That means we only got a 50% success rate. Your method sucks."

    Finally, even the criteria for evaluating the "successfulness" of a software project will differ between the sponsors of a project and its architects in this environment. By the evaluation of the software engineering industry, a project that was delivered on time, within budget, and with high quality, but too late for a market which changed underneath it, is a "success" according to the terms of the methodology; but to the business people who sponsored the project, it will likely be viewed as an unmitigated failure.

  • by tmoertel ( 38456 ) on Friday January 11, 2002 @01:37PM (#2824126) Homepage Journal
    The real limiting factor is not the estimating, itself, but knowing what to estimate. For example, imagine that you had the Holy Grail of estimating -- a perfect estimating function f. Given some work X to be performed, f(X) would tell you exactly how much effort the work would require -- perfectly, every time.

    Now, what good is f? On most software projects, f wouldn't be worth much. Why? Because nobody knows what X is. X is a specification of the work to be done (i.e., software requirements), and most such specifications are woefully incomplete, imprecise, and erroneous.

    That's why development processes that are repeatable and emphasize increased formalism allow for better estimates. They provide higher-quality X values, not to mention better approximations of f based on past performance. Therefore, if long-term estimates are important to your business, climb the formalism ladder.

    On the other hand, good long-term estimates are often unnecessary. Many business need only to know where the project is now and to be able to change directions with reasonable efficiency when business needs change or realities are better understood. Witness the effectiveness of so-called agile development processes in turbulent business environments.

    So, in the end, the only real lesson is to pick your software development (and estimating) process to support your business. Doing it the other way around usually doesn't work.

  • My critique (Score:4, Interesting)

    by The Pim ( 140414 ) on Friday January 11, 2002 @01:48PM (#2824222)
    Very nicely stated! I was going to publish my own response to "Large Limits", but I honestly decided that the paper was too "academic" (in the sense of, "interesting but irrelevant to the practical world") to be worth critiquing. But this is slashdot, and what better place for worthless thoughts, so here goes ...

    The glaring flaw of the paper is that the main argument can be applied equally to any human endeavor, not just to programming. The argument is essentially a rigorous version of the statement, "You can't (in general) know how hard (complex) it's going to be until you do it." The author supports this by pointing out that the purpose of any program is equivalent to generating a string that is a complete, precise description of the problem. Complexity theory tells us you can't predict the length of that program (without a formal system bigger than the program).

    But it's not hard to cast any problem into this form. Take baking a cake. The problem can be thought of as generating a precise description of how to turn some inputs into an output within the range of what we consider a cake. In a reductionist sense, that process is incredibly complex (much more so than any computer program), involving gazillions of elementary particles and their interactions. But nonetheless it's pretty easy to estimate how long it will take to bake a cake.

    Complexity theory shows us that complexity is indeed pervasive in general; but everyday experience shows us that it is usually encapsulated within simple abstractions. Most things we plan and do have relatively simple descriptions in terms of objects whose properties we are familiar with, and things we have done countless times before. So while estimating complexity may not be possible in general, it is usually not very hard for the things we care about.

In order for the paper to be persuasive, Lewis must show that computer programming is, in practice, more complex than most other activities--that new problems can't be easily stated in terms of already solved problems. (He does begin to address this, but only as a side-note.) I think most practitioners would essentially agree (and I'm not going to argue this, unless someone challenges it). What does this mean for the relevance of complexity theory? It's a deep and difficult question, but I suspect that some insights can be drawn. In particular, I do believe that there are problems that can't be estimated without effectively solving them.

Regardless, there are more obvious, intuitive reasons that complex activities are difficult to estimate. One is that humans vary wildly in their efficiency at complex tasks. We all know the experience of cracking nut after nut one day, and being stumped the next. Sometimes, to be sure, this is due to misestimation of difficulty, but just as surely it is often purely psychological. Another is that teams working on complex problems are prone to miscommunication and other group dysfunctions. A third is simply that the flesh is weak--we often lack the discipline and concentration to plan our projects in sufficient detail.

    And this list only considers the difficulties that derive from complexity. Software development faces a host of additional "accidental" challenges, such as bugs in third-party software, clients (and marketers) that change their minds, changing fashions in tools and methodologies, etc. In short, you don't need a fancy theory to conclude that predicting development time is quite hard!

  • by Alsee ( 515537 ) on Friday January 11, 2002 @02:28PM (#2824534) Homepage
Few people are familiar with the term "Kolmogorov complexity". It is basically the length of the shortest possible solution (sequence of symbols). Sometimes referred to as "algorithmic complexity". A basic theorem shows that, except for a constant term, the complexity of a problem is independent of what method or language is used to process the symbols. Except for a constant term, Lisp, C++, Basic, and Perl all yield the same complexity for any problem.

Lewis's argument is based on a mathematical proof that the Kolmogorov complexity is impossible to predict (without actually solving the problem).
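To make the uncomputability point concrete: the true Kolmogorov complexity of a string can never be computed, but a general-purpose compressor gives a crude upper bound on it. A toy sketch in Python (the bound is only as good as zlib's model of regularity; it is an illustration, not a complexity measure):

```python
import os
import zlib

def complexity_upper_bound(s: bytes) -> int:
    """Compressed length as a crude UPPER bound on Kolmogorov complexity.

    The true Kolmogorov complexity is uncomputable, which is the core of
    Lewis's argument; a compressor only ever bounds it from above.
    """
    return len(zlib.compress(s, 9))

regular = b"ab" * 1000    # a very short description exists
noisy = os.urandom(2000)  # almost certainly incompressible

print(complexity_upper_bound(regular))  # small
print(complexity_upper_bound(noisy))    # close to 2000
```

The gap between the two outputs is the whole point: two strings of the same length can sit at opposite ends of the complexity scale, and no fixed procedure can tell you in advance which end a given problem's shortest solution is on.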

    One objection was that for some "Kolmogorov simple" problems it may take a human a long time to find the short solution, and that for some "Kolmogorov complex" problems the long solution may be obvious to a human.

    It got me thinking. If we fudge the definitions a bit, Kolmogorov complexity still applies. "Thinking" is just another method or language for processing symbols (thoughts). So the Kolmogorov complexity is the length of the shortest sequence of thoughts required to solve a problem. In the general case it is impossible to predict the length of the shortest sequence without actually solving the problem.

    -
  • From the analysis:
    Lewis claims there simply cannot be any objective method for arriving at a good estimate of the complexity of a software development task prior to completing the task. He uses "objective" to mean a formal, mechanical method that does not rely on human intuition.

    Okay, so Lewis doesn't conclude that good estimation isn't possible. He simply says that it's always going to require human intuition. So software engineers can't easily be replaced by some good AI in an app or by a robot. Big deal. Many critical tasks in many professions fit this definition. Doctors, lawyers, chefs, investment managers, etc. The best ones often distinguish themselves with intuition.
  • by jgore26785 ( 460027 ) on Friday January 11, 2002 @03:23PM (#2824927)
    To tell you the truth, I would tell customers/superiors that I can give them very accurate software estimates as long as they don't change project parameters on me after I start.

This whole estimation thing assumes that the project parameters do not change during development, which I have never come close to seeing happen on any of the projects I've been exposed to. Ahh.. to be able to work on a project with a fixed set of parameters..

    There are the changes that people can never seem to stop making during product development, and they originate from: marketing, sales, superiors, customers, warehouses and factories, just to name a few.

    Of course, there are also the factors that you can't predict ahead of time (and consequently, cannot quantify besides adding a qualitative factor) such as changing: product costs, product availability, product specifications, competition, benchmarking, and tool quality/availability.

The best thing I've found is to keep software simple, sweet and very amenable to changing design and specifications. Software estimation is very much an intuitive feel based on past experience; there are also certain characteristics that you know will throw uncertainty into the schedule. For example, not only do I give my superiors at work a "time estimation", but I also give them an "uncertainty" or "risk factor" that tells them how close I feel my time estimation to be. They can learn a lot when you tell them "4 weeks give or take a couple of days" or "4 weeks if it's feasible to do at all".
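The "estimate plus risk factor" idea can be made concrete with a classic three-point (PERT-style) estimate; this sketch uses the textbook weighting, not anything specific to the comment above, and the sample numbers are made up:

```python
def pert_estimate(optimistic: float, likely: float, pessimistic: float):
    """Classic PERT: a weighted mean and a rough standard deviation.

    Returns (expected_duration, std_dev), both in the same units
    as the inputs (e.g. working days).
    """
    expected = (optimistic + 4 * likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# "4 weeks give or take a couple of days" as a three-point estimate:
exp, sd = pert_estimate(18, 20, 26)  # working days
print(f"{exp:.1f} days +/- {sd:.1f}")  # prints "20.7 days +/- 1.3"
```

The standard deviation is the "risk factor" in numeric form: a wide optimistic/pessimistic spread tells management, in one number, how little confidence to put in the mean.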
  • by epine ( 68316 ) on Friday January 11, 2002 @04:54PM (#2825635)

    I've been playing around with the bitkeeper source control system for the last week. After reading this article I suddenly recalled that bitkeeper treats 2-way merge and 3-way merge as entirely separate features. N-way is not even discussed.

In some ways N-way is merely a simple generalization of 2-way. The algorithmic complexity is not much different. The problem here is human scale. Humans cope well with two-way merge as a daily activity, cope with 3-way merge at the level of focus required by air-traffic control, and don't cope with 4-way merge under any sane circumstance.

    Bitkeeper solves the problem by designing the architecture so that merges can be performed hierarchically. This is a feature that CVS sorely lacks.
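The 2-way/3-way distinction can be illustrated with a toy line-based 3-way merge. This sketch assumes, unrealistically, that all three versions have the same number of lines; real tools like diff3 and bitkeeper align unequal-length files first:

```python
def merge3(base, ours, theirs):
    """Toy 3-way merge over equal-length line lists.

    For each line: if only one side changed it, take that change;
    if both sides agree, take it; otherwise it's a conflict.
    """
    merged = []
    for b, o, t in zip(base, ours, theirs):
        if o == t:
            merged.append(o)  # agreement (or both unchanged)
        elif o == b:
            merged.append(t)  # only "theirs" changed this line
        elif t == b:
            merged.append(o)  # only "ours" changed this line
        else:
            merged.append(f"<<< CONFLICT: {o!r} vs {t!r} >>>")
    return merged

print(merge3(["a", "b", "c"], ["a", "B", "c"], ["a", "b", "C"]))
# prints ['a', 'B', 'C']
```

Notice what the base buys you: with only "ours" and "theirs" (2-way), every differing line is a conflict a human must resolve; the base lets the machine resolve all the one-sided changes automatically, which is why the two features really are different in kind.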

Everyone knows that the success of projects is in large measure determined by whether the architecture obviates the need to delve into N-way hell.

I also recall a project where a database supported two processes which concurrently updated the same dataset. During the design process we found a way to define the system so that each process was permitted to update a distinct set of columns, with maybe a column or two where one process was allowed to set a value and the other process was allowed to clear the value. Months of potential development effort slashed at the stroke of a pen. The first design dealt with the concurrency problem in a different way. Getting everyone to respect the subtle rules everywhere, as that design required, just doesn't happen on most projects.
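The column-partitioning design described above can be sketched as a simple ownership guard; the process names, columns, and table name are all made up for illustration:

```python
# Hypothetical column ownership for two concurrent updaters of one table.
OWNERSHIP = {
    "loader":   {"raw_value", "loaded_at"},
    "enricher": {"score", "enriched_at"},
}

def build_update(process: str, changes: dict) -> str:
    """Refuse any UPDATE that touches columns the process doesn't own,
    so the two processes can never clobber each other's writes."""
    owned = OWNERSHIP[process]
    illegal = set(changes) - owned
    if illegal:
        raise PermissionError(f"{process} may not write {sorted(illegal)}")
    cols = ", ".join(f"{c} = %s" for c in sorted(changes))
    return f"UPDATE dataset SET {cols}"

print(build_update("loader", {"raw_value": 1}))
# prints "UPDATE dataset SET raw_value = %s"
```

The point is that the rule is enforced in one place by the architecture, instead of relying on every programmer remembering a subtle convention at every call site.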

The best book on the subject is The Psychology of Everyday Things [jnd.org]

    What people tend to forget is that nuances of a software design create affordances with respect to the coding effort. When the pressure is on, people tend to grab onto the nearest handle. The handles hidden in the design have a momentous impact.

    Some of the most important affordances are second order effects [slashdot.org].

The C++ language is often criticized for having a model of class protection which is easily violated. Yes, that's true as a first order criticism. However, C++ makes it fairly easy to figure out a way to manipulate the source code to find all the violations if you decide to look. These manipulations might be a temporary modification for the sole purpose of determining whether a certain kind of integrity exists. The C++ community doesn't lose any sleep over the first order weakness of the class protection model. We all know that violators are playing a dangerous game.

    On the other hand, there are certain kinds of abuse in the C language where it's practically impossible to turn up the smoking gun short of a complete source code audit.

    The difference is not that C++ prevents programmers from abusing abstractions, but that it provides the necessary affordances to catch the people who do. The importance of these second order effects is vastly underestimated by those who plan.

You can see the extent of the problem wherever mouthy mights thrive. You know the people who always shout "it might happen" when the downside of anything they oppose is mentioned, as if might were an adverb of quantity. The implicit logic is that only a first order guarantee is sufficient, yet experience shows what everyone already knows: second order affordances generally suffice.

    My experience is that projects are a morass of non-quantifiable psychology, experience, and intuition. The second order effects are never discussed on paper. It's left up to the cohesion of the team to impose the second order effects that make the first order effects possible.

    It would be far more useful for the estimation people to spend their time figuring out the conditions under which a project becomes non-viable. Offer the programmers some kind of handle for coming back to management with their concerns about faulty second order effects, in language a whole lot less vague than what I'm using.

Wouldn't it be a fine start just to be able to limit ourselves to projects where the outcome is somewhat proportional to the effort expended? If we had proportionality already, the kind of estimation we have now would be a second order concern in its own right, rather than a masturbatory mission impossible.

Scotty: You didn't actually tell him how long it would really take, didja lad?
LaForge: Of course.
Scotty: You never tell them how long it will really take. Captains are like babies, they want everything right now.
LaForge: But isn't that wrong?
Scotty: How else do we get the reputation of being miracle workers?

    (With apologies to Star Trek)

  • People forget delays, but they will always remember failures. It's human nature. Do you remember how long it took for Apple to get OS X out? Chances are, you don't. Do you remember Apple's pre-1997 "next generation OS", Copland? Utter failure.

    There are tons of other examples.

    Cryptnotic
