
Can Software Schedules Be Estimated?

Posted by timothy
from the now-stop-abusing-the-mozilla-team dept.
J.P.Lewis writes " Is programming like manufacturing, or like physics? We sometimes hear of enormous software projects that are canceled after running years behind schedule. On the other hand, there are software engineering methodologies (inspired by similar methodologies in manufacturing) that claim (or hint at) objective estimation of project complexity and development schedules. With objective schedule estimates, projects should never run late. Are these failed software projects not using proper software engineering, or is there a deeper problem?" Read on for one man's well-argued answer, which casts doubt on most software-delivery predictions, and hits on a few of the famous latecomers.

"A recent academic paper, Large Limits to Software Estimation (ACM Software Engineering Notes 26, no. 4, 2001), shows how software estimation can be interpreted in algorithmic (Kolmogorov) complexity terms. An algorithmic-complexity variant of mathematical (Gödel) incompleteness can then easily be interpreted as showing that all claims of purely objective estimation of project complexity, development time, and programmer productivity are incorrect. Software development is like physics: there is no objective way to know how long a program will take to develop."

Lewis also provides a link to this "introduction to incompleteness (a fun subject in itself) and other background material for the paper."

  • Sure they can... (Score:3, Interesting)

    by Mike Schiraldi (18296) on Monday November 05, 2001 @10:10AM (#2522319) Homepage Journal
    As they say, the first 95% of a software project takes 95% of the time.

    And the remaining 5% of the project takes another 95% of the time.
  • by Organic_Info (208739) on Monday November 05, 2001 @10:16AM (#2522357)
    "1) As long as it takes."

    As long as it takes to get it right. To a point, this is a barrier open-source software largely avoids, since development simply continues until it is no longer needed.

    An interesting question is: when will Linux/*BSD development stop? Will it be surpassed by other projects, or evolve toward perfection?
  • Right place to Ask (Score:2, Interesting)

    by maroberts (15852) on Monday November 05, 2001 @10:23AM (#2522400) Homepage Journal
    Like any public forum, Slashdot has a wide range of readers, a large number of whom actually work in the software engineering field [myself included].

    Anyway, my personal theory, based on blind idealism, is that it is extremely difficult to get a completion estimate right; short-term goals are fairly easy to predict, because you have most of the information you need to make those predictions, but longer-term estimates are much more of a wild guess. I personally think it's a consequence of chaos theory - a butterfly flutters its wings in Brazil and your software project instantly takes another two years! More seriously, small errors in estimating components of a large project can induce large errors in estimating the time and resources needed to complete the whole project.

    Linux is right with its "release when ready" motto. Since it is impossible to tell when it will be ready over such a wide range of groups and interests, you have to pick your release moments when they happen, not try and force them to happen.
  • by Smallest (26153) on Monday November 05, 2001 @10:27AM (#2522417)
    where'd you get that idea?

    i've always thought most /. readers are programmers and IT people who come here to kill time at work.

    -c
  • by KyleCordes (10679) on Monday November 05, 2001 @10:38AM (#2522507) Homepage
    My firm also does some work on a fixed-cost basis, with similar good results. I also borrow many ideas from XP.

    A key to fixed-cost is that it takes practice. Try it on a small scale before you commit to it on a larger scale, to avoid large-scale failure...
  • by markmoss (301064) on Monday November 05, 2001 @10:42AM (#2522534)
    To accurately plan a software release you must have the project, and all its complexities and nuances, down COLD. Otherwise you are not giving an estimate, you are giving a guess based on incomplete knowledge.

    The bulk of the work of programming consists of getting all the complexities and nuances down cold. Once you really and completely understand what is required, coding is trivial.

    This leads to a thoroughly unrealistic method of estimating software costs:
    1) Work for months on the specs.
    2) Get the customer to sign on to those incredibly detailed specs, even though he doesn't understand them.
    3) Go and code it, no spec changes allowed.

    8-)

    The article mainly talks about the mathematics of estimating complexity. This is a lot like the proof that you cannot determine when or whether a computer program will end -- it's true for pathological programs, but it has little relevance for the real world. You try to write the code so the conditions for the program to end are clear. If it gets into an endless loop, you probably got a conditional expression backwards and you'll recognize it immediately once you figure out which loop is running endlessly... Likewise, there may be well-defined specifications for which it is impossible to estimate the coding time, but the usual problem is poorly-defined specs, which obviously makes any estimate a guess.
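    The point about backwards conditionals is easy to see in a toy loop (a purely hypothetical illustration, not code from the article):

```python
def count_up(n):
    """Sum the integers 0..n-1 with an explicit loop."""
    i = 0
    total = 0
    # Correct condition: i < n. Write it backwards (i > n) and the body
    # never runs; forget the increment and the loop never ends. Either
    # way, once you see which loop misbehaves, the fix is immediate.
    while i < n:
        total += i
        i += 1
    return total

print(count_up(5))  # 10
```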
  • by gorilla (36491) on Monday November 05, 2001 @10:48AM (#2522581)
    I'd have to agree with this. There are two major problems, the first being that the users don't really know what they want and the second being that almost always, the problems being solved are new problems, and therefore it's difficult to know what solution will best solve the problem.
  • by Christopher Bibbs (14) on Monday November 05, 2001 @10:50AM (#2522589) Homepage Journal
    I'm not saying the majority of Slashdot readers are professional developers, but don't judge the readership on the first-posters.

    That aside, in my experience in software development (only 3 years), ballparking (1-3 days, 1-3 weeks, 1-3 months) is usually possible, but estimates tend to become wildly inaccurate beyond a few months. Regardless of what method we use to determine timelines, some things always seem to slip, while others take a fraction of the expected time.
  • by CaptainSuperBoy (17170) on Monday November 05, 2001 @10:53AM (#2522612) Homepage Journal
    the software industry hires anyone, and lets them get on with whatever they do, with no real management or oversight or planning.

    The software industry doesn't hire anyone. Software companies hire people, and a company that behaves like you described won't be around for long if software is their main source of revenue.

    Also, management != good software engineering. Planning != good software engineering. These are all factors that go into a good software project but people shouldn't think that if they draw class diagrams before they start coding, they're suddenly software engineering.

    On the other hand, you need to look at what's best for the project - it isn't always a large, formal approach to software, especially for small projects. Being too rigid can be as bad as being too loose with your design. I've seen projects design themselves into a corner before the first line of code is even written.
  • by LazyDawg (519783) <lazydawgNO@SPAMhotmail.com> on Monday November 05, 2001 @11:01AM (#2522664) Homepage
    Assembling software from reusable pieces requires three things that most software companies don't typically have:

    1. Discipline. Your average programmer will have read about various programming methodologies, but skipped past the parts that would make their code an easy-to-reuse template, in favor of faster development time. As with any gamble, you should know at exactly what point you want to quit, have an A-line for version 1.0's feature set, all that jazz.

    2. A big code base. Because of step 1, or maybe just a lack of previous projects, one's code base is typically limited to what you can find in a computer science textbook. Having a good database of classes and patterns that have turned out to be useful, and having easy access to this database for the information you need is the difference between a library and a code base.

    3. Incremental development. Throwing together a large software project, all at once, and then testing the whole thing is very tempting, and happens more often than most people like to admit. What should be happening is a series of incremental integrations into the final product, with unit tests of each part. Otherwise your large project can become a giant, complex nightmare. Making complex software shouldn't be made quite so complicated.

    While making a "software assembly line" takes slightly more work and trouble than your average car assembly line, it has incredible cost savings in the long run.
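    The incremental, unit-tested integration in point 3 is what frameworks like Python's built-in unittest support. A minimal sketch, with a hypothetical parse_price component tested on its own before it is integrated into anything larger:

```python
import unittest

def parse_price(text):
    """Hypothetical component: parse a price like '$1,234.56' into cents."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents or 0)

class ParsePriceTest(unittest.TestCase):
    # Each small unit is tested as it is integrated, so a failure
    # points at one component instead of the whole assembled product.
    def test_plain(self):
        self.assertEqual(parse_price("$12.50"), 1250)

    def test_thousands_separator(self):
        self.assertEqual(parse_price("$1,234.56"), 123456)

if __name__ == "__main__":
    unittest.main(exit=False)
```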
  • Politics (Score:2, Interesting)

    by a42 (136563) on Monday November 05, 2001 @11:15AM (#2522744)
    I work in corporate IT doing software development and implementations. The project that I'm currently working on started in July 2000. The system was scheduled to be fully operational on November 30th, 2000. In addition to missing that date, we've also missed dates in December 2000 and in January, March, June, September, and October 2001. We may or may not actually make the December 2001 date that we've currently been given. Each and every one of those missed dates was chosen for political reasons. The question has always been "When does this business need this system?" and not "How long will it take?"

    After the October date was missed there was a meeting of all the Project Managers, Program Managers, Subject Matter Experts, and other people involved in the project. They worked round the clock for nearly 3 days to come up with a revised project plan and estimate of how long it would take to finish the system, test it, and bring it online. The number they came up with moved the date into late January. Executive management didn't like this and decided that the new date would be mid-November. Their plan for squeezing out these extra two months? Just make everyone work harder. Needless to say we've got a group that is completely burnt out and getting less done in more time. Nifty.

    As long as "suits" continue to make schedules based on business needs (read "corporate politics") and not based on the complexity of the problem this is going to continue to happen.

    --john

  • by StrawberryFrog (67065) on Monday November 05, 2001 @11:19AM (#2522771) Homepage Journal
    Where did you get the magic number, oh sage of the ivory tower? Well, we just made it up -- it seems to work.

    There is nothing wrong in principle with measuring what has happened in the past, and using that to predict what will happen in the future, before you discover why it works like that.

    For instance, you might measure that, throughout the year, the average time between sunrises is 24 hours. You can use that number even though the only explanation you have for it is "it seems to work."

    Of course, when you apply this to software development time estimation, it falls down for a number of reasons. It's not constant across technologies. It's not constant across types of project. It doesn't take into account the variation in technological risks (i.e. if you have done something like this before, you will spend less time finding ways to do stuff). It doesn't scale linearly with the size of the project. It varies across individuals. Etc., etc.
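    Measured-past extrapolation, in code, is just this (hypothetical numbers; every caveat above is a way the constant-rate assumption breaks):

```python
def estimate_days(remaining_tasks, completed_tasks, elapsed_days):
    """Naive historical extrapolation: assume future tasks will take
    the same average time as past ones. New technology, nonlinear
    scaling, and individual variation all violate that assumption."""
    days_per_task = elapsed_days / completed_tasks
    return remaining_tasks * days_per_task

# 30 tasks finished in 60 days -> 2 days/task -> 40 remaining ~ 80 days
print(estimate_days(40, 30, 60))  # 80.0
```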

  • Unknown? (Score:2, Interesting)

    by King Of Chat (469438) <fecking_address@hotmail.com> on Monday November 05, 2001 @11:25AM (#2522814) Homepage Journal
    With software, the first part of that expression tends towards zero, since for most things we already know how to do we can reuse code, whereas with building it remains a large, accurate estimate.

    I thought that's where GoF patterns could help. When I've been asked to explain design patterns to PHBs, the analogy I've always used is structural engineering - e.g. for a bridge, we could have box girder, suspension, cantilever, etc. Design patterns are just like that.

    Of course, in the real world, this is only a partial solution. Over 90% of software project failures are down to requirements. If we could get those right, then software development could indeed be a "proper" engineering discipline. The only place it is, though, is where people are prepared to pay what it takes to get it right - flight control systems, etc. IIRC, one of the few groups to have achieved SEI CMM [cmu.edu] level 5 is the lot who develop the Space Shuttle software. At the last count, their code was costing them over $1m a line. How many people would put up with what that would do to the cost of their text editor?
  • Components (Score:4, Interesting)

    by wytcld (179112) on Monday November 05, 2001 @11:28AM (#2522831) Homepage
    Very large and complex projects do get completed, sometimes even on-time/on-budget. Examples include skyscrapers, nuclear submarines, aircraft carriers, power plants (whether conventional or nuclear), oil refineries, B-747/A-320, etc. And all of these systems nowadays have a software component as well.

    Yes, but. The important components of a skyscraper are steel beams. Put them up correctly, after calculating loads and stresses, and it doesn't matter what the twenty tons of stuff you have sitting on the 27th floor is. It doesn't matter if the beams come from different foundries, either, because the specs are clear enough (dimensions, strength, where the bolt holes are).

    Now try putting together a typically complex business software solution, meshing a bunch of different, reasonably good, existing programs and components with some custom code and configuration. Even where there are reasonably good standards spec'd in some areas of the project, if you're not solving new problems it shouldn't be a software engineering project at all - it should just be system administration using the available solutions. That it's real software engineering means you're running into unpredictable surprises where the components at hand don't fit without a great deal of extra labor.

    A parallel can be found in work on the portions of the New York City infrastructure that are under the streets: we still have wooden water mains in some places from the mid-1800s, mixed with gas, electric, and steam lines, sewers, and subways ... most of which was not documented to current standards at either installation or during subsequent changes, despite most of it being reasonably well done by the standards of its time (pretty amazing that those wooden water mains are still working, right?).

    So what happens when we finally go in to improve one of the services - say, lay new water mains? Other stuff is found that's in the way where you didn't expect it, or that needs fixing on examination when you didn't expect it. Meanwhile you've got the street ripped up, but you have to cap it again quickly or traffic is too snarled for too long. So a single block's 4-week project can stretch out for over a year - dig up the street, fix one problem, discover more, recap while designing and provisioning the next stage, repeat - because it's all stuff that needs to be done once you get into it, and that can't be properly assessed until you get into it.

    Well, software in the real world isn't as old as New York, but if anything it's more complex, and the layers of crufty stuff that have to be accommodated in current projects are as considerable, and often as poorly documented by current standards (which will always advance so as to obsolete whatever we do now). Building a skyscraper, by contrast, is just a sysadmin job. Put the beams and bolts in the normal places, and it stands.

  • Theory vs Reality (Score:2, Interesting)

    by _J_ (30559) <jasonlives&gmail,com> on Monday November 05, 2001 @11:30AM (#2522839) Journal
    I've just finished working on an IBM RAC (Rapid Application Center) project. It was filled with elements similar to extreme programming: it used function counts, and it seriously defined scope and development cycles. I've developed some ideas about the method, in both its theory and its practice.

    In Theory:
    - All your resources are available to you when you need them for the length of time you need them.
    - The client is with you all the time so that they are available to comment on the direction development is going.
    - An enormous amount of time is spent in analysis to make sure the project goes in the right direction.
    - Every task is estimated and ranked and put into a timed, development iteration schedule. If time runs short for a specific iteration then lower ranked features are "descoped."
    The idea is that you have a fixed budget and a fixed end date and that based on these the one degree of freedom is the scope of the project. Therefore if anything changes it is the number of features.
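    That one degree of freedom can be sketched as a greedy cut over ranked features (the feature names, ranks, and estimates below are made up; this shows the shape of the rule, not IBM's actual procedure):

```python
def descope(features, available_days):
    """features: (name, rank, estimated_days) tuples, rank 1 = most
    important. Keep features in rank order until the fixed time budget
    is spent; everything that doesn't fit is descoped."""
    kept, spent = [], 0.0
    for name, rank, days in sorted(features, key=lambda f: f[1]):
        if spent + days <= available_days:
            kept.append(name)
            spent += days
    return kept

features = [("reports", 3, 10), ("login", 1, 5), ("search", 2, 8)]
print(descope(features, 15))  # ['login', 'search']
```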

    In Practice the theory is adhered to closely but other factors enter into the project like:
    - Scope Creep. This involves features that were ranked lower in the requirements and descoped, or that weren't caught by the requirements process at all, but turn out to be necessary for the end product to be useful.
    - Requirements Interpretation. They were nailed down, or so we thought.
    - Budget. If the estimate comes in for 4 developers and a lead for 3 months but the budget only allows for 2 developers and a lead then there's an issue.
    - Resources. If the client can't or won't provide the resources you need to extract the inputs you need from other systems then your schedule will be thrown for a loop.
    - Client Participation. Asking for 100% of your client's time on the project is an enormous request, and not always doable.

    How could it have been improved?
    - The client could have provided the resources we needed. We were extracting information from some host databases and had a hard time figuring out what fields, rows and tables we needed.
    - Our BAs could have done a more thorough job on the requirements. There were things that were missed or weren't defined accurately enough. We developed integer benchmark times when two decimal places were required.
    - Our client could have sat with us to make sure what we were doing was what he wanted (which was what was originally agreed to). Nothing quite like having the client say that a particular feature was not quite what he wanted.
    - Us developers? Well, there are always things that could have been done quicker in hindsight. I did some JavaScript that - in retrospect - could have been a hell of a lot more efficient. I aim to correct that when I get a moment.
    - The function estimates were off and that caused some late nights and freaking out. It really is an art form.

    Overall, the model was nice, but our lack of adherence to it caused us unnecessary grief. While the client got a product he could use, the process would have been more satisfactory and less painful if we hadn't strayed.

    The lesson is that theory is all fine and dandy but it doesn't work if you don't follow it.

    IMHO, as per

    J:)
  • by pberry (2549) <pberry&mac,com> on Monday November 05, 2001 @11:33AM (#2522862) Homepage
    As long as your definition of what you are doing is sane. Everyone who hasn't read Joel Spolsky's essays on software development should - not to follow like sheep, but merely to gain perspective and see if any of what he says works for you.

    Painless Software Schedules [joelonsoftware.com] is a great one and you will get sucked in just following the links from this one essay.

  • Reductio (Score:3, Interesting)

    by hey! (33014) on Monday November 05, 2001 @11:42AM (#2522900) Homepage Journal
    OK, what I take from this is that you are positing a method by which software project times can be predicted accurately. Suppose we had such a method. Since it takes inputs and produces outputs, it can be described as an algorithm. Since it is an algorithm, it can be represented as a software program which predicts completion times. So far so good.

    Next, get together a team of programmers. Set them to work on a program which proves {insert your favorite unsolved mathematical conjecture here}. It turns out you don't actually need the team at all: just run your software project estimator, and if it comes out with a finite amount of time to complete the program, you know that the conjecture is true.

    In other words your software estimator can be used to solve the halting problem.
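    Written out as (deliberately unimplementable) code, the reduction looks roughly like this; perfect_estimator is the hypothetical oracle the joke assumes:

```python
def perfect_estimator(project_description):
    """Hypothetical oracle: returns the exact number of days any
    described project takes, or infinity if it can never finish.
    The argument above is precisely that this function cannot exist."""
    raise NotImplementedError("would decide the halting problem")

def conjecture_is_provable(search_program_source):
    """Reduction: describe the project 'run this proof search to
    completion'. A finite estimate means the search halts, i.e. a
    proof exists; an infinite one means it never will."""
    days = perfect_estimator("run " + search_program_source + " to completion")
    return days != float("inf")
```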

    OK, this is a joke, but it points at something about the question. I once had a CS professor who required that we write requirements statements for all of our assignments. She forbade us to include halting times, because "you can't predict whether a program will halt or not." To which I wanted to reply, "About that 'hello, world' assignment..."

    The lesson is that there are some cases to which a rule like this applies and others to which it does not. There are some projects that can be estimated with simple tools, some that can be estimated with complex tools, and some that are not practical to estimate at all. Even fairly seat-of-the-pants estimates work pretty well on relatively simple problems, provided you break things down a bit and do an honest estimate of the costs of the individual deliverables and the individual functions you know you'll need to make them work. About the only methods that never work are pulling a number out of the air based on how much the project scares you, or using wishful thinking (whether the source is your boss or you). Nobody can give good estimates when you spring the question on them with no time to prepare. My boss's favorite (and my least favorite) questions start with "How hard would it be..." and my favorite (and his least favorite) answers start with "It depends..."

    Nonetheless, my experience with past projects of the kind that I do means I can do a pretty good job with relatively unscientific tools, provided the problem is like one I've solved before. However if you are writing software for space flight or some other kind of highly complex mission, I could estimate until I was blue in the face and it wouldn't be worth a damn. You want to hire somebody with experience in such projects and who has methods of estimation well calibrated from similar past projects.

    I think the particularly difficult cases are ones involving software maintenance - extending software to perform things that weren't originally factored into the design, or adapting it to run when the systems it depends upon change in some unpredictable way. These are cases where surprises can throw the best-laid estimates well off.
  • by MarkusQ (450076) on Monday November 05, 2001 @11:42AM (#2522902) Journal
    The huge gotcha, that IMHO makes most if not all schedules fantasy, is that people talk about how long it will take to finish coding when what they are really interested in is the time it will take to have the code finished and debugged. Of course, the time it takes to have debugged code depends on things like:

    * A tester or test suite exhibiting the bug

    * Someone recognizing that it is a bug

    * Enough data being gathered to define the bug ("It hangs sometimes" or "I don't think the results are always correct" doesn't cut it).

    * Enough eyeball hours to find the bug (this in itself makes the process equivalent to solving a crime. Do we ask the cops to schedule crime solving?)

    * About two minutes (average) to devise and implement a fix

    This has to be done for N bugs, where N is unknown. People who think you can estimate software development schedules with any accuracy are either dreaming or assuming that they just have to estimate how long it will take to get it coded, not how long it will take to get it working correctly.
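    A toy Monte Carlo model makes the point concrete: with N unknown and the find-time per bug highly variable, the two-minute fixes are noise and the search dominates the schedule. All the distributions below are invented for illustration:

```python
import random

def simulate_debug_hours(trials=10_000, seed=1):
    """Toy model: an unknown N (20-200 bugs), each taking 0.5-16
    'eyeball hours' to find and ~2 minutes to fix. Returns the 10th,
    50th, and 90th percentiles of total debugging hours."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        n_bugs = rng.randint(20, 200)
        find_hours = sum(rng.uniform(0.5, 16.0) for _ in range(n_bugs))
        totals.append(find_hours + n_bugs * (2 / 60))  # two-minute fixes
    totals.sort()
    return totals[trials // 10], totals[trials // 2], totals[(9 * trials) // 10]

p10, p50, p90 = simulate_debug_hours()
print(f"debug hours, 10th/50th/90th percentile: {p10:.0f} / {p50:.0f} / {p90:.0f}")
```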

    -- MarkusQ

  • by jpeters77 (259155) on Monday November 05, 2001 @11:43AM (#2522904)
    But not a "hack" job. I've got a mere 7 years experience in the software world stemming from a traditional engineering background and I've seen projects that were "on time" and projects that failed miserably. The problem is EXACTLY the same as any other engineering problem if you choose to look at it that way.

    OK, I'm going to dive into the classic analogy to traditional engineering: the bridge project. Nobody ever answers the question "How long will it take to build a bridge?" right off the bat. Every aspect of the project is scrutinized and estimated separately. In other words, to build a bridge we need to do A, B, C, ... etc. Each task is then estimated, along with its dependencies on other tasks and how an overrunning task affects other tasks (PERT and Gantt charts are dreamy for this). In the end you come up with a damn accurate estimate of how long it's going to take, along with heuristics that describe which external factors can make a difference and how big that difference will be.
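    The per-task step is classically done with PERT three-point estimates: expected = (optimistic + 4*likely + pessimistic) / 6, with task variances summed along the critical path. A sketch with invented task numbers:

```python
import math

def pert(optimistic, likely, pessimistic):
    """Classic PERT three-point estimate: (expected days, std dev)."""
    expected = (optimistic + 4 * likely + pessimistic) / 6
    stdev = (pessimistic - optimistic) / 6
    return expected, stdev

# Hypothetical serial tasks on the critical path, in days.
tasks = [(2, 4, 9), (5, 8, 20), (1, 2, 4)]
total = sum(pert(*t)[0] for t in tasks)
sigma = math.sqrt(sum(pert(*t)[1] ** 2 for t in tasks))
print(f"expected {total:.1f} days, +/- {sigma:.1f} days (1 sigma)")
```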

    Now, back to the software arena. There is a big difference between a software developer and a software engineer. Software developers "hack" or piece together code that works, but there's been no real analysis done to support it (my definition - feel free to argue). Software developers are comparable to general construction contractors. For example, a contractor may build a deck without much analysis (i.e. how will it behave in an earthquake; what is its failure temperature; etc.), but a major structure (like a bridge) requires an in-depth analysis.

    A software engineer, on the other hand, follows a much more rigorous analysis and design technique that can be used to estimate the overall time a project will take. To do this, one doesn't estimate how long it will take to build the entire project. Rather, one divides the task into sub-tasks, and continues to do so until one ends up with tasks that are estimable within a defined region of uncertainty.

    To do this, a certain amount of design needs to occur. Admittedly, the estimate for the design itself can sometimes be a shot in the dark. But a good design gives not only a good estimate of the time required to complete the project; heuristics about the end product can also be determined from it. IMHO, the coding becomes an afterthought, a footnote to a good design.

    OK, I'm done ranting. Start the flames.
  • by Robert Baruch (11109) on Monday November 05, 2001 @11:47AM (#2522924)
    Roger Penrose, in Shadows of the Mind, presents a proof, via Gödel's incompleteness and Turing's halting problem, that human understanding and insight cannot be reduced to algorithmic form.

    The Large Limits paper uses pretty much the same proof, but doesn't add Penrose's assertion that human thought is not computable and that algorithmic limitations therefore don't apply to it.
  • by roffe (26714) <roffe@extern.uio.no> on Monday November 05, 2001 @11:48AM (#2522936) Homepage

    I am in the process of completing a research report on this very issue. The background is the engineering development project modelling software SimVision [vite.com], which we [dnv.com] have undertaken to modify for use with software development projects.

    The answer is yes, but it depends on a lot of things, because programmers are not like other kinds of engineers and software engineering is not like other kinds of engineering. To wit:

    • programmers should use programming languages they know (if a programmer on the project does not know the relevant programming language, exchange him or her for somebody who does).
    • the project should be planned with constant changes in the specifications in mind. There should be clearly defined procedures for handling specification changes.
    • it is not always true that adding manpower to a late software project makes it later.
    • it is important that the manager knows how far each programmer has come. Programmers often signal way too late that they won't finish on time. Make clear milestones and follow them up closely!
    • use programmers who are familiar with several different programming languages and/or paradigms.
    • programmers who score high on IQ tests are more productive than programmers who score lower; similarly for programmers who score high on conscientiousness in Big Five personality tests. (There are some important corollaries, such as that there should not be two high-IQ programmers on the same subteam, because they'll never stop arguing about the best way to do something.)
    • good managers finish on time because they cut corners. find out as early as possible which features can be sacrificed
    • programmers are often not very good at communicating, especially at communicating fears, doubts and possible failures. rewards for being honest early should be emphasized.

    It seems that managers improve their estimating skills with experience, so using experienced managers is a good tip.

    There's a lot more to it than this, of course. Unfortunately, our report is confidential just now.

  • by zeus_tfc (222250) on Monday November 05, 2001 @11:52AM (#2522967) Homepage Journal
    That's exactly the sort of attitude that has caused spectacular failures of software projects to be accepted as the norm. Software Engineering is *not* "hacking" or "coding" or "programming", it's *engineering*, like building a bridge or a skyscraper. Yes, those projects go over time and budget too sometimes, but they are the exception rather than the rule.

    I agree with you up to a point. I am an engineer. I have worked in process engineering, at AMEC, and now work in design engineering. I have not done much coding, but I think that software development probably relates most closely to design. In design you can estimate a schedule, but that schedule is dependent on everything going perfectly the first time, which we all know doesn't happen. Nor does it account for problems with parts we have to design around, which we then have to wait on, or changes in the requirements of our part. (Sound familiar yet?)

    This is all in the conceptual, design phase. It doesn't include the actual production of a physical part. That all happens later, after our 3D model has been packaged correctly. Once the physical part has been made, then there are the joys of testing and testing and testing...

    What I'm trying to get at is that I've experienced several forms of engineering (yes, there are many), and I think that software development relates most closely to design. In design, there is no reasonable way to schedule out how long things will take. We just make an estimate based on what's happened in the past, and change things as we go along.
  • by cburley (105664) on Monday November 05, 2001 @12:00PM (#2523009) Homepage Journal
    Software development is not a science in the normal sense. Designing large software systems is an art. It cannot be pigeonholed

    That's exactly the sort of attitude that has caused spectacular failures of software projects to be accepted as the norm. Software Engineering is *not* "hacking" or "coding" or "programming", it's *engineering*, like building a bridge or a skyscraper.

    But software development, which the other poster was talking about, isn't necessarily software engineering.

    I've been titled "software engineer" (with appropriate prefixes) most of my salaried career, but when I made up my own title as an independent consultant, I went with "software craftsperson", because engineering, itself, isn't the major focus of the sort of software I am usually called upon to develop (operating systems, compilers, generally software-development toolchains).

    Of course, I try to improve the engineering-to-black-art ratio of software I work on compared to the "norm", because I believe the engineering approach, when usable, is superior.

    But actually calling myself an "engineer" seemed, and still seems, a case of calling myself by a title I respect while not being willing to insist on meeting the standards normally associated with that title.

    Generally, I find the industry -- including clients and customers -- prefer "good enough yet modestly expensive and time-consuming" to "well-engineered, way too expensive and never accomplished", which is what estimates produced in the early stages of a project tend to look like for "development/hacking/coding/programming" vs. "engineering" as respective approaches.

    And since most of my clients view the software I am to develop for them as merely one component in a large scheme of software, man-power, and so on, it really is up to them to best determine and evaluate their own optimization function and then decide how they want me to approach my work.

    Naturally, if I was asked to develop software that controlled life-or-death machinery, I'd demand a higher standard. But the real issue would be, would the client demand such a high standard that they wouldn't even consider me for the work, given my history of working on the sort of software that is widely known to be critically buggy despite decades of industry-wide experience developing it -- operating systems, compilers, text editors, assemblers, linkers, and similar utilities?

    Fortunately for me, the free market highly values someone like myself who can churn out productivity-enhancing tools (say, a 5% improvement optimizing a code generator), helping hundreds or even thousands of others make better use of their time and computing resources.

    So, whether I could actually engineer something like a FORTRAN 77 compiler for a specific '80s-era computer, I can't exactly say. I'd like to think I could. But nobody ever asks me for that. Instead, they ask for new features, better performance, debugging help, and the like, always involving software that has been (or will be) developed with only a modicum of "engineering" used.

    Within that context, my use of "engineering" boils down to using proven software-development and coding techniques in usually-small, specific instances -- in the nitty gritty details of a project -- such as avoiding situations where variations of the same original data are separately entered and maintained, yet not consistency-checked as part of a product validation process (such as during a build). That sort of thing is mainly a matter of saving me some embarrassment when I screw up, plus helping others who'll maintain the code down the road from making easy mistakes that end up being hard to track down.

    (And on most projects on which I work, I'm treated as if I'm "going overboard" by most of my fellow developers, who seem to believe that it's okay to spend hours debugging vast, intricate code only to discover the problem is a mere typo that a simple sanity-check could have found in a few milliseconds. Sigh.)

  • by argon405 (444012) on Monday November 05, 2001 @12:05PM (#2523038)
    >One of the conclusions is that script languages such as Python or Perl are about 2-3 times as fast to program with than Java or C/C++, at least in the small projects

    The biggest problem of all is people with experience on small projects trying to apply it to large ones. Working on a million-LOC program and a 20k-LOC program simply cannot be compared. We're talking orders of magnitude of difference; unless you've worked on very large systems, you really don't have a clue.
    And of course, the vast majority of people who have time to write books and papers work on small systems.
  • by nullset (39850) on Monday November 05, 2001 @12:24PM (#2523167)
    I got this from a friend, and it works perfectly.

    Take how long you think it'll take, add one, and go up to the next denomination of time.

    example: 3 days will take 4 weeks, 4 weeks will take 5 months, ......
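
    The rule above can be sketched (tongue in cheek) as a tiny function; the ladder of time units is my own assumption, not part of the original post:

```python
# The "add one, bump the unit" estimation rule, as a joke heuristic.
# The UNITS ladder is an illustrative assumption.
UNITS = ["hours", "days", "weeks", "months", "years"]

def pessimize(amount, unit):
    """Take an estimate, add one, and move up to the next unit of time."""
    next_unit = UNITS[min(UNITS.index(unit) + 1, len(UNITS) - 1)]
    return amount + 1, next_unit

print(pessimize(3, "days"))   # (4, 'weeks')
print(pessimize(4, "weeks"))  # (5, 'months')
```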

    it's scary how accurate this is :)

    --buddy
  • by xsbellx (94649) on Monday November 05, 2001 @12:32PM (#2523213) Homepage
    This brings to mind the old quote: "If builders built buildings the way programmers wrote programs, the first woodpecker that came along would destroy civilization".

    When looked at in the context of practical experience, this is quite false. We have been constructing buildings for at least several thousand years, with some tremendous successes and some spectacular failures. I live in Toronto, where we were lucky (I think) enough to have the first major-league baseball stadium with a retractable roof. IIRC, the original cost estimate was in the vicinity of $100 million (CDN). When the stadium opened (pretty close to on time), the actual cost was around $480 million (CDN).

    I guess this somewhat proves you can estimate either cost or time accurately, but not always both. My experience in the IT industry has shown that most problems can be overcome with enough resources. Unfortunately, resources are not limitless, and therefore concessions must be made. This generally means the completion date slips, functionality is reduced, or both.
  • by ciurana (2603) on Monday November 05, 2001 @01:02PM (#2523439) Homepage Journal

    A couple of posters asked this question above: How do we reconcile XP short develop/test cycles with a fixed project plan + bid?

    The answer is simple: During the planning and estimate parts we focus on defining the problem domain and a set of solutions for it. We don't focus on too many implementation details.

    XP techniques are applied to solving each specific problem found in the requirements. For example, the problem may be something like "how do we decode this math-intensive file the fastest?" There are usually two or more answers to such a problem. First we define an interface, then we develop two different solutions in parallel and try both. The one that best meets the criteria wins, and we move on to the next problem.
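
    This "define an interface, then race the candidate implementations" approach can be sketched roughly as follows. The Decoder interface and both implementations are made-up illustrations, not anything from the original post:

```python
import abc
import time

class Decoder(abc.ABC):
    """The agreed-upon interface; candidates compete behind it."""
    @abc.abstractmethod
    def decode(self, data: bytes) -> list: ...

class NaiveDecoder(Decoder):
    def decode(self, data):
        # Element-by-element loop: one candidate solution.
        return [b for b in data]

class BatchDecoder(Decoder):
    def decode(self, data):
        # Bulk conversion: a second candidate behind the same interface.
        return list(data)

def fastest(candidates, sample):
    """Time each candidate on sample data and keep the winner."""
    def cost(decoder):
        start = time.perf_counter()
        decoder.decode(sample)
        return time.perf_counter() - start
    return min(candidates, key=cost)

winner = fastest([NaiveDecoder(), BatchDecoder()], bytes(100_000))
```

    Because both candidates satisfy the same interface, the losing one can be discarded without touching any calling code.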

    The thirst for features suffered by some people is often the result of poor design choices at the beginning of the project. If additional features are required and the analysis was done correctly, you'll find that these new features simply extend solutions you were already working on (or had already solved). Thus XP comes to the rescue again, by letting you add the new feature without throwing the schedule out the window. Think about it: if a new feature forces someone to rewrite a whole system, then something must've been overlooked during the requirements analysis phase.

    The most important part of this process is not to start coding and testing until the business requirements are clearly defined. We've been guilty in the past of coding before understanding the problem completely; we try to avoid that trap now. That is probably the single most relevant cause of software project delays.

    Cheers!

    E
  • by remande (31154) <remande&bigfoot,com> on Monday November 05, 2001 @01:27PM (#2523621) Homepage
    My understanding of the paper is: "Software estimation has been proven to be impossible by any formal system."

    Now, this paper makes a hell of a lot more sense to anyone who's read Hofstadter's Godel, Escher, Bach, though I suspect that many, even most, Slashdotters have read that one.

    What makes the paper irrelevant is that we don't use formal systems to estimate software. We use our own heads. We use hunches. We use intuition. These are informal systems, capable of forms of reasoning that no formal system can achieve. That's what Godel proved.

    The paper says you can't take a spec, hand it to an estimator program, and have the program write the estimate. You can give the spec to humans who write estimates for parts of it, feed those into an estimator program (like a spreadsheet), and get an overall estimate -- but you simply cannot remove the human from the loop.

  • by Kope (11702) on Monday November 05, 2001 @01:35PM (#2523663)
    The article presents an interesting argument for why a completely new software project must have an arbitrarily large upper bound for time/quality estimates and can have no lower bound.

    But herein lies the rub -- exactly how many software systems are "completely new?"

    Damn few!!

    The average software project in an average industry will be primarily a repackaging of previously solved problems. The majority of integration tasks will be sufficiently similar to previous integration tasks as to be known quantities.

    You will be left with a small number of "sub-problems" which are unique and new. This is where the article's caveats matter. But if we have decomposed the programming tasks to a sufficient degree, estimating everything else should be tractable.

    Also, it should be noted that the author assumes a good estimate is one obtained through formal methods and objectively defensible. However, in project management, a good estimate is one that is believable and acceptable to all stakeholders in the process. The method for obtaining the estimate is not important.

    Moreover, good project management includes significant up-front analysis. One common practice (at least at companies with good PM track records) is to run "Monte Carlo" simulations of the project work, with large variances between scheduled and actual effort. With a run of a few thousand simulations, the processes most important to the project's time and budget performance can be identified.

    These "key" work packages are often non-obvious without this type of simulation work. However, with a good work breakdown structure and a good simulator, it is possible to generate a reasonably accurate picture of project performance based on what is not known.
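
    A minimal sketch of that kind of Monte Carlo schedule simulation follows; the work packages, duration ranges, and triangular distributions are all invented for illustration:

```python
import random

# Hypothetical work breakdown: (best-case, worst-case) durations in days.
WORK_PACKAGES = {
    "requirements": (5, 10),
    "core coding":  (20, 60),
    "integration":  (10, 40),
    "testing":      (10, 30),
}

def simulate(n=5000, seed=42):
    """Run n simulated projects; return each package's mean contribution
    to the schedule and the distribution of total durations."""
    random.seed(seed)
    totals = []
    contribution = {name: 0.0 for name in WORK_PACKAGES}
    for _ in range(n):
        total = 0.0
        for name, (lo, hi) in WORK_PACKAGES.items():
            # Triangular distribution between best and worst case.
            d = random.triangular(lo, hi)
            contribution[name] += d / n
            total += d
        totals.append(total)
    return contribution, totals

contribution, totals = simulate()
# Packages with the largest mean contribution (and spread) are the
# "key" work packages driving schedule risk.
```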

    This means that in the "real world" of business, the article's claim is irrelevant!!

    We don't NEED objectively defined and defensible estimates. Instead we need estimates that the project stakeholders (which includes the people doing the work) can agree to.

    We don't NEED our estimates to be generated by formal methodologies. Subjective estimates backed up by years of experience are just as good, and often better, from a planning perspective.

    This whole article strikes me as another programmer trying to show how dumb the business people are. Hey folks, good business people KNOW that estimating is hard and that it isn't objective. But just because something isn't objective doesn't mean it can't be done well. It is possible to build models that compensate for unknowns if you can decompose the problem enough to limit the unknowns to a well-defined, small, manageable few.

    So, in the view of this PM, this is all just academic and has no bearing on the real world.
  • by frank_adrian314159 (469671) on Monday November 05, 2001 @01:45PM (#2523727) Homepage
    Whether you want to believe it or not, programmers are a highly optimistic bunch. This is especially true WRT any technological issue, where you almost never see actual analysis of possible problems with a system. Most of the time, this is a good thing, as most systems are relatively benign (actually, most are banal, but that's another issue) and developers need their optimism to face ever more complex code and systems. However, it does make them tend to underestimate the time that development will take.

    Another reason that developers tend to underestimate development time is that they tend to have very healthy egos when it comes to technological issues. Again, when facing the complexity of modern code and systems, this is probably a healthy defense mechanism.

    But when you couple all of this with management that wants to believe deflated time estimates, it's no wonder that most projects end up taking more time than initially thought.

  • by KyleCordes (10679) on Monday November 05, 2001 @03:36PM (#2524397) Homepage
    [agile development process to fixed cost contracts]

    I work with the customer to divide the project into phases / steps / iterations / releases / whatever. We group the most vital core pieces together and do them first, at a fixed cost. As requirements change, the changes either go into future fixed-cost releases or are done hourly if requested. Thus, the overall project is not fixed, but at each stage the customer knows what they are buying at what price, and doesn't have to worry about the "meter running".

    There is some related explanation (not a sales pitch) about it on my web site:

    http://kylecordes.com/story-182-shared-risk-pricing.html [kylecordes.com]
  • Whether you want to believe it or not, programmers are a highly optimistic bunch.
    I think being optimistic is a good thing; it keeps most programmers from going out and getting other (less stressful) jobs (my favorite is to suggest I'll quit programming to do something less stressful, like driving a truck full of unstable explosives) or going postal. :)
    This is especially true WRT
    [with respect to] any technological issue, where you almost never see actual analysis of possible problems with a system. Most of the time, this is a good thing, as most systems are relatively benign (actually, most are banal, but that's another issue) and developers need their optimism to face ever more complex code and systems. However it does make them tend to underestimate the time that development will take.
    I have learned this myself. One thing I started to do - and I explained this to my manager, who, thank goodness, used to be a programmer - is take what I think things will take and double the estimate, on the grounds that something ALWAYS goes wrong. There's always some snag partway through the work that causes it to slow to a crawl or come a cropper [grind to a halt]. Some piece takes longer, or the implementation I choose doesn't work, or factor X [an otherwise unknown event or circumstance] intervenes. This way I have slack in the other items to make up for the one that goes wrong.

    Carleton Sheets, a man who talked about how to buy real estate on his instructional tapes, said something useful which I decided I can apply to estimating time requirements for various fixes:

    If what you are offering doesn't embarrass you (in effect, if you don't feel like you're being greedy in offering too little, or you don't feel that your offer is so favorable to you that you are taking advantage of the other person), you're offering them too much.
    We need to learn to ask for the proper amount of resources, and to point out that less than the minimum makes it impossible to deliver within the requirements, no matter how much someone wants it to happen. (As Brooks points out, it doesn't matter how many women you throw at the task; it still takes nine months to produce a baby. Demand the baby be brought forth in less time and you get either a dead fetus (and possibly mother) or a sickly premature baby.)
    Another reason that developers tend to underestimate development time is that they tend to have very healthy egos when it comes to technological issues. Again, when facing the complexity of modern code and systems, this is probably a healthy defense mechanism.
    We need to learn that this is not a good idea, because if you are consistently wrong in your estimates, eventually you get "the boy who cried wolf" syndrome: nobody believes you any more, and all of the estimating systems become what everyone knows they are: a joke.
    But when you couple all of this with a management that wants to believe deflated time estimates, it's no wonder that most project end up taking more time than initially thought.
    It's actually no wonder "most" projects end up being cancelled. They take too long (because the people who were supposed to implement them were too aggressive about what they would deliver) and cost too much (because they routinely run over since the estimate was wrong in the first place).

    Paul Robinson <Postmaster@paul.washington.dc.us [mailto]>

  • by NoHandleBars (10204) on Monday November 05, 2001 @07:00PM (#2525298)
    With a little over 20 years experience of managing very large software projects for Fortune 500 companies I can identify the root cause for the spectacular successes and the colossal failures: Scope Creep.

    If the business requirements have been properly defined and management discipline exercised to keep within the original scope, every estimate I've developed -- using a variety of methods over the years -- has been successful. But those instances where the specs continually change, where the business requirements are "discovered" along the way, and/or where new requirements are added to the mix have all been failures. This has been true whether I've led teams doing something "no one's done before" or the "same old thing" again.

    Kudos to everyone here who has posted information on the REAL solutions, in the form of risk management, scope containment, good old-fashioned discipline, and the like.
