Software Accountability Made Real?

An Anonymous Reader writes "In a recent presentation and post, Kent Beck (Extreme Programming Explained: Embrace Change) highlights Open Quality Dashboards as a means to make software development accountable. Many different approaches attempt to reduce the number of issues creeping in all along the development process. Whether a shop abides by the rules of up-front UML design or test-driven development, or a methodology somewhere in between, the ongoing burst of popularity of tools enabling continuous integration and frequent releases shows the need for unit testing to appear earlier in the development process. In this context, quality dashboards could well establish a credible benchmark for software accountability."
This discussion has been archived. No new comments can be posted.

  • by Dr. Bent ( 533421 ) <<ben> <at> <int.com>> on Thursday February 24, 2005 @12:40PM (#11767364) Homepage
    At my company, most of our products are built daily (at a minimum) and the metrics are published to an internal website. Things like ugly code, unit test failures, bad JavaDoc, poor test coverage, and findbugs problems are visible to everyone in the company.

    This makes it a lot easier for developers to do the right thing (and fix these problems). Nothing like a big red bar to motivate you!
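
    A minimal sketch of what publishing those numbers might look like, with hypothetical metric names and hard-coded counts standing in for the JUnit, coverage, and FindBugs reports a real nightly build would parse:

    // Writes a tiny red/green HTML dashboard from a map of metric -> problem count.
    // The counts are placeholders; a real build would read them from report files.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class QualityDashboard {
        public static void main(String[] args) throws IOException {
            Map<String, Integer> problems = new LinkedHashMap<>();
            problems.put("Unit test failures", 3);
            problems.put("FindBugs warnings", 12);
            problems.put("Missing JavaDoc", 40);
            problems.put("Untested public methods", 25);

            StringBuilder html = new StringBuilder("<html><body><h1>Nightly build</h1>\n");
            for (Map.Entry<String, Integer> e : problems.entrySet()) {
                // Green when the count is zero, the big red bar otherwise.
                String color = e.getValue() == 0 ? "green" : "red";
                html.append(String.format(
                        "<div style='background:%s;color:white;padding:4px'>%s: %d</div>%n",
                        color, e.getKey(), e.getValue()));
            }
            html.append("</body></html>\n");
            Files.writeString(Path.of("dashboard.html"), html.toString());
        }
    }

    Anything non-zero shows up as a red bar; the real value is regenerating the page on every build so nobody can quietly ignore it.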

  • Wha? (Score:4, Insightful)

    by superpulpsicle ( 533373 ) on Thursday February 24, 2005 @12:46PM (#11767420)
    Software development accountable? What?! How about making Management accountable. The developers are not the ones screaming for new features every release with these ridiculous timelines. The decision makers, a.k.a. the executives, are the ones who should be accountable.

    • Re:Wha? (Score:5, Interesting)

      by neiras ( 723124 ) on Thursday February 24, 2005 @01:05PM (#11767658)
      That's EXACTLY it. At our shop, we got sick of being blamed for "taking too long on projects" - so we got together, got up to speed on Personal Software Process and Team Software Process, and started a development lifecycle and process improvement team.

      There are a number of interesting benefits to this. The best one so far is that we maintain a 'responsibility trace' right from individual stakeholders in Management, to each requirement, to each design element... we can actually tell who in management has a stake in a particular _block of code_.

      The other neat thing is that the execs can make changes all they want. We really don't care. Because we're on a fixed 3-week development cycle (we go all the way through the cycle every 3 weeks, culminating in a release), we can either say "sure, we'll do that in the next build" or "scratch the current cycle and we'll do that now". In the latter case, we only lose a maximum of 3 weeks of work. Not bad at all, and if management complains, well, we can show them WHY we lost 3 weeks. They shut up pretty quickly.

      Unfortunately, it's difficult to convince management that the paperwork we end up doing to improve and maintain our process is a Good Thing. If we aren't coding, we must not be working, right? Wrong. Now we have nice graphs showing the number of defects in our software falling through the floor, time spent fixing defects falling through the floor, and developer productivity skyrocketing... It's fantastic.

      Bottom line: Management in some places doesn't WANT responsibility. They want to hand down directives from above, and we are the magical little gnomes who make their projects happen at 1/4 of their salary, if we're lucky. If they go sour on a gnome for whatever reason, they want to be able to fire with impunity. Process is the way to make them eat their own crap whether they like it or not. They WILL end up liking it, and you get your life back.

      • Re:Wha? (Score:2, Funny)

        by chris_mahan ( 256577 )
        Please tell me your company name, so I make sure NEVER to apply there.

        Movie trivia. Which movie is this from?
        "Attitude reflects leadership".

        • Re:Wha? (Score:3, Informative)

          His scenario reflects 2/3 of the places that I've worked.

          However, his solution might work well in many places where feature-creep happens, even when there isn't as much animosity between developers and management.
          • Re:Wha? (Score:3, Insightful)

            by chris_mahan ( 256577 )
            Yeah, I know, and same here. I'm just picky about my work environment.

            I think his solution stinks to high heavens.

            You know what writers say? "The story needs to be as long as it takes to tell the story well."

            I say that trying to make all software development fit the 3-week cycle is akin to making all software development fit the J2EE Way. I need more flexibility.
            • Re:Wha? (Score:3, Informative)

              Yeah, you can't force all development into a 3-week cycle, but it can work pretty well for some projects where pieces of the project can be postponed until the next development cycle.

              A 3-week cycle could work pretty well in a web environment (which is what I work in).
              • Yeah, and that's why I say it stinks: That their management is so obtuse that the devs felt like they had to create a layer of politics to insulate themselves from irresponsible slave-drivers.

                The worst part is that the devs think it's normal.

                Bitter? Nah...
            • Re:Wha? (Score:3, Insightful)

              by neiras ( 723124 )
              We evolved a lifecycle that fit our company and our needs. You're free to do the same, and that's what I would recommend to anyone who is serious about writing software.

              Just don't discount the value of _having_ a lifecycle. I never said there was One True Way, I just presented our shop's journey as an example.

              If it sounds like we're in chains, you're dead wrong. We've never had more freedom. The hard part is deciding you're actually willing to do what is necessary to get it when you work in an environment...
        • Remember the Titans?
        • "Attitude reflects leadership"

          "Remember the Titans".

          I'd write it as "Attitude reflect leadership", since that's the way it was said, and because I like the combination of bad grammar (or poor enunciation) and sharp insight. Julius Campbell (Wood Harris), a star black player on the team, said it to Gerry Bertier (Ryan Hurst), the white team captain, when Gerry complained of Julius' bad attitude.

          Yes, my kids have watched it so many times I can recite most of the dialog. Good movie, though.

    • Re:Wha? (Score:2, Insightful)

      by mutterc ( 828335 )
      I think this is a market failure, and so clueful management can't help you. From what I've seen, the market will simply not bear the cost of software done right.

      Nobody will pay extra for quality software (I'm talking about business customers; individuals don't have any kind of realistic influence on software development). If you (as a developer) tell a customer they'll have to pay $X and wait Y months for the software they want, they'll just buy it from someone else who promises it at $0.5X and 0.6Y months...

    • As a professional you have a responsibility to the company you work for to make your development process as reliable as possible. If "screaming for new features every release" is throwing your development into chaos, then you don't have a good system. If you can't display a measure of the impact of those new features on the codebase, you have no justification for any delays caused by them. If you can't show daily measurement of the maturity of the code, how can anybody make a decision about schedules, resources, or when to ship? It'll all be just wild ass guesswork, and you'll suffer as a result.
      • If you can't show daily measurement of the maturity of the code, how can anybody make a decision about schedules, resources, or when to ship? It'll all be just wild ass guesswork, and you'll suffer as a result.

        You can have all sorts of measurements, but whether they are accurate or meaningful is another question...

    • "How about making Management accountable."

      Indeed.

      "High risk change. Requested at 6.30pm one day, needs to be available by the following morning."

      Hmmm, wonder why there's no high-quality code on that project...?
  • by G4from128k ( 686170 ) on Thursday February 24, 2005 @12:47PM (#11767429)
    The Wall Street Journal has a page B1 article (free via this link?) [wsj.com] on buyers trying to hold software providers liable for flaws, damages, bugs, etc. It seems the old EULA disclaimer is not going to hack it anymore. Buyers argue that each software patch is equivalent to a product recall and that vendors should help pay for the cost of patches (AT&T says it spends $1 million per month on patching).

    If General Motors can be held liable for damages caused by a defective car part, some argue that software makers should be held liable for damages arising from buggy code.
    • OK, riight. Thing is, if the code crashes, whose fault is it?

      This happened today: a customer calls claiming my software is broken -- he's getting "invalid opcode" messages trying to run the thing. Well, no one else gets the same message. Turns out he's running the software on some crap 386. So, is it my software, or: a) an old Windows 95 install so loaded up with spyware, viruses, registry crap from software uninstalled 5 years ago, outdated network clients, etc. that it takes 5 minutes to boot, or b) bad hardware (me...
      • That would be, I believe, the point of a court of law.
        • And how is a court of law to determine this? Google "junk science"+lawsuit to see how gullible courts are when it comes to culpability.

          • ...and the better solution is? Yes, the legal system may suck, but the scenario described in the ancestor post *is* what courts are meant to adjudicate.
            • Not intending to be sarcastic -- I'm just curious: Software *has* to be different than cars somehow, right? Otherwise, why haven't people been suing MS et al. into oblivion over the last 30 years, as has been done to auto and (especially) small plane manufacturers? To my mind, the assertion that software disputes are the domain of the courts is a little tenuous.
              • That's a good question. I imagine part of it has to do with MS software not directly killing anyone. I still don't know where else failure-to-perform-as-advertised or breach-of-contract or whatever disputes would be settled, *besides* a court. Or arbitration, which amounts to about the same thing.
    • Buyers argue that each software patch is equivalent to a product recall and that vendors should help pay for the cost of patches

      Ok, fine. I'll just gouge my customers up front rather than sticking it to them later by not reimbursing them for patching their systems.
      Software shops can then sell protection plans along with the product, guaranteeing a payout in the event of patching.

      Do you wanna pay now or later?

        Ok, fine. I'll just gouge my customers up front rather than sticking it to them later by not reimbursing them for patching their systems. Software shops can then sell protection plans along with the product, guaranteeing a payout in the event of patching.

        Do you wanna pay now or later?


        Absolutely true! But if a competing software company actually creates quality code, then it won't have to gouge its customers, won't have many pay-outs for defective/patched software, and won't have to sell a protection plan...
  • by FullMetalAlchemist ( 811118 ) on Thursday February 24, 2005 @12:52PM (#11767490)
    The most important thing is common sense; depending on your team, a different methodology is needed.

    The most important aspect of development for my team today is requirements reuse; it sounds silly but works great. By following this simple methodology we have made errors nonexistent; it beats unit testing by a mile in efficiency, and it matches the results.

    Most other teams fail with this approach, though, and fail hard. It simply comes down to what the team is made of; mine loves it.
    • by Anonymous Coward
      A couple of thoughts:

      1) As someone said, "Common sense is not that common" :-).

      2) IMO unit testing IS common sense. As a matter of fact, I can't think of a more sensible thing to do than to test units of code before integration. It's a very logical approach that is finally getting the recognition and importance it deserves.
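
      For what it's worth, a tiny JUnit example of that idea; the Money class is made up and kept inline purely so the unit can be exercised in isolation, before it is ever wired into the rest of a system:

      // Exercises a single unit (Money) by itself, before any integration happens.
      import org.junit.Test;
      import static org.junit.Assert.assertEquals;

      public class MoneyTest {
          // Hypothetical unit under test, inlined to keep the example self-contained.
          static class Money {
              private final long cents;
              Money(long cents) { this.cents = cents; }
              Money add(Money other) { return new Money(this.cents + other.cents); }
              long cents() { return cents; }
          }

          @Test
          public void addingTwoAmountsSumsTheirCents() {
              Money total = new Money(150).add(new Money(250));
              assertEquals(400, total.cents());
          }
      }
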
  • If you want to implement one of these, check out Dart [kitware.com] from the guys at Kitware.

    I've seen their in-house dashboards and they're quite impressive. These guys eat their own cooking.
  • by The Slashdolt ( 518657 ) on Thursday February 24, 2005 @01:17PM (#11767777) Homepage
    Managers tend to think that gathering proper input, leading to proper requirements, is "hard". But doing this upfront work is required to properly analyze/design/estimate a programming effort. Along comes XP/Agile, or whatever you want to call it. They say: you don't need everything up front, we can change things as we go, we're "agile". This is what managers want. Every month along the project the requirements change, the design changes, we adapt, this is great. The part they keep leaving out is the fact that change is not any cheaper. With any method you pick, as everyone knows, the later in the project you make changes the more they cost. They always leave off that part.

    I can't recall where, but I remember reading the quote somewhere, "you can't refactor an elephant into a cheetah". I don't think many managers truly understand that...

    To me XP/Agile is just an excuse that allows marketing and management to not have to do their job.
    • With any method you pick, as everyone knows, the later in the project you make changes the more they cost. They always leave off that part.

      The fundamental premise of XP is that there are ways to reduce those costs dramatically. A secondary premise is that exposing the costs of those changes at the point of change gives stakeholders more information to evaluate the necessity of those changes.

      • by The Slashdolt ( 518657 ) on Thursday February 24, 2005 @04:00PM (#11769596) Homepage
        Your comment is doing exactly what I complained about above...

        ...there are ways to reduce those costs dramatically.

        I don't doubt there are ways to reduce those costs dramatically. But reducing those costs increases costs in other places. Change is not free. You aren't reducing overall costs, you're just moving them around. You can simplify your design, make it completely decoupled and resilient to change. But, again, this is not free. These decisions have costs, especially in terms of technologies, performance, etc. XP offloads up-front work onto developers later in the project. They don't tell you that the entire project will cost more and take longer; they leave that part off. Management only sees "As requirements change, your software changes". Your comment is an example of how XP fools management into thinking that reducing costs in one area does not impact costs in other areas.

      • The fundamental premise of XP is that there are ways to reduce those costs dramatically.

        Yes, as long as you keep in mind that (a) those costs are still much greater than zero and (b) those costs are also much greater than the cost of doing the up-front analysis.

        Designing and implementing for change is a good thing, but it's still more cost-effective to get it right the first time wherever possible.

        IMO, some of the ideas in XP are valuable for every development approach, but they don't change the fact...

        • ...those costs are also much greater than the cost of doing the up-front analysis.

          Only as far as the business requirements do not change after the analysis or the initial analysis anticipates changes in business requirements accurately.

          That's possible, but I've never seen it happen.

          Incremental requirements analysis coupled tightly to incremental development leads to hodgepodge systems unless the developers are really given free rein to refactor on a whim.

          Indeed; that's exactly what XP suggests!

          • Only as far as the business requirements do not change after the analysis or the initial analysis anticipates changes in business requirements accurately.

            I think you're talking about two things here. The first is the reduction of scope of a large project: where we know we'll have to do X, but we'll only do x for now. We'll incorporate into our design the knowledge that we will eventually have to do X. The second is anticipation or prediction of related requirements. As in, I am doing an email client, of...
            • One of the cornerstones of XP is to have a customer onsite that knows what they want. You won't find out everything you need to know if you only gather requirements up front; having the expert always available to you allows you to fill in these gaps. And since you have the customer onsite, it's sensible to do less of the upfront work. You just get right into implementing the most important feature.

              Whether or not you'll be able to get an expert is another question. But if you read Agile Project Management...
            • Ah, I see our differences now. My experience is in building software for specific clients. Yours sounds closer to off-the-shelf software.

              I suspect that having greater knowledge of what your target market wants and needs will be helpful, which is similar to what you said earlier. You're right; if you don't have a clear customer capable of identifying and prioritizing actual business needs for the software, XP's planning features won't work very well.

          • Only as far as the business requirements do not change after the analysis or the initial analysis anticipates changes in business requirements accurately. That's possible, but I've never seen it happen.

            Me neither, not 100%. But my point is that doing as much as you can up front to really understand the problem pays off, and the payoff is greater than linear.

            If you can get the requirements half right before you start, your project will be much shorter and much cheaper than if you just start building

        • it's still more cost-effective to get it right the first time wherever possible.

          Even if that were true, it's never possible; not in the real world. Even in the pretend world, the effort needed to get the theoretically correct, up-front-and-for-all-time requirements is prohibitive. In the same theoretical play world where getting it right the first time is possible, you'd still find that the company would be bankrupt and the business problem to be solved would be moot before you were done.

          • Even if that were true, it's never possible; not in the real world.

            Of course not. Not 100%. My point was that you're money ahead by getting as much as you can up front.

            In my case, the software development I do is on a contract basis, often for firm fixed price. So I'm very accustomed to having to get the requirements very close to complete and correct up front before planning the development effort (and including some padding for the inevitable changes!).

            • Re: Padding for the inevitable changes.

              That comment makes it abundantly clear that you know that it's never right up front. Why pad, then? That's either a) cheating the customer when your estimate turns out high, or b) a gigantic risk to your company's profits if you guess low. Do you think the customer doesn't know you are padding? Of course they do -- negotiations up front on scope/price/schedule then end up being a struggle to push the padding one way or another. XP and similar agile methods want you...
              • Customers don't hire my company to get the lowest possible price, they hire us to get a guaranteed-to-work solution, and sometimes to get a guaranteed price. They know that we have the resources to do the job (whatever it may take), and when they sign a firm fixed-price contract they know that we've added plenty of contingency to cover the risks, that we intend to walk away with a very healthy profit, and that we hope to have an insanely high profit, but that's okay because they've run their numbers and determined...

    • You're forgetting that the main driver of change is the customer/user. And, trust me, they will find every opportunity to squeeze stuff out of you. Designing everything up front will only make you obstinate about every little thing when the customer comes to ask for something. If you're obstinate or keep telling them it's a PCN (project change notification) or CR requiring lots more money, there are plenty of other vendors to step in and do otherwise.

      XP/Agile is a way to capture this process. You don't have...
    • by Anonymous Coward
      The point of XP is that you can quantify the cost of the changes, because mgmt is going to make changes anyway. The big XP practice that points this out is the Planning Game, where you get to ask mgmt: "If you add this, you need to take out X tasks - which ones can you give up?"
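
      A back-of-the-envelope sketch of that trade-off, with made-up story names and point estimates; the only rule encoded is that the iteration's total cannot exceed the team's velocity, so adding a story forces others out:

      import java.util.ArrayList;
      import java.util.LinkedHashMap;
      import java.util.List;
      import java.util.Map;

      public class PlanningGame {
          public static void main(String[] args) {
              final int velocity = 20; // points the team historically finishes per iteration

              // Hypothetical stories already committed for this iteration, with estimates.
              Map<String, Integer> planned = new LinkedHashMap<>();
              planned.put("Import filters", 8);
              planned.put("Report export", 7);
              planned.put("Search screen", 5);

              String newStory = "Single sign-on";
              int newEstimate = 6;

              // Management wants the new story; show what has to come out to make room.
              int total = planned.values().stream().mapToInt(Integer::intValue).sum() + newEstimate;
              List<String> dropped = new ArrayList<>();
              for (String story : new ArrayList<>(planned.keySet())) {
                  if (total <= velocity) break;
                  total -= planned.remove(story);
                  dropped.add(story);
              }
              System.out.println("To add \"" + newStory + "\" you need to give up: " + dropped);
          }
      }
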
  • by 4of12 ( 97621 ) on Thursday February 24, 2005 @01:29PM (#11767926) Homepage Journal

    These are all good ideas: the unit testing, the automated frequent testing, and so on.

    Having experienced a few crashes of bleeding-edge versions of Evolution and Firefox, with their automated reporting of crash symptoms back to the developers, got me thinking that having actual use (and abuse) automatically incorporated into test suites might really abet the development of less crash-prone code.

    Despite the capability of automated testing to test many more features than can be done by hand, new applications have so much context and so many options that we need to test for what the users are actually doing with the application. Not just what we think they're doing or what we hope they're doing, but what they're really doing.

    The most important bugs would be the ones that happen to the greatest number of people the most times.

    Harvesting application interactions and sending them back to the test suite has a lot of value, but it's up to the developers to do this in ways that are sensitive to the user's need for privacy, too.
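
    One rough sketch of that harvesting idea (all class and file names hypothetical): keep a small ring buffer of recent user actions and, when the application crashes, dump them as a script the test suite can replay against the next build. A real implementation would also scrub anything personally identifying before the trace left the machine, which this sketch only notes in a comment.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class CrashTraceRecorder implements Thread.UncaughtExceptionHandler {
        private static final int MAX_ACTIONS = 100;
        private final Deque<String> recentActions = new ArrayDeque<>();

        // Called by the application wherever a user-visible action happens.
        public synchronized void record(String action) {
            if (recentActions.size() == MAX_ACTIONS) {
                recentActions.removeFirst();
            }
            recentActions.addLast(action);
        }

        @Override
        public synchronized void uncaughtException(Thread t, Throwable e) {
            try {
                // Scrub or hash personal data here before anything is written out.
                StringBuilder trace = new StringBuilder("# crash: " + e + "\n");
                for (String action : recentActions) {
                    trace.append(action).append('\n');
                }
                // The test suite can later replay this file against a new build.
                Files.writeString(Path.of("crash-trace.txt"), trace.toString());
            } catch (IOException ignored) {
                // Never let the reporter itself blow up inside the crash handler.
            }
        }

        public static void main(String[] args) {
            CrashTraceRecorder recorder = new CrashTraceRecorder();
            Thread.setDefaultUncaughtExceptionHandler(recorder);
            recorder.record("open-message inbox/42");
            recorder.record("click reply");
            throw new IllegalStateException("simulated crash"); // triggers the dump
        }
    }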

  • This whole story looks like a thinly veiled ad for a startup. Their main product isn't the overall reporting of test coverage and other metrics; their main product is an automated test case generation tool (one that seems to generate oodles of data, just not anything that looks very useful).

    Should you collect statistics on your project, bugs, test coverage, and all that? By all means. And there are lots of tools to do that, free and commercial.
