Retrofitting XP-style Testing onto a Large Project?

Mr Pleonastic submits this query for your consideration: "I work for a small startup (ok, me and another guy comprise the entire development team) that has somehow managed to survive the bust, attract a number of customers, and build up about 300K lines of functionality. Up to now we've made it by being smart and conscientious hackers, but I'm increasingly embarrassed by our shortcomings in testing. I like the XP approach to making enduring, automated test suites, but most of what I read about XP focuses on obvious stuff and changing your programmer culture at the outset. Does anyone have experience with, or advice for, retrofitting it onto a fairly mature project? What do your test suites look like, anyway? The bugs I fear most are of the 'If the user does X and then Y, the result blows away our assumptions' variety, not the 'Oops! My function returned the wrong value' variety (which happens of course). How do you write good test code for the former, without spending even longer debugging the test code? Is XP just for small, new projects?"

  • by Sklivvz ( 167003 ) * <`marco.cecconi' `at' `gmail.com'> on Friday September 05, 2003 @05:32PM (#6883249) Homepage Journal
    <wintroll>
    Win XP style testing? So, what's so hard? Release an alpha version and call it RC, and let users do the testing...
    </wintroll>

    • As a user of XP RC2, Microsoft's release candidates are extremely close to their actual product. From XP RC2 to XP 2600, I found no changes (except for the build number and "Evaluation Use Only" missing from the desktop).
      • As a user of XP RC2, Microsoft's release candidates are extremely close to their actual product. From XP RC2 to XP 2600, I found no changes (except for the build number and "Evaluation Use Only" missing from the desktop).

        You weren't using it very much is what you are saying, right?
  • by Dr. Bent ( 533421 ) <<ben> <at> <int.com>> on Friday September 05, 2003 @05:34PM (#6883275) Homepage
    Finding conditions that are outside your assumptions is not something you can do with a unit test. I have found that trying to simulate user creativity (stupidity?) with unit tests is an exercise in futility. Use your unit tests to make sure your methods do what they're supposed to do.

    To find all those tricky combinations of use cases that blow away all your assumptions, just stick to the Fail-Fast principle. If you find anything that goes even slightly wrong, complain. Loudly! Throw an exception, pop up a dialog, whatever you need to do to make sure that everyone knows an error just occurred. This will do two things:

    1) You'll find a lot more errors in your code. You'll also be motivated to fix them quicker because the app will be unusable until you do.

    2) You'll reduce the likelihood of generating bad data. The only thing worse than your program doing something wrong and crashing is doing something wrong and NOT crashing. Users will usually forgive you if your software crashes. If you start giving them bad data, they'll lose confidence in your app and never trust it again.
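
    To make the fail-fast idea concrete, here's a minimal sketch in Java (the class and the pricing rule are invented purely for illustration): check the inputs at the top, sanity-check the result at the bottom, and throw the moment anything looks wrong.

    public class PriceCalculator {

        /** Applies a discount percentage (0-100) to a price in cents, failing fast on bad data. */
        public long applyDiscount(long priceInCents, int discountPercent) {
            // Precondition checks: refuse bad input immediately and loudly.
            if (priceInCents < 0) {
                throw new IllegalArgumentException("negative price: " + priceInCents);
            }
            if (discountPercent < 0 || discountPercent > 100) {
                throw new IllegalArgumentException("discount out of range: " + discountPercent);
            }

            long discounted = priceInCents * (100 - discountPercent) / 100;

            // Postcondition check: a wrong answer must not escape silently.
            if (discounted < 0 || discounted > priceInCents) {
                throw new IllegalStateException("discount produced nonsense: " + discounted);
            }
            return discounted;
        }

        public static void main(String[] args) {
            System.out.println(new PriceCalculator().applyDiscount(2000, 25)); // prints 1500
            new PriceCalculator().applyDiscount(2000, 150);                    // fails fast, loudly
        }
    }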

    • by Anonymous Coward
      Basically, go to every function, method, and block of code and write a piece at the top to check all inputs. At the bottom, verify outputs. If you access a database, avoid autocommit like the plague.
      If every function is sure it has valid data coming in and valid data going out, nothing the customer can do should be able to break it. Ideally, for every function the caller should check the data before it is passed, the callee should verify it is what it expects and verify the data before it is returned and final
    • I can attest to the fail-fast principle. I've worked on projects where the lead designer mandated "this system must stay up and running even when it loses its marbles (i.e. memory corruption detected, etc.)" and that philosophy only leads to chaos and grief. Much better were systems that utilized runtime assertions - even in production - on the correctness of data passed into functions. Instead of being more crash-prone (which I had expected) those code bases quickly became titanium plated. Nothing could br
  • by chromatic ( 9471 ) on Friday September 05, 2003 @05:43PM (#6883362) Homepage

    This is hard to answer in a short comment. I'll try, though you're welcome to contact me for more details through the information on my website.

    Retrofitting tests onto an existing project is hard. Not only is it tedious, time-consuming work, but you're always haunted by specters that ask "How do you know the test isn't broken?" It's nice to have the tests, but you'll spend a lot of time and energy creating them that could be better spent adding new features and improving existing features. Besides that, it'll likely sap any motivation you might have had for testing.

    It's much easier to draw a line in the sand and say "all new features and bugfixes will have tests, starting now". Before you fix a bug, write a test that explores the bug. It must fail. Fix the bug. The test must now pass. Before you add a new feature, write a customer test that can demonstrate the correct implementation of the feature. It must fail. Add the feature. The test must now pass. From the programmer level, you can write programmer tests through the standard test-driven development style.

    It still can be tricky to get started, especially with customer tests, but they don't have to be beautiful, clever, or comprehensive. They just have to test the one feature you're working on sufficiently to give you confidence that you can detect whether or not it works. You'll likely have better ideas as you gain tests and experience and it's okay to revisit the test suite later on to make it easier to use and to understand.

    The nice part about this system is that it adds tests where you need them most: where the code is changing, whether it's a part full of bugs or a part under continual development.

    Keep in mind that to do testing this way, you need to be able to work in short, clearly-defined, and frequently-integrated steps (story and task cards, in XP terms). You also need the freedom to change necessary sections of the code (collective code ownership). It helps to have a good set of testing tools, so, depending on your language, there's probably an xUnit framework with your name on it. Also, it can be counterproductive to express your development and testing time estimates separately. At first, testing well will slow you down. It's tempting to throw it out altogether as a time sink. As you learn and your test suite grows, however, the investment will pay off immensely.

    Your goal is difficult but doable. It's well worth your time.
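
    To illustrate the "failing test first, then the fix" cycle with JUnit 3.8-era Java, here is a rough sketch; the CsvParser and the bug it fixes are hypothetical stand-ins for whatever class a real bug report points at.

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;
    import junit.framework.TestCase;

    public class CsvParserTest extends TestCase {

        // Hypothetical class under test; in real life this lives in production code.
        static class CsvParser {
            List parseLine(String line) {
                if (line == null || line.length() == 0) {
                    return Collections.EMPTY_LIST; // the fix; before it, empty input blew up
                }
                return Arrays.asList(line.split(",", -1));
            }
        }

        public void testEmptyLineYieldsNoFields() {
            // The exact input from the bug report: written first, watched failing,
            // then made to pass, and kept in the suite forever as a regression test.
            assertEquals(0, new CsvParser().parseLine("").size());
        }

        public void testTrailingCommaKeepsEmptyLastField() {
            assertEquals(3, new CsvParser().parseLine("a,b,").size());
        }
    }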

    • by Anonymous Coward
      Hold on dude! You are not a programmer, so you shouldn't give advice on programming.

      Your resume clearly indicates that you are just a Perl hacker. As everyone knows, Perl is not a real programming language.

      Come back when you know a real programming language.

      Thank you and good night.
  • by fluor2 ( 242824 ) on Friday September 05, 2003 @05:57PM (#6883490)
    I would like to introduce my own method: the CCCC test method (Clicky clicky clicka click).

    1. Open the application
    2. Click on a totally unexpected object
    3. Fill in some text somewhere (if expected)
    4. Goto 2.

    I find most warnings and bugs here, as long as you have some good assert() calls in your code. It's best if you use the CCCC method really fast, and test for like 4 minutes every time.
  • by daniel_yokomiso ( 641714 ) on Friday September 05, 2003 @06:28PM (#6883772) Journal
    Not what it doesn't.
    As a start, write down test specs for all of your use cases, even if the specs aren't automatable. Then find a tool to simulate the user (e.g. HttpUnit is very good for simulating web users) and try to turn the specs into functional tests. Ensure that your application works today.
    The next step is remodularizing your code: try to find a tool that traces dependency diagrams in your code (e.g. Compuware's Pasta is an excellent free tool for Java). Module interdependency is a strong smell of bugs, so refactor to make the dependencies acyclic, running your tests to keep everything under control.
    Then try to write unit tests for the modules, create mock objects for them, and check that they do what they're supposed to do. Repeat the second step for your classes (or data structures and functions if it isn't OO). Try to make all dependencies acyclic and create unit tests for them.
    And during all these steps use Design by Contract to write down *all your assumptions*. Leave them in production code too (unless removing them is strictly necessary for performance). That way, if your code breaks your assumptions, the contracts will tell you. It'll also force you to rewrite some code to make it checkable (i.e. exposing invariant predicates).
    Finally, don't forget to check the usual XP forums (the extremeprogramming group on Yahoo! Groups and news://comp.software.extreme-programming).
    It'll be no piece of cake, but when you start to see better-factored code that keeps the bar green, you'll be rewarded.
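
    For the Design by Contract point, a small sketch of what "write down your assumptions and leave them on in production" can look like in plain Java (the Account class is invented; Java 1.4's assert keyword works too, but it is disabled unless you run with -ea, so explicit checks are the safer way to keep contracts enabled in production):

    public class Account {

        private long balanceInCents;

        public void deposit(long amountInCents) {
            if (amountInCents <= 0) {
                throw new IllegalArgumentException("deposit must be positive: " + amountInCents);
            }
            balanceInCents += amountInCents;
            checkInvariant();
        }

        public void withdraw(long amountInCents) {
            if (amountInCents <= 0) {
                throw new IllegalArgumentException("withdrawal must be positive: " + amountInCents);
            }
            if (amountInCents > balanceInCents) {
                throw new IllegalStateException("insufficient funds");
            }
            balanceInCents -= amountInCents;
            checkInvariant();
        }

        // The class invariant written down explicitly instead of living in someone's
        // head; kept in the production build unless profiling proves it too costly.
        private void checkInvariant() {
            if (balanceInCents < 0) {
                throw new IllegalStateException("invariant violated: negative balance");
            }
        }
    }
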
  • by Anonymous Coward on Friday September 05, 2003 @06:31PM (#6883803)
    I'm in a similar situation: I have a bunch of code that "mostly works" and I'd love to have unit and acceptance tests.

    But it's really hard to add them later. I mean REALLY HARD. The tests are tedious and boring, and after 2-3 of them I get tired and the tests have errors.

    If you follow the XP test-first technique, your code comes out MUCH different. You have low coupling, you have "testable" code where the pieces are interchangeable (so you can easily use mock printer objects or non-RDBMS backends, etc.), and generally it's really elegant code with little extra work.

    And you don't get bored writing test-first because every time you write a test, you then write the code that passes the test and it's really a feeling of accomplishment! And you don't get "lost in the big picture" because you are focusing only on passing that one little test.

    The same is true for acceptance tests. I use HttpUnit to automate web apps, and although I'm not quite as religious about testing the interface, it's great for "add record, query record, delete record" stuff, to make sure it doesn't blow up when the end-user does something basic. For instance I had some code once that worked wonderfully, except login was broken. Since I was testing while logged in and never thought to log out and log back in, I never caught it in my manual tests. Automated tests can catch the stuff you forget.

    So I'd recommend requiring tests on all NEW code (you'll see a big difference between the old and new code I bet, in terms of simplicity and low coupling).

    And whenever you refactor the old code, start by writing tests that the old code passes.

    But it will really be tough to retrofit ALL your old code with tests. I'd even say it's not worth it because your tests will not be good.

    And remember: EVERY LINE OF CODE MUST EXIST TO PASS A TEST. That should be your goal on new code.
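
    For the acceptance-test side, here is a rough sketch of the kind of HttpUnit check described above (the URL, form field names, and the "Welcome" text are invented; adapt them to the real application):

    import com.meterware.httpunit.WebConversation;
    import com.meterware.httpunit.WebForm;
    import com.meterware.httpunit.WebResponse;
    import junit.framework.TestCase;

    public class LoginAcceptanceTest extends TestCase {

        public void testLoginStillWorks() throws Exception {
            WebConversation wc = new WebConversation();

            // Fetch the login page and fill in the first form on it.
            WebResponse loginPage = wc.getResponse("http://localhost:8080/app/login");
            WebForm form = loginPage.getForms()[0];
            form.setParameter("username", "testuser");
            form.setParameter("password", "secret");
            WebResponse home = form.submit();

            // The sort of check a human forgets to repeat after every change.
            assertTrue("login did not reach the welcome page",
                       home.getText().indexOf("Welcome") != -1);
        }
    }
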
  • If it is Java... (Score:3, Informative)

    by Not_Wiggins ( 686627 ) on Friday September 05, 2003 @06:33PM (#6883827) Journal

    One word: JUnit [junit.org]
  • Retrofitting (Score:5, Informative)

    by jdybnis ( 4141 ) * on Friday September 05, 2003 @06:34PM (#6883832)
    This is something I've wrestled with too. Start where you'll get the most bang for your buck. Start with regression tests. I assume you're doing *some* testing (or at least your users are ;). When a problem shows up, make an automated regression test that surfaces that bug. Run it often and make sure the bug stays fixed.

    With a 300 KLOC codebase I have to ask: is it broken down into components that can be tested in isolation? If it is, congratulations, you've done some good software architecture. You can start by testing the interfaces to the components. Make a test that triggers each error condition from each interface function/object. The tests will seem braindead simple (like passing in a NULL when a valid pointer is expected), but these sorts of tests are surprisingly useful. Infrequently exercised error checking is one of the easiest things to let slip through the cracks when modifying an implementation. That will be enough to get your test framework set up, and shake out all the forgotten dependencies between your components. Then it will be straightforward to add more testing.

    It won't be easy. You should expect you'll have to modify your code to make it testable. But if you expect to keep this code around for a while, it will pay off in the long run.
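
    A sketch of the kind of "braindead simple" interface test mentioned above, JUnit 3 style; the ReportService here is a made-up stand-in for one of your own component interfaces:

    import junit.framework.TestCase;

    public class ReportServiceErrorTest extends TestCase {

        // Hypothetical component interface; in real life this is the code under test.
        static class ReportService {
            String render(String templateName) {
                if (templateName == null) {
                    throw new IllegalArgumentException("templateName must not be null");
                }
                return "<report template='" + templateName + "'/>";
            }
        }

        public void testNullTemplateIsRejected() {
            try {
                new ReportService().render(null);
                fail("expected IllegalArgumentException for a null template");
            } catch (IllegalArgumentException expected) {
                // This is exactly the error path that rots when nobody exercises it.
            }
        }
    }
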
  • Step by step (Score:5, Interesting)

    by pong ( 18266 ) on Friday September 05, 2003 @06:53PM (#6883969) Homepage
    1) *Every time* you discover a bug from now on, write a test case that exposes it. Then fix it.

    2) Write new functionality test-first. You are not allowed to implement new features unless you first implement a test that fails. Once it runs, you are either done, or you got ahead of yourself and need to go back to writing a few more tests :-)

    • You need to consider your development process like a chess game. You cannot go back and change the moves that don't make sense anymore. Instead, consider where you are now and move forward.

      Instead of trying to go back and write test cases for everything, do as pong suggested and move forward from where you are today. Since the code is already working there is no real need to write test cases for it. Doing so would be contrary to the philosophy of XP that you only do what is necessary to complete the

    • Disclaimer:

      I'm not an XP specialist, just yet another of those corporate guys who managed to introduce some XP practices (i.e. _automated_ testing) at the class level and at the component level.

      Yes, adding a test for each bugfix is good for new code/patches, and should absolutely be followed.

      However, I'm under the impression our good friend Mr Pleonastic is after some way to apply testing (whatever the scope of it) to legacy code.
      Our experience there is that, even for code recently written, trying to

  • Fuzzers (Score:3, Informative)

    by itwerx ( 165526 ) on Friday September 05, 2003 @07:37PM (#6884270) Homepage
    Get what the QA people call a "fuzzer".
    There are two general types (often bundled together with a few other things as a test suite).
    The first generates random keystrokes, and the second generates random data, either completely randomly or following some set of guidelines (field length, etc.).
    It still won't do everything that exposure to the real world will, but it'll get you a lot closer!
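
    If you can't get a packaged fuzzer, even a throwaway one helps. A bare-bones data fuzzer in Java might look like this (parseLine() is a made-up stand-in for whatever entry point you want to pound on):

    import java.util.Random;

    public class DumbFuzzer {

        // Stand-in for the real code under test: a clean rejection is fine,
        // anything else it throws is a bug worth investigating.
        static void parseLine(String line) {
            if (line.indexOf('\u0000') != -1) {
                throw new IllegalArgumentException("NUL character not allowed");
            }
        }

        public static void main(String[] args) {
            Random random = new Random(42); // fixed seed so failures are reproducible
            for (int i = 0; i < 10000; i++) {
                char[] chars = new char[random.nextInt(80)];
                for (int j = 0; j < chars.length; j++) {
                    chars[j] = (char) random.nextInt(128);
                }
                try {
                    parseLine(new String(chars));
                } catch (IllegalArgumentException expectedRejection) {
                    // fine: the input was rejected cleanly
                } catch (RuntimeException bug) {
                    System.err.println("fuzz input " + i + " blew up: " + bug);
                }
            }
        }
    }
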
  • redirect (Score:3, Informative)

    by bons ( 119581 ) on Friday September 05, 2003 @07:57PM (#6884376) Homepage Journal
    You're more likely to get people who know XP and TDD on http://groups.yahoo.com/group/extremeprogramming than you are on Slashdot, simply because the Yahoo group is focused on that very topic.
  • XP Recommends (Score:3, Informative)

    by Anonymous Coward on Friday September 05, 2003 @09:04PM (#6884727)
    What several posters have already suggested.

    Don't write tests for existing code just because it's there. Write tests for any code you have to change, and then do the change test-first.

    Initially, it's going to be hard because legacy code is usually highly coupled. If you pay attention to reducing coupling, over time your code base will start to improve.

    And get one of the xUnit clones for unit tests, and FIT (fit.c2.com) for acceptance tests.

    John Roth
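
    In case FIT is unfamiliar: customers write an HTML table naming a fixture class; FIT fills the public fields from each row and checks the value returned by the method column. A fixture looks roughly like this (the discount rule is invented for illustration):

    import fit.ColumnFixture;

    public class DiscountRules extends ColumnFixture {
        public long orderTotal;      // input column, in cents
        public boolean repeatBuyer;  // input column

        // Output column, written as "discount percent()" in the table header.
        public int discountPercent() {
            if (orderTotal >= 100000) {
                return repeatBuyer ? 15 : 10;
            }
            return repeatBuyer ? 5 : 0;
        }
    }
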
  • If there's something that's particularly hairy in your existing codebase, the next time you need to modify it, spend just a little time refactoring it first so you can make your modification more easily. You don't need to do a six-month rewrite, just a little bit of refactoring to remove local duplication and confusion. And remember this about refactoring: it's not just about improving the design of existing code by making careful transformations of it; it's about rigorous removal of duplication.

    Write un
  • Here are some of my observations having applied many forms of testing.

    One of the chief goals of XP is that units can, and should, be redesigned whenever any programmer identifies a unit that is poorly designed (contrast this to RUP, which is far less ad-hoc but still relies upon unit testing). This requires that your code be designed as components, and that each component have a well-defined contract.

    Based upon the brief bit you mentioned in your story, it sounds like integration testing is more applicab
  • > 300,000 lines of functionality

    If you have this much code, I bet there's some duplicated code in there. Ferret it out with CPD [sourceforge.net] and you'll have that much less code to write tests for.

    It probably wouldn't hurt to search for unused code [sourceforge.net] while you're at it - again, you'll reduce the amount of code you need to write tests for.
    • > If you have this much code, I bet there's some duplicated code in there.

      This is the second time in the thread (which is very informative, thank you everyone, I love you all, mwa) that someone has said that if I have 300K lines of code (it's Java), there's probably something wrong. This is a non sequitur to me, and it reinforces something I've wondered about for a long time: do people just write tiny programs out there?? It reminds me of the time I was working on a different project using C++ Builder--
      • > if I have 300K lines of code (it's Java),
        > there's probably something wrong

        Sorry, didn't mean to imply that. What I meant was that 300K LOC is such a large amount of code that probably some duplication will have crept in there despite your best efforts.

        I've worked on a system that had about 450K LOC and there was a ton of unused code/duplicated code in there - mostly because folks were afraid to clean it up due to a lack of unit tests.

        > I don't think 300 classes is a lot

        I certainly agree.
  • If you spend more time debugging test code than debugging your functionality ... then your API/abstraction of the desired functionality is wrong.

    For further development try this:
    Use use cases in textual form; "essential use cases" (you can google for that) are probably the easiest way in, in case you are not familiar with use cases so far.

    Further, use "scenarios" for defining certain things which can be done from the outside with your application. A scenario corresponds to a use case in that it o
  • As others have noted, retrofitting some sort of automated testing would be very hard, but it can be well worth it in the long run, even this late in development.

    The key ingredients of successful automated test suites I've seen are...

    • Some sort of macro language that:
      • describes the inputs your program accepts (from a UI or otherwise)
      • is recordable by your program
      • is replayable by your program;
    • Some sort of state record that can be:
      • generated at any given point in your program by a
  • In addition to the other good comments already posted (especially those regarding refactoring), let me suggest making increased testability part of the ongoing maintenance process. Specifically, every time a problem is reported, make the creation of a test that exposes the error the first order of business. (Of course, make it as fine-grained as possible, and refactor as necessary to achieve that focus.)
    Then use that test to guide the resolution of the problem. Iterate throughout the life of the project.
  • by Chacham ( 981 ) *
    Write a test document first.

    The biggest problem with testing is that people test what the code was programmed to do. However, testing should be based on the requirements, to make sure that it does what it was intended for. Also, if the test document is written before coding, it means people won't cut corners when writing that document (due to optimism), and it may help drive coders to code the really important parts first.
  • Blackbox testing (Score:2, Informative)

    by gtshafted ( 580114 )
    Try Jakarta's JMeter [apache.org].

    Instead of having to write tests for HttpUnit, you can simply record tests with JMeter. You also have the added benefit of JMeter's load testing capabilities. The only downside is that JMeter's UI isn't very mature (or intuitive, for that matter).

  • One thing unit testing is often spectacularly good at is pointing out where assumptions have been made and not spelled out. This often takes the form of "negative testing". (What happens if you add a NULL to that list? What happens if you try to access the -1th element in an array? What happens if you neglect to set an IP address?)

    What you'll likely find is that, in a number of spots, the refusal or assertion is not spelled out. Occasionally, it can take some mulling to figure out how to deal with those

  • by SKarg ( 517167 ) on Sunday September 07, 2003 @07:14AM (#6892178) Homepage

    I ran across an article a couple of years ago by Chuck Allison [freshsources.com] in C/C++ Users Journal about The Simplest Automated Unit Test Framework That Could Possibly Work [cuj.com]. It included test frameworks written in C, C++, and Java [192.215.71.136] and opened my eyes to taking best practices to the extreme. It also showed me how I could apply unit testing to my C code. You can download free Test Frameworks [xprogramming.com] (Test Suites) for other languages.

    Unit testing was the first XP key practice that I started to use. When I would have to make a change in my mature code, I would add a unit test section to the module I was changing (using #define TEST), and add a main() to execute the unit test (using #define TEST_MODULENAME). See examples of this on my software page [kargs.net]. I then began using test-first programming by writing the unit test first, seeing it fail, then writing just enough code to make it pass.

    Other extreme programming sites that have been useful are extremeprogramming.org [extremeprogramming.org], which has a great tutorial that includes an introduction and overview, and the site Extreme Programming [xprogramming.com].

  • Well, from the short description you gave, it seems to me you have some architectural/design problems.

    I would recommend some reading: first Refactoring [amazon.com], which talks about code "smells".

    One such smell is "Inappropriate Intimacy":

    [classes] spend far too much time delving in each other's private parts

    Recipes are given to resolve these kinds of problems. The nice thing about the book is that, because it talks about symptoms of problematic code, you will more easily understand where your problems occur and why.

    Next, read a good b

    • Talking about books, I recently bought Test Driven Development [amazon.com] by Kent Beck (XP proponent), and have not regretted it. If you need some help on learning to write unit tests, it has some really basic examples to help get you started, and is written in a very down-to-earth style.

      Other than that, I would add that writing tests is HARD, at least at first. It takes some getting used to, and you will look back on your first attempts and laugh/cry some day.

      The best thing to do is to stop talking, and start

  • ...Such as this [newtechusa.com]
  • Like a previous poster, I'd suggest "evolving" to testability - every time you fix a bug or add a feature, do it test-first.
    You will have to spend some time setting up the testing framework - a structure for your unit tests, the "non-code" stuff, and a way of finding out asap that you've broken a test.
    Depending on your environment, you could use something like AntHill [urbancode.com] or CruiseControl [sourceforge.net] to automatically run all your unit tests as part of a (timed) build process, and email the results. CruiseControl also allows
