Writing Unit Tests for Existing Code?
out-of-order asks: "I recently became a member of a large software organization which has placed me in the role of preparing the Unit Test effort for a component of software. Problem is that everything that I've read about Unit Testing pertains to 'test-driven' design, writing test cases first, etc. What if the opposite situation is true? This organization was writing code before I walked in the door and now I need to test it. What methodology is there for writing test cases for code that already exists?"
Re:Treat it like the code isn't written (Score:2)
<Maniacal Laughter>
Unit test the bugs you need to fix (Score:2, Insightful)
Re:Unit test the bugs you need to fix (Score:2, Informative)
Re:Unit test the bugs you need to fix (Score:3, Insightful)
I also find that once I get going, I end up throwing away or heavily refactoring a lot of legacy code anyway. So if I had written tests, I'd be throwing them out, too.
If you are heavily refactoring, it's probably worth putting in the effort to write the tests beforehand. Otherwise how can you be confident that your refactoring hasn't broken anything?
Re:Unit test the bugs you need to fix (Score:1)
And sometimes the dependencies and interfaces are so bad, you're better off building system/integration test coverage to support your refactorings.
Re:Unit test the bugs you need to fix (Score:2)
If your unit tests are testing observable behaviour as they should be, and your refactoring doesn't change observable behaviour at all as it shouldn't, then you don't need to throw away the tests. Similarly, if you're throwing away old code then presumably that's because you have new code that produces the same required behaviour, and the same tests should verify that too.
Re:Unit test the bugs you need to fix (Score:1)
However, my practical experience is that it's time-consuming to build unit tests for bad legacy code. So I tend to invest my time in broader-scope system/integration/functional tests for existing code.
I've also found that bad code tends to have many redundancies, unnecessary interfaces, "we'll need it someday" features, and "wouldn't it be cool" features.
code coverage profiling (Score:2)
Also require all new code to have matching tests, and set up automatic tests to slap developers who add code that doesn't get tested.
Good luck.
Test First is not a Test Strategy (Score:3, Interesting)
I don't see TestFirst as a Test Strategy, but as a design technique. Writing Tests first forces you to think differently about what you want to write.
This forces you to write testable code - writing tests afterwards does not force you to do that.
Of course, having the tests available later proves valuable for testing your application, but the tests' main purpose is to lead you to a testable design.
You'll most likely experience severe difficulties in adding Unit Tests to previously untested code. It might be easier to add acceptance tests (e.g. high-level scripts that utilize the application), especially if you want to cover more than small partitions of the application quickly.
Re:Test First is not a Test Strategy (Score:2)
A specification is just a statement of a testable way to tell if a program is behaving correctly or not; you can think of it as specifying a characteristic function for the set of input and output pairs of the program.
In set theory (and similar things) there's a notion of two distinct kinds of definitions for a set: the intensional and the extensional.
An intensional definition for a set specifies the characteristic function, like "the set of all even integers"; an extensional definition just lists the members, like {2, 4, 6, ...}.
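To make that concrete, here's a minimal Java sketch (my own illustration, not the parent poster's): the specification is a characteristic function over (input, output) pairs, and a test just samples the set it defines. The isqrt function and its spec are hypothetical.

    public class SpecAsPredicate {
        // Hypothetical unit under test: integer square root.
        static int isqrt(int n) {
            return (int) Math.floor(Math.sqrt(n));
        }

        // The intensional specification as a characteristic function:
        // true for exactly the correct (input, output) pairs.
        static boolean spec(int n, int r) {
            return r >= 0 && r * r <= n && (r + 1) * (r + 1) > n;
        }

        public static void main(String[] args) {
            // The test merely samples the set the spec defines.
            for (int n = 0; n < 10000; n++) {
                if (!spec(n, isqrt(n))) {
                    throw new AssertionError("spec violated at n=" + n);
                }
            }
            System.out.println("all sampled pairs satisfy the spec");
        }
    }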
Fake Ignorance (Score:4, Interesting)
Pretend the code hasn't been written yet and derive your tests straight from the requirements. Then if they fail the tests you'll have to discover if the requirements are wrong or if it's the code that's at fault. But at least you'll have something to start from - and you'll probably find some bugs they missed.
That's nice but... (Score:3, Interesting)
With requirements at a typical business level, you could have X totally different systems that meet them (and most of them better, given hindsight). And often that level is as much as you're going to get once the original team has left.
Anyway, recreating requirements at a detailed technical level could be a waste of time, because some module could be required to do something stupid by another module. Once you fix things all round, that requirement will be thrown out.
Re:That's nice but... (Score:2)
Not only are there usually no written specs, but the code will also have undocumented, obscure fixes for particular problems. If you don't know what the problem was, you can't test for it. It may be that under special circumstances, for a particular customer, certain things need to be done; ripping out that code to conform to some half-baked spec you just thought of will work for most customers and break for that one. This is so common in legacy code it's almost a law. Joel had a bit to say on this in his essay on why you shouldn't rewrite code from scratch.
do the simplest thing - write tests as u need them (Score:4, Informative)
With legacy code, you just have to start writing tests with the code as you go, writing tests for functionality that you need to understand or review. If you try to take X weeks up front to write test cases, you're doomed to fall behind and have obsolete tests by the time you're done.
Also, see Working Effectively With Legacy Code by Michael Feathers --> http://www.amazon.com/exec/obidos/tg/detail/-/013
Re:do the simplest thing - write tests as u need t (Score:1)
White Box Testing Utilities (Score:1)
It works kinda like this: First the tool parses everything and generates a ton of metrics. This will point you to the complex modules of the application. Then it breaks down each function/method in the module into its possible execution paths and turns this into a graph. By looking at the graph, you can see which paths still need a test case.
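To illustrate what "one test per execution path" looks like (my toy example, not output from any particular tool): a function with two independent branches has four paths through its graph, and each path gets its own case.

    public class PathCoverageDemo {
        // Hypothetical function with two independent branches => 4 paths.
        static int shippingCost(int weight, boolean express) {
            int cost = (weight > 10) ? 20 : 10;  // branch 1
            if (express) {
                cost += 15;                      // branch 2
            }
            return cost;
        }

        static void check(boolean ok, String path) {
            if (!ok) throw new AssertionError("path failed: " + path);
        }

        public static void main(String[] args) {
            // One case per path through the graph.
            check(shippingCost(5, false) == 10,  "light, standard");
            check(shippingCost(5, true) == 25,   "light, express");
            check(shippingCost(15, false) == 20, "heavy, standard");
            check(shippingCost(15, true) == 35,  "heavy, express");
            System.out.println("all four paths covered");
        }
    }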
Start With the Documented Requirements (Score:4, Insightful)
I spent several years managing a test team in a Fortune 100 company, and I have seen this situation many times (it's probably the norm, rather than the exception, in industry today).
Let the documented requirements for the code (or product) be your guide. Use those requirements to develop test cases, then design one or more tests that hit all of the test cases.
If there are no documented requirements, then you should ask yourself why you are working there. This situation usually leads to many arguments about what the code/product is really supposed to do, and you'll just become frustrated while you waste lots of time. It's not worth it.
No mod points but insightful (Score:1)
Run the tests and document the results.
Let someone else mod the specs
If n.c's third paragraph applies, you'll have to find a managerial ally who will support you as you re-work the design process and local culture to be a bit more rigorous and disciplined. It will be tough, but it can also be rewarding.
Re:Start With the Documented Requirements (Score:1)
Re:Start With the Documented Requirements (Score:5, Informative)
Re:Start With the Documented Requirements (Score:2)
Take a look at a book at by Michael Feathers (Score:1)
There is a book that covers this subject well: Working Effectively with Legacy Code by Michael Feathers.
http://www.amazon.com/exec/obidos/tg/detail/-/013
Look at Junit (Score:2)
Re:Look at Junit (Score:3, Interesting)
What kind of testing is that? You have to assume that the customer won't do everything right if you're going to find bugs. Just because you're using automated code testing, it doesn't mean that the unit tests themselves have been written correctly or that all the code works together perfectly. A good QA team needs to have the attitude that its job is to break the software.
Re:Look at Junit (Score:2)
My suggestion that random banging on the keyboard, pushing buttons, and unexpectedly closing windows would be a good thing was not appreciated because there was no way to write it up as a test plan, or describe it as a repeatable bug.
Re:Look at Junit (Score:4, Interesting)
In the video game industry, that's called button smashing. Programmers hated it because it meant that their input code didn't consider multiple buttons being pressed at the same time and, worse, it was usually time-dependent. Nintendo is very good at finding button smashing bugs.
Do you have automated testing tools available? (Score:4, Insightful)
We used automated regression testing scripts in the mainframe environment I worked in 12 years ago, and that made some aspects of unit testing relatively easy.
Unisys had a tool (TTS1100) which allowed us to record each online transaction entry and computer response and then play it back later, and that made it possible to perform the exact same tests dozens or hundreds of times if needed. We used to run them after each set of changes was applied to make sure nothing broke.
One could also record a single occurrence of a lengthy interactive sequence and then add things like variables and looping structures into the recorded script to automate the handling of various test cases using different values.
Such a tool makes after-the-fact test design a little bit easier because you can sit down and methodically address each and every variation of each and every input field on a given screen.
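For what it's worth, the same idea translates directly into a data-driven unit test; here's a rough sketch (the field and its validation rule are hypothetical), where the table plays the role of the recorded script with variables:

    public class ScreenFieldVariations {
        // Hypothetical transaction under test: validates an amount field.
        static boolean acceptsAmount(String raw) {
            try { return Integer.parseInt(raw) > 0; }
            catch (NumberFormatException e) { return false; }
        }

        public static void main(String[] args) {
            // Every variation of one input field, with expected response.
            String[][] cases = {
                {"1",      "true"},   // minimum valid value
                {"999999", "true"},   // large valid value
                {"0",      "false"},  // boundary: zero
                {"-5",     "false"},  // negative
                {"",       "false"},  // empty field
                {"abc",    "false"},  // non-numeric
            };
            for (String[] c : cases) {
                boolean got = acceptsAmount(c[0]);
                if (got != Boolean.parseBoolean(c[1])) {
                    throw new AssertionError("input '" + c[0] + "': got " + got);
                }
            }
            System.out.println("all field variations behave as recorded");
        }
    }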
Of course, the nature of the software you're using might make that sort of thing more difficult, or perhaps even easier.
I've never been able to do up-front unit test design -- specifications can change rather quickly when doing in-house software development, and the overall environment is a lot more dynamic than a typical "software house" environment would be where one always has formal detailed product specs to code to. We're often writing code based on an e-mail or on a couple of phone conversations.
Unit testing is not a goal (Score:2, Interesting)
If you are testing a component, try to figure out whether it is possible to test it in some schematic way. If you can figure out a way for the "business" people to write the tests for you, that will take a lot of the load off your shoulders.
If it is an existing component maybe you could explore if it is possible
Re:Unit testing is not a goal (Score:1)
I couldn't say it any better. If you want to make sure that functionality stays intact while you change things, write system tests. If you want to use code in a different context, document it and write unit tests first.
Whatever you do, you need some "absolute reference" to find out what is right and what is a bug. Tests are no good if they just preserve old bugs for eternity.
Welcome to the real world (Score:2)
In most IT shops, I'm sorry to say, test cases are a low priority and almost always come after the code is complete or nearly complete.
If you're looking for a "methodology" for creating unit test cases, I think you're overthinking the problem. You need to create a set of test cases that assure you that the unit is working in and of itself. There are a number of things you can do to accomplish that:
1. Look at the design document.
See what the design document says the unit is supposed to do.
Re:Welcome to the real world (Score:2)
This goes for unit testing as much as it does for integration testing. If the design hasn't been (entirely) prescribed, then it's going to be pretty much (at least partially) invented at coding time -- meaning int
Whhhaaaa! (Score:4, Insightful)
Every testing job I've ever had we've had ZERO documentation. NADA. ZIP.
How do we survive? WE TEST. We put down the book (like we had one to begin with) and we test. Surely you have a server somewhere running dev-level code (at least), and you start poking around. Sure, it's less than ideal, but you deal with it. And you bitch about how crappy it is and how it goes against all the principles of so-called 'real world' methodologies.
The thing is, this is how the real world does it.
Sure, in a perfect world, everyone has their shit in order. But in a perfect world we're not all competing against code monkeys working for 1/10th of what we make and that live in a 3rd world country.
Re:Whhhaaaa! (Score:2)
It seems there are very few good programmers. If you're going to get crap anyway, better to pay 1/10th for it.
Not saying that I'm a good programmer. Far from it. But heh, even I can do better than the crap I saw.
With just the fixes/rearchitecting of some recent code, I might have justified most (if not all) of my salary.
Re:Whhhaaaa! (Score:2)
I had one developer who got very pissed at me because I did things outside of the test script and caused some errors. He was like "That's not how you're supposed to do it" and then showed me that it worked if you did it like the test script. I was like "Ummm, I don't think you can count on end users to do it exactly the way the test script says."
Re:Whhhaaaa! (Score:1)
Reverse the tables and watch them blow their lids.
Of course, they can't think outside the box (that doesn't apply to all developers, but it does to a lot of the ones I deal with).
Hopefully your manager realized the developer was/is an idiot and told HIS manager as much.
Fortunately, I've got a great manager (former developer) who knows what's what when it comes to testing.
Re:Whhhaaaa! (Score:2)
("We don't tell you guys how to code, do we?" -- well, actually, I make coding/fix suggestions once in a while, but I have some RW coding experience.)
Sorry to hear you had to face the developer-tester impasse in front of your boss before you'd had it explained to h{im|er} beforehand.
Developers' job is to understand how to make the product, testers' job is to understand how it will be used.
Re:Whhhaaaa! (Score:4, Insightful)
You can't possibly ensure that the application does what it's supposed to if no one can communicate to you what that entails. Imagine testing a house by spraying it with water, banging on the windows, and tromping on the lawn. Those all sound like good things, until the future owner tries to open the front door, and can't.
Make sure anything changed is tested (Score:4, Insightful)
A few open source projects have found themselves in the same situation as you, and they seem to work by 3 rules:
1) If you change any code at all which doesn't have a test, add a test.
2) If you find a bug, make sure you add a test that fails before the fix and passes after it (see the sketch at the end of this comment).
3) If you are ever wandering around trying to understand some code, then feel free to write some tests.
One thing I will say is to try very hard to keep your tests organised. Keeping them in a very similar directory structure to the actual code is helpful. Without this it's very hard to tell what has and hasn't got a test.
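A minimal JUnit-style sketch of rule 2 (the bug, the class, and all names here are hypothetical stand-ins): the test is written so it fails on the buggy code and passes once the fix is in, which both documents the bug and guards against its return.

    import junit.framework.TestCase;

    public class PricingRegressionTest extends TestCase {
        // Minimal stand-in for the real class under test (hypothetical).
        static class Pricing {
            double apply(double price, double discount) {
                // The bug was a negative price on a 100% discount;
                // the clamp below is the fix this test guards.
                return Math.max(0.0, price * (1.0 - discount));
            }
        }

        public void testFullDiscountDoesNotGoNegative() {
            // Failed before the fix, passes after it.
            assertEquals(0.0, new Pricing().apply(19.99, 1.0), 0.001);
        }
    }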
My Experience... (Score:5, Informative)
1) A lot of people are going to tell you that you need to write your tests from scratch. That you should assume that your code is broken and work out the expected results by hand and create the test assertions accordingly. I disagree [benrady.com]. If you're testing old code, it's much more useful to use the test to ensure that it does whatever it did before, instead of ensuring that it's "correct". I prefer to treat the code as though it is correct and build the tests around it (a sketch follows at the end of this comment). Even if the assumption is occasionally wrong, you can write the tests much more quickly this way. That allows you to refactor and extend your system with confidence, knowing that you haven't broken anything. Remember, TDD isn't really about quality assurance; it's about design, and evolving design through refactoring. More tests == more refactoring == better system.
2) You're probably not going to get a lot of extra time to sit around and write tests. You need to capitalize on the time that you have and turn problems into opportunities to add tests. Whenever you find a bug, make a test that reproduces it. If you need to add supporting stub or mock objects, consider making them reusable so that future tests will be easier to write.
3) If you need to add new functionality to the system, just follow the standard TDD steps of Test->Code->Refactor, and make sure that you add tests for anything that might be affected by the change.
4) I'm assuming that you already have a continuous integration build that runs the tests, but if you don't, make one. Now. Also consider adding other metrics to the build, like code coverage (we use Emma), FindBugs, and JDepend. These will help you track your progress and can be very useful if you have to defend your methodology to people who view TDD as a waste of time (the code-coverage-to-open-bugs ratio gets them every time).
5) In general, you need to look for opportunities to write tests. Don't understand how a module works? Write a test for it. Found a JDK bug? Reproduce it with a test. Performance too slow? Use timestamps to ensure that the performance of an algorithm is in a reasonable range.
You've probably got a long road ahead, but it's worth the work. Keep at it, and good luck.
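Here's what point 1 can look like in practice, sometimes called a characterization test (the formatter below is a hypothetical stand-in): the expected value is whatever the code produced when the test was written, not a hand-derived "correct" answer.

    import junit.framework.TestCase;

    public class FormatterCharacterizationTest extends TestCase {
        // Hypothetical legacy method, quirks and all.
        static String formatName(String first, String last) {
            return last.toUpperCase() + ", " + first;
        }

        public void testCurrentBehaviourIsPinnedDown() {
            // "SMITH, john" may or may not be "correct", but it is what
            // the code does today; the test pins it down so a refactoring
            // can't change it silently.
            assertEquals("SMITH, john", formatName("john", "smith"));
        }
    }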
Re:My Experience... (Score:1)
Pick up a book by Boris Beizer (Score:2)
You want to do path coverage, statement coverage, bounds checking on the inputs, error conditions, that kind of stuff.
Pick up a book by Boris Beizer, read his stuff and ignore everyone else. I've been in QA and Test for almost twenty years now, Beizer is -the man- to read about testing. If you're really desperate
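As a taste of the bounds checking mentioned above (my own toy example, not Beizer's): test at, just inside, and just outside each boundary of the input domain.

    public class BoundaryChecks {
        // Hypothetical validator: age must be in [0, 120].
        static boolean validAge(int age) {
            return age >= 0 && age <= 120;
        }

        public static void main(String[] args) {
            // {input, expected} pairs clustered around each boundary.
            int[][] cases = {
                {-1, 0}, {0, 1}, {1, 1},       // around the lower bound
                {119, 1}, {120, 1}, {121, 0},  // around the upper bound
            };
            for (int[] c : cases) {
                if (validAge(c[0]) != (c[1] == 1)) {
                    throw new AssertionError("boundary case failed: " + c[0]);
                }
            }
            System.out.println("boundary cases pass");
        }
    }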
What about ... (Score:3, Funny)
Re:What about ... (Score:1)
Searching the web on 'Junit VB' you should be able to find test harness freeware.
If there's the possibility of modifying the code organization, that would help out a lot, obviously, because then you could make smaller functions doing more sharply defined things, which would presumably be easier to test.
If not perhaps assume you can't test more of the existing code except at
Re:What about ... (Score:1)
Re:What about ... (Score:2)
Hmmm... (Score:2)
No such thing as too many (Score:2)
Re:No such thing as too many (Score:1)
True enough, but if you've written the code in good abstracted OO with the "special rigging" (ie, mock objects) in mind, you'll be much better off. I'd say there's very little complex code that doesn't require mock objects or the equivalent to test, so it's something well worth learning. As you say, once you've got all the groundwork set up, it's much easier to extend.
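A hand-rolled version of that special rigging, under hypothetical names: the unit depends on an interface, and the test substitutes a canned fake so the unit runs in isolation.

    import junit.framework.TestCase;

    public class MockRiggingTest extends TestCase {
        // The abstraction the unit depends on (hypothetical).
        interface RateSource {
            double rateFor(String currency);
        }

        // Unit under test: works against the interface, not a live server.
        static class Converter {
            private final RateSource rates;
            Converter(RateSource rates) { this.rates = rates; }
            double toUsd(double amount, String currency) {
                return amount * rates.rateFor(currency);
            }
        }

        public void testConversionUsesTheRate() {
            // The mock: a canned answer; no network, no database.
            RateSource mock = new RateSource() {
                public double rateFor(String currency) { return 2.0; }
            };
            assertEquals(20.0, new Converter(mock).toUsd(10.0, "EUR"), 0.001);
        }
    }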
What I would do (Score:1)
2. Start writing unit tests from the lowest-level functions: those that use barebones system libraries, and move upwards from there. Do not go too far - it's unit (!) testing, not general or regression testing.
3. Ideally you should test all possible paths in functions. Apparently that is not feasible for large functions (which is one of the reasons functions should be small). You should try to create a test to hit every condition.
Unit test == Code review (Score:2)
And this is actually OK, because it simply means that the "Unit test creation" process is actually a detailed code review process. Expect to find far more bugs from looking at the code than from running the tests.
Re:Unit test == Code review (Score:2)
Code always does what it says it does. The problem is determining what the code should be doing and testing for that.
Re:Unit test == Code review (Score:2)
Yes. Absolutely. 100%.
Until someone changes it.
When someone changes the code, it can be very enlightening to see the cascade of side-effects (especially if you thought there weren't going to be any!) Having unit tests that merely document the current behavior of code in a format that a machine can rapidly reproduce from the source code (compile unit test, run unit test, compare output to old output) can help you identify... Regressions! Very handy.
The main wa
Re:Unit test == Code review (Score:2)
"Until someone changes it."
No. It still does exactly what it "says" it does, it just "says" something different after the code has been modified.
Regression testing can be very handy as long as the tests reflect what the code is supposed to do. Otherwise passing the regression simply means that your code is consistently failing to achieve its requirements.
Re:Unit test == Code review (Score:1)
Re:Unit test == Code review (Score:2)
Well, there are times when looking at the code is useful but I never claimed you needed to do that to test it. All I'm saying is that a test should be devised to determine if its requirements are met.
"What if it segfaults for some input? Was it supposed too? I've seen code that was supposed to seg fault!"
As a former Atari 2600 programmer, I can appreciate the idea that correct behavior may be quite unconventional (such as performing an index
Re:Unit test == Code review (Score:2)
My only point, and I feel I was abundantly clear on this, is that regression tests provide a record of what source code used to do.
If you like what it used to do, then if the output is different, you can bet you did something wrong.
If you didn't like what it used to do, you can use the output to try to figure out if you changed the parts you wanted to. ("Passing" a regression test is a bad thing, if your test captured the fact that you used to have
Re:Unit test == Code review (Score:2)
Re:Unit test == Code review (Score:2)
Now what?
Regression tests can help pinpoint the results of code changes - for good or bad. If the old behavior was what you wanted, then you undo your changes.
Again - are you being purposefully obtuse?
Re:Unit test == Code review (Score:2)
I don't see any reason why legacy code is less likely to have requirements than new code. In any case, if the requirements are not updated with the code, then you have a flaw in your development process.
You can certainly perform regression tests even if your process is flawed in other ways; I'm not disputing that.
"Again - are you being purposefully obt
Re:Unit test == Code review (Score:2)
That's what we're all talking about here.
You've ignored that from the beginning, and that's what I keep pointing out again and again.
Re:Unit test == Code review (Score:2)
Re:Unit test == Code review (Score:2)
This statement is the crux of our disagreement. I assert that having regression testing is handy, regardless of whether the code is doing what it is supposed to be doing, whether the tests reflect what the code is supposed to do, or basically any other factor.
There are a few factors which make them valuable to me, even in the worst of conditions:
1) It provides kind of a minimal environm
Re:Unit test == Code review (Score:2)
Re:Unit test == Code review (Score:2)
Even a bad regression test is better than no regression test.
Working effectively with legacy code. (Score:1)
General testing philosophy (Score:1)
Many replies here are along the lines of "we don't have any documentation" - well, in that case, you can't do truly meaningful testing. You can test what you think the code should do, but that'll always be a guess.
Re:General testing philosophy (Score:2)
Documenting legacy code that doesn't have test cases without at the same time building a test suite is a nightmare.
Re:General testing philosophy (Score:1)
With no documentation of the existing code, you can write as many tests as you like, but you still can't prove that the code performs as it was originally intended to - you can only prove that the code does what the code does.
Granted that this is of some use for the purpose of creating a regression test suite, but it doesn't alter the fact that your tests can't, by definition, find bugs in the existing code.
I recommend this article (Score:3, Informative)
A recent article in print about automated unit tests for legacy code was:
"Managing That Millstone" by Michael Feathers, Software Development, January 2005.
http://www.sdmagazine.com/documents/s=9472/sdm0501c/sdm0501c.html [sdmagazine.com]
It included suggestions for how to inject unit tests into code which isn't loosely coupled, some tips on how to refactor to get loosely coupled interfaces, & what you can do when neither of those approaches will work. It was a valuable & enjoyable read for me, at least.
gene
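In the same spirit, one common refactoring for code that isn't loosely coupled (a sketch under hypothetical names, not necessarily the article's own example) is to extract an interface over a hard-wired dependency so a test can reach in:

    // Before (untestable): the method news up its collaborator.
    //     class ReportJob {
    //         void run() { new SmtpMailer().send(buildReport()); }
    //     }

    // After: the dependency arrives through the constructor (the seam),
    // so a test can supply a recording fake instead of a mail server.
    interface Mailer {
        void send(String body);
    }

    class ReportJob {
        private final Mailer mailer;
        ReportJob(Mailer mailer) { this.mailer = mailer; }
        String buildReport() { return "report"; }  // stand-in body
        void run() { mailer.send(buildReport()); }
    }

    class RecordingMailer implements Mailer {
        String lastBody;
        public void send(String body) { lastBody = body; }
    }

    class SeamDemo {
        public static void main(String[] args) {
            RecordingMailer fake = new RecordingMailer();
            new ReportJob(fake).run();
            if (!"report".equals(fake.lastBody)) {
                throw new AssertionError("report was not sent");
            }
            System.out.println("seam works; sent: " + fake.lastBody);
        }
    }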
Unit Testing not a goal by itself. (Score:1)
If you are doing unit testing on finished code... (Score:2)
Unit testing is for finding bugs early on (preferably design errors, but also coding errors).
If the code is already written and works, then it's not likely to be worth the effort to add random unit tests all over the place. What you need then is either (a) stress testing, to discover hidden bugs, or (b) regression tests, to make sure the software keeps working, even after programmers have "improved" upon it.
Re:If you are doing unit testing on finished code. (Score:2)
Once upon a time (Score:2)
Even worth learning ADL (:-))
--dave
thats simple really (Score:2)
You are so screwed (Score:2, Insightful)
You are so screwed. Writing tests for untested code is a thankless job. You are going to find so many bugs, and everyone is going to get really pissed off about that new hire that is rocking the boat complaining about "quality problems".
You are in a no win situation. They will tell you your tests are too picky, that no one will use it like that. Unit testing is thankless, you can't argue. Given that there was no test plan, I bet there isn't even a spec! Where there is smoke there is fire.
I'd sta
Re:Dial 922 for the WAAmbulance (Score:2)
Read "Working Effectively with Legacy Code" (Score:2, Insightful)
Black box the hell out of them. (Score:2)
Make stubs or some other kind of testing tools, hammer data into them, and examine the data coming out.
I guess I don't quite follow the question. You're actually in a better position with already-developed code, for the simple reason that if these things are already developed, they already have a defined purpose (whether that was defined before or after the fact). Your only problem is working out what that purpose is.
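A rough sketch of that hammering (the unit and its invariant are hypothetical): push a pile of generated data through and check properties of whatever comes out, without looking inside.

    import java.util.Random;

    public class BlackBoxHammer {
        // Hypothetical unit under test: clamp a value into [lo, hi].
        static int clamp(int v, int lo, int hi) {
            return Math.max(lo, Math.min(hi, v));
        }

        public static void main(String[] args) {
            Random rnd = new Random(42);  // fixed seed => repeatable run
            for (int i = 0; i < 100000; i++) {
                int lo = rnd.nextInt(100) - 50;
                int hi = lo + rnd.nextInt(100);
                int out = clamp(rnd.nextInt(1000) - 500, lo, hi);
                // Examine what comes out: it must land inside the range.
                if (out < lo || out > hi) {
                    throw new AssertionError("out of range: " + out);
                }
            }
            System.out.println("hammered 100000 inputs; all outputs in range");
        }
    }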
Re Writing Unit Tests for Existing Code? (Score:1)
1. Identify "enduring business themes" in the code. This basically means a group of code that can be predictably tested by feeding in certain input and expecting certain output. For example, you know that if you order two pens, a purchase order for two pens will come out the other end.
2. Once you establish a few of these scenarios, you can write a few high-level unit tests (sketched below). These will help you ascertain whether the behaviour you depend on still holds.
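A sketch of such a theme-level test (the entry point and message format here are hypothetical): feed the known input in one end and assert on what comes out the other, without caring what happens in between.

    import junit.framework.TestCase;

    public class OrderThemeTest extends TestCase {
        // Stand-in for the component's real entry point (hypothetical).
        static class OrderSystem {
            String submit(int quantity, String item) {
                return "PO: " + quantity + " x " + item;
            }
        }

        public void testTwoPensYieldAPurchaseOrderForTwoPens() {
            // The enduring business theme: order in, purchase order out.
            assertEquals("PO: 2 x pen", new OrderSystem().submit(2, "pen"));
        }
    }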