Why We Refactored JUnit

Bill Venners writes "In this article, three programmers tell the story of how their frustration with JUnit's API led to the creation of Artima SuiteRunner, a free, open source test toolkit and JUnit runner. These programmers simply wanted to create a small add-on tool for JUnit, but found JUnit's design non-intuitive and its API documentation poor. After spending time reading through JUnit's source code and attempting to guess at the API contracts, they gave up and rewrote it."

  • I wish... (Score:5, Funny)

    by Anonymous Coward on Monday January 27, 2003 @11:10PM (#5172026)
    I could refactor my unit.
  • hrm.... (Score:5, Funny)

    by xao gypsie ( 641755 ) on Monday January 27, 2003 @11:11PM (#5172030)
    why does this sound like the beginning of a fantasy story about programming: "in the beginning was Ilúvatar, the master of the JU API........then Melkor, wishing to expand it...."

    xao
  • by Anonymous Coward on Monday January 27, 2003 @11:15PM (#5172057)
    One of the biggest problems for a testing framework is fitting all the tests you need in it. Data types such as XML make it really, really difficult.

    For example, you have a class that generates an XML document and you want to test it. Which level do you run the tests at? Best to test all levels, of course. How then do you check the accuracy of intermediate results? The answer often is that you can't; the intermediate results are meaningless until you have the complete (or at least almost complete) infrastructure to interpret the data. But once you've run all that, it is very difficult to pinpoint the source of the problem.

    I'm glad that Artima SuiteRunner provides a way to handle all of that complexity and write unit tests that are complete but tractable. I just wish it worked with Perl, since I don't use Java anyway.
    • by Anonymous Coward on Monday January 27, 2003 @11:24PM (#5172094)
      Data types such as XML make it really, really difficult.

      My big challenges are (1) existing code that wasn't designed for unit testing, (2) testing with databases, (3) testing that involves user interaction and (4) testing that involves multi-tasking. I like unit testing, but it is hard to apply to every task.

      • My biggest problem is writing code that works in the first place so that you can even test it ;)
        • by Anonymous Coward on Tuesday January 28, 2003 @01:38AM (#5172660)
          My biggest problem is writing code that works in the first place so that you can even test it ;)

          Ah, you do not understand the zen of XP yet. First you write the test. Then you write the code. Then you correct the code so the test passes. The sun is warm; the grass is green.
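
          (To make the koan concrete, here's a minimal test-first sketch in JUnit 3.x style; Money and its methods are invented for illustration, not from any real project:)

            import junit.framework.TestCase;

            // Step 1: write the test first. It fails (won't even compile)
            // until Money exists.
            public class MoneyTest extends TestCase {
                public void testAdd() {
                    Money two = new Money(2);
                    Money three = new Money(3);
                    assertEquals(5, two.add(three).amount());
                }
            }

            // Step 2: write the simplest Money that makes the test pass.
            class Money {
                private final int amount;
                Money(int amount) { this.amount = amount; }
                Money add(Money other) { return new Money(amount + other.amount); }
                int amount() { return amount; }
            }

          (Then you refactor, keeping the bar green.)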

          • by Anonymous Coward
            The sun is warm; the grass is green.

            But you, poor Grasshopper, do not truly understand Zen.
          • by thempstead ( 30898 ) on Tuesday January 28, 2003 @04:16AM (#5173060)
            Then you correct the code so the test passes.

            nonono ... you correct the test so the code passes ...

          • Ah, you do not understand the zen of XP yet. First you write the test. Then you write the code. Then you correct the code so the test passes.

            That's extreme programming? Good thing I didn't spend time reading up on this new methodology since I've been doing that for 10+ years. Not for speed, mind you, just to make sure that delivery matches desire.

            • That's extreme programming? Good thing I didn't spend time reading up on this new methodology since I've been doing that for 10+ years.

              You'd probably love XP then. A lot of the XP methodologies (with the possible exception of Pair Programming) make heaps of sense, and it's extremely nice to have them all in one tidy package that you can sell to management.


      • (3) testing that involves user interaction

        That's what Expect is for.

        (2) testing with databases

        Make sure there's static data that is used for regression-testing changes.

        (1) existing code that wasn't designed for unit testing

        No such thing. If it can't be unit tested, then it's really, really bad code, the like of which no one has seen before.

        Unit testing isn't the be-all and end-all, but these complaints don't make sense.

        • > (2) testing with databases
          >
          > Make sure there's static data that is used for regression-testing changes

          Testing against databases has been one of the largest headaches where I work. When you have many developers running tests at the same time, as well as continuous integration running the tests every hour, maintaining data consistency can be a real headache.

          In the end we opted for a separate test instance that only continuous integration uses. This contains static data as you mentioned, but where possible it is still best to use the setUp() and tearDown() capabilities. We built a framework that would allow test developers to simply supply SQL scripts for the set-up and tear-down process.

          Developers are still left sharing a database for their tests, and to solve this we simply had to be careful to write tests that were independent of certain properties of the data (such as the number of records returned by a query), and to ensure that each test restores appropriate data in its tearDown sequence. Sometimes tests can still fail when two developers run at the same time, but it is less likely, and at least the database finishes in a consistent state.

          Phew!
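
          (A sketch of that SQL-script approach; the JDBC URL, script paths, and the runSqlScript helper are invented stand-ins for the poster's actual framework:)

            import java.io.BufferedReader;
            import java.io.FileReader;
            import java.sql.Connection;
            import java.sql.DriverManager;
            import java.sql.Statement;
            import junit.framework.TestCase;

            public class OrderQueryTest extends TestCase {
                private Connection conn;

                protected void setUp() throws Exception {
                    // Assumes the JDBC driver is already registered.
                    conn = DriverManager.getConnection("jdbc:mydb://testhost/testdb", "test", "test");
                    runSqlScript(conn, "sql/order_query_setup.sql");    // insert this test's rows
                }

                protected void tearDown() throws Exception {
                    runSqlScript(conn, "sql/order_query_teardown.sql"); // restore the shared data
                    conn.close();
                }

                public void testFindsPendingOrders() throws Exception {
                    // Assert on properties of known rows, not on total counts
                    // that other developers' runs might change underneath us.
                    Statement s = conn.createStatement();
                    assertTrue(s.executeQuery(
                        "SELECT id FROM orders WHERE status = 'PENDING' AND id = 4242").next());
                }

                private void runSqlScript(Connection c, String path) throws Exception {
                    BufferedReader in = new BufferedReader(new FileReader(path));
                    StringBuffer sql = new StringBuffer();
                    String line;
                    while ((line = in.readLine()) != null) sql.append(line).append('\n');
                    in.close();
                    // Naive split: assumes no semicolons inside string literals.
                    String[] stmts = sql.toString().split(";");
                    Statement s = c.createStatement();
                    for (int i = 0; i < stmts.length; i++)
                        if (stmts[i].trim().length() > 0) s.executeUpdate(stmts[i]);
                    s.close();
                }
            }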

          • Have you tried using mock objects? If that's not a familiar term, it just means you create some kind of stub object to test with instead of a real database. Depending on an infrastructure for unit testing can be a pain. Been there, done that, got the "database is down" t-shirt.

            -Kevin
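
            (For readers unfamiliar with the term, a minimal sketch; UserStore and UserService are invented for illustration:)

              import java.util.HashMap;
              import java.util.Map;
              import junit.framework.TestCase;

              // The code under test depends on an interface...
              interface UserStore {
                  String lookupEmail(String userId);
              }

              class UserService {
                  private final UserStore store;
                  UserService(UserStore store) { this.store = store; }
                  String emailDomain(String userId) {
                      String email = store.lookupEmail(userId);
                      return email.substring(email.indexOf('@') + 1);
                  }
              }

              // ...and the test substitutes a canned in-memory implementation
              // for the real database.
              public class UserServiceTest extends TestCase {
                  public void testEmailDomain() {
                      final Map canned = new HashMap();
                      canned.put("u1", "alice@example.org");
                      UserStore mock = new UserStore() {       // no database required
                          public String lookupEmail(String userId) {
                              return (String) canned.get(userId);
                          }
                      };
                      assertEquals("example.org", new UserService(mock).emailDomain("u1"));
                  }
              }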

        • by Mike_R ( 21485 )
          Unit testing databases can be simplified by using DBUnit [sourceforge.net].
          With DBUnit, you can describe your database in XML, and each time the tests are run, DBUnit creates a new test database starting from the XML files. So you always test on a clean database. Check it out!
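
          (If memory serves, DBUnit's flat XML dataset maps element names to tables and attributes to columns, roughly like this; the table and column names here are made up:)

            <?xml version="1.0"?>
            <dataset>
                <orders id="1" status="PENDING" total="9.99"/>
                <orders id="2" status="SHIPPED" total="5.00"/>
                <customers id="7" name="alice"/>
            </dataset>
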
        • No such thing [as code that wasn't designed to be unit tested]. If it can't be unit tested, then it's really, really bad code, the like of which no one has seen before

          Well, that's a pretty gratuitous oversimplification. Obviously, you've never written a lick of code that has to deal with I/O, GUIs, or multi-threading. How exactly do you propose to unit test a low-level class that formats a drive? How about your GUI? What about a class that talks to DNS servers? Or hosts an FTP site? Are you going to create/delete a thousand files on your drive, just to do your test? Are you going to write up a bunch of dummy/proxy network clients to test your socket traffic?

          At some point, once you've gained a little coding experience, you'll learn that if you have to write more testing code than code being tested, then chances are your testing code has more bugs than the original code. You'll realize that there are some cases, albeit isolated ones, where the effort involved in writing unit tests for a particular component outweighs the potential time savings compared to testing it manually and leaving it alone.

          Intelligent, fine-grained class factorization can minimize the occurrence of these classes, but nevertheless, they'll come up eventually.

          • Number one, I've been coding for over 15 years. I have plenty of experience. There is still no such thing as code that can't be unit tested. Once YOU gain a little more experience, you'll realize that unit testing is about testing each subroutine as you write it. Yes, there can be some difficulties with certain types of code, and it can be nerve-wracking adding unit testing after the fact, but it CAN be done.


            I've written plenty of code dealing with I/O, GUIs, and multithreading. And I'm smart enough to devise unit tests for all of those applications. Do I always use unit testing? No. Is it possible to add unit testing to ANY code? Yeah, it is. Talk to me when you've had a few more years under your belt.

      • My big challenges are (1) existing code that wasn't designed for unit testing, (2) testing with databases, (3) testing that involves user interaction and (4) testing that involves multi-tasking. I like unit testing, but it is hard to apply to every task.

        Unit testing is a limited tool that can be very useful for certain things. Some of its limitations can be overcome by spending time writing elaborate test fixtures, but many (like 3 and 4 above) cannot.

        I guess my point is -- don't be afraid to write stupid, obvious unit tests. The point of unit testing is that it allows you to make massive, frightening changes to your code -- and then ensure that, as far as the outside world is concerned, your module still behaves the same way.

        After I got some experience with it, I simply started thinking of it as just another tool in our test suite, along with the Robot-based GUI tests and more complex, human-assisted test code.
    • by Anonymous Coward
      It's called XMLUnit (Google it in 2 seconds). Works great.
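
      (From memory, XMLUnit's JUnit integration looks roughly like this; the class and method names should be checked against the project docs, and generateInvoiceXml is a made-up stand-in for the code under test:)

        import org.custommonkey.xmlunit.XMLTestCase;

        public class InvoiceXmlTest extends XMLTestCase {
            public InvoiceXmlTest(String name) { super(name); }

            public void testGeneratedDocument() throws Exception {
                String expected = "<invoice><total>42</total></invoice>";
                // assertXMLEqual compares structure, not raw strings, so attribute
                // order and insignificant whitespace don't cause false failures.
                assertXMLEqual(expected, generateInvoiceXml());
            }

            // Stand-in for the real generator being tested.
            private String generateInvoiceXml() {
                return "<invoice><total>42</total></invoice>";
            }
        }
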
    • Which level do you run the tests at? Best to test all levels, of course. [...] But once you've run all that, it is very difficult to pinpoint the source of the problem.

      It sounds like your unit tests are too coarsely grained, possibly because your APIs are too monolithic. Try breaking things down some.

      Generally, I write a couple lines of test code and then a couple lines of production code to make it pass. Then I rarely have problems testing the things I want.

      Feel free to drop me a line if you have questions; I've done a lot of testing of XML and HTML.
  • The truth about XP (Score:5, Interesting)

    by tundog ( 445786 ) on Monday January 27, 2003 @11:20PM (#5172074) Homepage
    I find this quite amusing. I recently attended a talk in Montreal given by Kent Beck, one of the eXtreme Programming icons. One of the key elements of XP, as I understood his talk, is cutting down what he calls excess 'inventory', which includes things such as excess documentation, architecture, etc. By keeping these inventory elements to a minimum, you get quicker feedback, because you have quicker results, which feeds back into the process and allows you to respond quickly to changing requirements.

    Well, it also happens that JUnit was developed à la XP by Kent and one other guy (didn't pay attention to who, go look it up!). For a while I was thinking that this XP thing might actually be something, but after skimming through the first 5 chapters of 'Planning XP', coupled with the statements concerning the JUnit API, I'm starting to think that XP really is just one big hot air balloon.

    In his defense though, he did say that refactoring often was a GOOD idea. It's just that he didn't say that you should wait for someone else to do it for you.

    My 2 cents.

    • by Anonymous Coward on Monday January 27, 2003 @11:29PM (#5172120)
      I'm surprised, it almost seems like you didn't read much of the information on the website... like you... maybe... just read the Slashdot headline and then formed your opinion... because here's what the guy who rewrote JUnit said:

      "at one point I just threw up my hands and said to myself, "Why is this so hard? It would be easier to rewrite JUnit than figure it out."

      Just some friendly advice: if you ever find yourself saying it would be easier to rewrite something rather than figure it out, slap yourself. Chances are you are wrong. I certainly was. Creating SuiteRunner was a huge amount of work. Despite my frustrations with JUnit, it would have been orders of magnitude easier to decipher JUnit's source code than to create a brand new testing toolkit."
      • by babbage ( 61057 ) <cdevers@cis.usou ... minus herbivore> on Tuesday January 28, 2003 @12:39AM (#5172429) Homepage Journal
        On the other hand, there's Fred Brooks' advice from The Mythical Man-Month [aw.com]: "Plan to throw the first one away. You will anyhow."

        Sometimes, it really is easier to treat the first one as a prototype of what you'd really like, and then write that second one correctly, from scratch. Witness Mac Classic -> Mac OSX, Win9x -> WinNT, Perl5 -> Perl6, etc.

        A lot of these prominent "next-generation" systems may share ideas & even code from their predecessors, but the essential point with each is that they all represent deliberate jump-cuts from the past. Sometimes it really is easier and better to rewrite something, whether or not you fully grok the original.

        The real trick is to design systems well enough that, when someone comes along with better ideas, the framework you provide is robust enough to adapt. Mac Classic, Win9x/DOS, and Perl5 are all too inflexible for future use, though at this point each of them is still useful to certain groups. Their successors, however, are all designed with future expandability in mind, so that the "second system curse" can hopefully be avoided. History will tell if it ended up working in any of these cases...

        • by tupps ( 43964 ) on Tuesday January 28, 2003 @12:49AM (#5172464) Homepage
          I think comparing operating systems as a way of justifying throwing out your firstborn is not correct. E.g., MacOS Classic was a direct descendant of the original MacOS that launched in 1984, though I am sure that very little or no code from the original remained. Win9X was built on top of DOS, which I believe has a lineage older than MacOS Classic. Interesting note that both MacOSX and WinNT came from an external base (NeXT and OS/2, sort of). If you want a better example: IE 3.0 --> IE 4.0. Microsoft is a classic for working out the problems in the first version or two and then releasing a quality product.
        • by WaKall ( 461142 )
          If YOU write the first one, throw it away, and write the second one, then yes, you may be on to something. However, if someone else writes the first one and you can't understand it/use it, it doesn't mean that the first solution wasn't good. Maybe the problem lies somewhere else?

          You don't learn as much from reading someone else's code as they did from writing it. Don't try to misquote Brooks to justify not understanding an existing product. Brooks was referring to large-scale, team-oriented development (like the System/360 or IBM 650 projects). He wasn't claiming that you should throw out someone else's code because it was the first solution to a problem.
        • Actually I think perl5 was a rewrite and perl4 was thrown away. So they are throwing away two.

          But I think Brooks was talking about implementing the same specification, while perl5 is a much more powerful and grown-up language than perl4, and perl6 vs perl5 may turn out to be the same. So they are throwing away (or heavily modifying) the specification too, which is different.
        • Sometimes, it really is easier to treat the first one as a prototype of what you'd really like, and then write that second one correctly, from scratch. Witness Mac Classic -> Mac OSX

          I can't imagine how MacOS Classic could be considered a prototype for OS X (read 'NEXTSTEP').

          Win9x -> WinNT

          WinNT is older than Win95.

          Perl5 -> Perl6

          I thought you said the second try was supposed to get it right ;-)

          Mac Classic, Win9x/DOS, and Perl5 are all too inflexible for future use

          MacOS I can see, and maybe DOS (it's amazing how much mileage people get out of that system), but Perl5? If Perl has ever had a problem, it's being too flexible in too many ways.

          Their successors, however, are all designed with future expandability in mind, so that the "second system curse" can hopefully be avoided.

          Of course, that sort of expandability and flexibility may well be a second-system effect itself.

          Planning for the future is a good idea, but in practice you can't do it beyond certain bounds -- something totally unanticipated is going to come along and screw everything up sooner (I would say 'or later', but it never is). What you can do is shoot yourself in the foot trying too hard to accommodate every possibility. Better, I think, to accept that you'll have to throw away every design eventually, and plan accordingly.
    • Origins of XP (Score:5, Interesting)

      by Anonymous Coward on Monday January 27, 2003 @11:30PM (#5172123)
      I'm sure you've also heard of a guy named Fowler... Martin Fowler is well known in the community for exactly two reasons.

      The primary one is his book Refactoring. It describes his experience as a consultant refactoring medium sized software projects, and gives a lot of advice on methods for refactoring.

      Apparently Fowler decided that refactoring is a good thing. Not just that: he decided that since refactoring is a good thing, it should be the basis for programming, since most of the results of programming are bad. By that I mean just that most programs suck, not that they are evil or anything.

      At that point he joined forces with Beck and formed his second reason for being well known, XP. As far as I can tell, the philosophy of XP is, "Software usually ends up sucking and in need of refactoring after it has been extended too far. Why wait for it to be extended too far? Just make it suck in the first place and refactor all the time!"

      It does have a sort of perverse logic to it, but when I say that XP is bad, I don't just mean that it sucks. As well as sucking, XP really is evil.
      • Re:Origins of XP (Score:5, Insightful)

        by Anonymous Coward on Monday January 27, 2003 @11:37PM (#5172157)
        XP is about writing code that can easily be successfully refactored. Total-design-up-front bigots think the code "sucks" because the design wasn't much more general than actually needed at the time, not realizing that making a design at the start of the project that meets the requirements as of the end of the project is almost always impossible, and all the effort in doing so is going to be wasted.
        • by 2nesser ( 538763 ) <2nesser@NOSPaM.cogeco.ca> on Tuesday January 28, 2003 @01:02AM (#5172506) Homepage

          making a design at the start of the project that meets the requirements as of the end of the project is almost always impossible

          I believe that this quote is true for commercial software. The word processors, email clients and web browsers in the world should not be designed at the beginning of a project. The cost of doing that far outweighs the benefits. But...

          When you are designing life-critical applications (fly-by-wire, ABS, pacemakers, cancer radiation machines, etc.) it is not acceptable to be redesigning while coding.

          If you cannot model the way a system must react at all times, under any input, then what makes you think that your software should be responsible for someone's life?

          More and more software is being designed for life-critical applications. If XP is being used to develop them, I'm worried. You cannot stumble onto 100% correct software. In a large system it takes a lot of money (an exponential curve: the last 3 bugs will likely take more money than the first 100), time, and a good design.

          • by chromatic ( 9471 ) on Tuesday January 28, 2003 @01:24AM (#5172611) Homepage
            In a large system it takes a lot of money (an exponential curve: the last 3 bugs will likely take more money than the first 100), time, and a good design.

            That assumes that the rate of the cost of change rises over time. XP rejects that assumption, and the XP practices are designed to keep the rate of the cost of change consistent.

            • Here is an honest question, since I know very little about XP: how does it keep the cost of change consistent with all this "refactoring" going on?

              As your code base increases from cycle to cycle and more features are added to the system do you not end up spending most of your time "refactoring" your old code so that the new feature fits into the system?

              • That's a good question. Several of the other XP practices work in your favor in that case.

                First, if you've been doing the simplest thing that could possibly work, your code is already pretty simple -- it doesn't have superfluous stuff to work around. Second, if you're testing continually, you have a safety net to notify you if you accidentally break the essential behavior. Third, if you've been refactoring continually, the code will be decoupled and much easier to change. Aggressive testing helps this too. Finally, if you're using the planning game to work on half-day features, you're dealing with manageable chunks of code that very rarely touch more than a couple of individual units.

                There are times when you have to schedule a couple of days for larger refactorings, but they're not common. If you have intelligent developers who keep up the good practices, the code ends up really clean, really easy to extend, and surprisingly well-designed.

            • Mission-critical software, such as, for example, the Space Shuttle's, falls under the Software Engineering banner, not so much the Software Development banner that covers most of what's released commercially and internally.

              The rules for Software Engineering are to assume that things won't change (there's hardware involved that has to meet the same predesigned specifications that was supplied to the software vendors to integrate with), and that changes have to be approved at several levels before being applied.

              It's slow, messy, and bureaucratic, and it's the reason we haven't lost a Space Shuttle to a computer bug. Would that the Mars missions were so rigorous -- consider what happened when a metric->English non-conversion slipped through that could/should have been caught by a design and code review, or by unit and integration testing in a simulator (both usually required in software engineering as well as XP).

          • I have never used XP in real life, since most of the projects I have worked on have either been waterfall or lacked active customer input in the XP sense.

            However, I have read a few XP books. The authors normally stress that XP can not be applied to all kinds of projects. One such kind is certainly life critical applications.

            In 'Agile Software Development', Alistair Cockburn states that all agile methodologies should be applied with extreme care, if at all, when it comes to life-critical applications.
      • Re:Origins of XP (Score:4, Informative)

        by Anonymous Coward on Tuesday January 28, 2003 @12:26AM (#5172391)
        At that point [Fowler] joined forces with Beck and formed his second reason for being well known, XP.

        Martin Fowler did not invent XP. It originated in the work of Ward Cunningham and Kent Beck, then was refined by Kent Beck with help from Ron Jeffries and other members of the original XP team. Martin Fowler is an active part of that community, so he co-authored one of the XP books.
      • I agree. XP is like music in the ears of managers who handle projects and developers that already suck (BTW, Kent Beck is a consultant). XP is not meant to be applied by good developers.

        In order to write good (stable, maintainable, extendible, generic, efficient, ...) software efficiently, you want to minimize the overhead (tests, refactoring, redesign, debugging). This can only be achieved by good upfront design skills and applying them as early as possible.

        If the code is well written, there's little need for writing tests (except for sophisticated algorithms), and almost no need for refactoring (except for really simple ones like moving/renaming), redesign, or debugging.

        Unfortunately, good OO design is a hard-to-learn skill, and I haven't seen a good book on it yet.
      • "his book Refactoring... describes his experience as a consultant refactoring medium sized software projects, and gives a lot of advice on methods for refactoring.

        That's not even close to an accurate description. Refactoring describes what R. is, why it's done, what you need in order to do it, and how to do some common types.

        At that point [Fowler] joined forces with Beck and formed his second reason for being well known, XP.

        Even more wrong. XP was developed from the conventions and experience of a lot of Smalltalk developers over quite some time. Fowler and Beck didn't invent or develop it; they came in after it was initially publicised.

        As far as I can tell, the philosophy of XP is, "Software usually ends up sucking and in need of refactoring after it has been extended too far. Why wait for it to be extended too far? Just make it suck in the first place and refactor all the time!"

        That's an interesting caricature, actually. There's more truth in it than in anything else you've said. XP does say that software usually ends up sucking because it's been extended too far without refactoring; so why wait for it to be extended too far? Refactor so it won't suck NOW.

        The conclusion you imply (that after the refactoring the code will suck worse) is blatantly wrong, of course -- but that's just a troll.

        Perhaps a better way of looking at XP is to observe that it tries to pull the software into maintenance mode as soon as possible, and make change as cheap as possible within maintenance mode.

        -Billy
    • by Anonymous Coward
      I don't think the Eclipse project (www.eclipse.org) is a "big hot air balloon". Oh, BTW, the other guy is in this project too. Take time to analyse what you are talking about before giving your 2 cents.
    • by mgrant ( 96571 )
      The other guy was Erich Gamma [c2.com], of Design Patterns fame.
    • by dubl-u ( 51156 ) <2523987012@@@pota...to> on Monday January 27, 2003 @11:53PM (#5172248)
      but after skimming through the first 5 chapters of 'Planning XP'

      If you're a programmer, that's the wrong book to start with. Read Extreme Programming Installed [amazon.com], by Jeffries, et al.

      I'm starting to think that XP really is just one big hot air balloon.

      In my experience, it's not. Since adopting it a couple years back, my designs are better, my bug counts are much lower, and I am much happier. Feel free to drop me a line if you have questions.
    • This brings to mind the most annoying thing you can say to a doctor:

      Physician, heal thyself!
  • Intuitive (Score:5, Insightful)

    by nettarzan ( 161548 ) on Monday January 27, 2003 @11:30PM (#5172121)
    Is intuitive design necessarily good in programming?
    Recursion is intuitive but not necessarily efficient in terms of performance.
    At what point do we decide which is better: intuitive design or efficient design?
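
    (A quick illustration of the tradeoff, not from the article: both methods below compute the n-th Fibonacci number, but the intuitive recursion does exponential work while the loop is linear:)

      class Fib {
          // Intuitive: mirrors the mathematical definition directly,
          // but recomputes the same subproblems exponentially many times.
          static long recursive(int n) {
              return n < 2 ? n : recursive(n - 1) + recursive(n - 2);
          }

          // Efficient: linear time, but the intent is less obvious.
          static long iterative(int n) {
              long a = 0, b = 1;
              for (int i = 0; i < n; i++) { long t = a + b; a = b; b = t; }
              return a;
          }
      }
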
    • Re:Intuitive (Score:3, Insightful)

      by dangermouse ( 2242 )
      Recursion is not a design principle, it's an implementation decision-- like choosing between a switch block and cascading if/else blocks.

      The author of this article is talking about the JUnit API. An intuitive API is always a good thing.

    • by Black Parrot ( 19622 ) on Monday January 27, 2003 @11:55PM (#5172254)


      > Recursion is intuitive but...

      You've obviously never tried to explain recursion to a group of average co-workers.

    • Re:Intuitive (Score:3, Insightful)

      by iabervon ( 1971 )
      Intuitive design of anything that people will need to change in the future is more important than anything else, because anything you do efficiently but confusingly now will get changed in the future and broken. If a future programmer can understand your code easily, it can then be optimized if it turns out to be too slow. And good design often turns out to be both efficient and intuitive; it tends to be free of the junk that makes things both confusing and slow.

      For that matter, how fast do you want the testing infrastructure to run?
    • Re:Intuitive (Score:5, Insightful)

      by dubl-u ( 51156 ) <2523987012@@@pota...to> on Tuesday January 28, 2003 @12:08AM (#5172319)
      Optimization should never be done until after you've run real-world use cases on code with a profiler. Until then, one should write code that clearly communicates its intent.
      "Premature optimization is the root of all evil."
      says Knuth, and Martin Fowler says
      "Any fool can write code that a computer can understand. Good programmers write code that humans can understand."
      Remember: Computers double in speed every 18 months, but programmers stay just the same. If we want our code to live a long and happy life, then clarity should almost always win over speed.
      • Bah, this is the kind of content-free catch-all argument that sounds good but ultimately wrecks more software projects than GOTOs and ineffective management put together.

        A program that lacks clarity isn't necessarily fast and a fast program does not necessarily lack clarity. What happens instead is that people tend to confuse clarity with abstraction. It's the abstractions that slow things down.

        If you want your code to live a long and happy life, then your code has to provide value to the people who'll be using it. It has to be fast, featureful, and bug-free. The Linux kernel, the Apache HTTP server, and the MPlayer movie player showed us that a pragmatic approach trumps any and all development orthodoxies.

        The Mozilla browser showed us something else.

        Worry about performance. Worry.
        • 'kin right.

          Superfluous layers of abstraction are the real enemy today. Too much flexibility, too many layers in the class library, not enough code that actually *does* stuff.

          Premature optimization is not the enemy any more. When Knuth complained about it, most people wrote in assembly code and it was a real concern: he was talking about things like squeezing out an instruction by reusing the value in a register in a counter-intuitive way. Since OO became the dominant paradigm, we have gone too far in the other direction. The great thing about optimization is that it causes people to focus back on what they are actually trying to achieve, instead of concentrating on their overelaborate, badly designed class hierarchies.
        • The Mozilla browser showed us something else.
          It showed us exactly what? It is more standards-compliant than any other browser out there. It's fast enough. It runs on more platforms than any other browser out there. It took less time to write than its direct competitor (IE6 took [blooberry.com] over six years, Moz took [netscape.com] less than five years).

          More than anything else, Mozilla's well-thought-out architecture managed to build a WORA platform with a strategy different from Java's -- with better quality for desktop apps, I dare say.

        • ... but it is not the whole story. The rule against premature optimisation exists for a good reason: most programmers have terrible intuition about performance. The one thing you should do up-front is get the architecture - the distribution of code over machines, processes and loosely-coupled modules - right, so that it does not contain performance bottlenecks that will be impossible to get rid of later. Prototypes work well for this, as do small applications that are structurally similar to the one you want.

          After that, all performance optimisation should be left until you can profile the application. Bits of the result will undoubtedly suck and need to be rewritten, but I'll put money on most of them not being the ones you expected.

          This is where abstraction comes in. Abstractions are a very important aid to clarity, because they allow parts of the system to be considered in isolation from one another. To give a slightly practical example, a product I recently worked on used a variant of the banker's algorithm to avoid deadlocks. An odd choice, I know, but it was absolutely imperative that the system shouldn't deadlock, and killing or restarting threads was unacceptable. The algorithm is complicated, and the original implementation took the simplest possible approach. When we profiled it, this turned out to be too slow, so I rewrote it, but because it was reasonably isolated from the rest of the system, I could do this without touching any other code.

          Good abstraction is not the enemy of performance. It is its friend. In good abstractions, the data and code that need to work together, live together, and are shaped to one another. Oddly enough, this is what you need for performant software, too. On top of that, good abstractions make code easier to change.

          The enemy of performance is bad abstraction: too little, or the right amount in the wrong place, just as much as too much. Aspects of Mozilla's design show signs of bad abstraction. In particular, it is hard to develop good abstractions for cross-platform UIs, and Mozilla's attempt is no exception. Using the HTML/XML renderer and JavaScript to implement the UI is a clear example of Bad Architecture, see paragraph 1, even if it is convenient. Similarly, the need for the elaborate modular design is debatable.

          The design of the Linux kernel is in many ways an example of Good Abstraction. Although it isn't written in a language that particularly facilitates abstraction (it is an OS kernel, after all), the different components are reasonably separated out. There don't appear to be direct dependencies between the IDE driver and the memory allocator, for example.
      • If Knuth had believed that the way you're making it look, he'd never have written The Art of Computer Programming. What's the point of a better sort if you shouldn't optimize until you profile? "Oh, my profiler says this bubble-sort sucks. I guess I'll just recode it in assembly."?!?!? I don't think so.

        Premature optimization is a great evil, true. But it's just as bad to optimize too late. Before you even start programming, you need a language that will deliver the performance you need.

        As you're programming, design your program to be efficient. Don't write everything in assembly from the start, but avoid usage patterns that you know will be slow.

        Then, once it's written, you can profile. But you'll likely find it's easier to change the design than to speed up the existing design.
          If Knuth had believed that the way you're making it look, he'd never have written The Art of Computer Programming.

          Oh, please. Knuth is also a strong advocate of Literate Programming [stanford.edu], the notion that code should be treated first as a piece of literature.

          There's nothing inconsistent here. Nobody, not me, not Knuth, not Fowler, is advocating stupid programming. Yes, you should be a master of algorithms, and yes, you should design so that later optimization is possible. But that shouldn't get in the way of writing clear, readable code except when you have the numbers to prove that performance is a real-world issue.

          And I will assert boldly that most of the optimization people do without profiling first is just a waste of money. I've never had to throw out inherited code just because it was too slow. Instead, the stuff I chuck is because it's too big a pain to work on.
  • by ledbetter ( 179623 ) on Monday January 27, 2003 @11:33PM (#5172133) Homepage
    Since until now JUnit really was the only game in town for java test-writing, I'm so impressed to see these guys put out something that's still compatible with it!! Even if it was frustration with JUnit that gave them the inspiration. All of us have a bunch of JUnit tests for our code (ok, well SOME of us do), and it's nice to have the option to try out another framework without having to refactor our tests.

    Really, in the world of open source, and free software way too little attention is paid to compatibility. Why not be compatible with your "competition"? It's not like you're competing for customers or market share, or any of that crap. We're all on the same team!
  • Read the tests (Score:2, Interesting)

    by Anonymous Coward
    Well, reading the source is good. But in XP the real "API documentation" are the tests for the system. If you can't understand the tests, you can't understand the program.

    Full disclosure: I haven't screwed with the JUnit internals. But I'm working on a system that has over a million lines of code and JUnit 3.7 works just fine for it. Thousands of tests, too.
  • Explaining one's actions in a detailed and public manner is self-incriminating.

    I wonder what really went on behind the scenes here.
  • refactor == rewrite? (Score:5, Interesting)

    by _|()|\| ( 159991 ) on Monday January 27, 2003 @11:39PM (#5172172)
    I'm the first to admit that I'm not buzz-word compliant, but I was surprised to read the following:
    [Artima SuiteRunner] is a design fork not a code fork, because we didn't reuse any JUnit code. We rethought and reworked JUnit's ideas, and wrote Artima SuiteRunner's code from scratch.
    I thought refactoring was massaging, even rewriting parts of existing code. So, is Linux a "refactored" Unix?
    • by Anonymous Coward
      Code refactoring is modifying code while preserving the same overall behavior; I suppose if you keep at it eventually all the original code will be gone. Design refactoring is presumably modifying design while preserving the same end results....
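
      (A toy illustration of that first definition -- the structure changes, the behavior doesn't -- not JUnit's actual code:)

        class Checkout {
            // Before: magic number buried in the arithmetic.
            int totalCents(int[] priceCents) {
                int t = 0;
                for (int i = 0; i < priceCents.length; i++) t += priceCents[i];
                return t + t * 8 / 100;
            }

            // After "extract method" + "replace magic number with constant":
            // identical external behavior for every input, clearer structure.
            static final int TAX_PERCENT = 8;

            int totalCentsRefactored(int[] priceCents) {
                int subtotal = subtotal(priceCents);
                return subtotal + subtotal * TAX_PERCENT / 100;
            }

            private int subtotal(int[] priceCents) {
                int t = 0;
                for (int i = 0; i < priceCents.length; i++) t += priceCents[i];
                return t;
            }
        }
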
    • by dubl-u ( 51156 )
      Refactoring is improving the design of existing code. So yeah, they're just going for buzzword value here.
    • by bvenners ( 550866 ) on Tuesday January 28, 2003 @01:56AM (#5172720)

      I interviewed Martin Fowler, author of Refactoring, last year. In this interview [artima.com] I asked him to define refactoring. He said:

      Refactoring is making changes to a body of code in order to improve its internal structure, without changing its external behavior.

      We didn't refactor JUnit's code, because we didn't start with JUnit's code and make changes. But we did do what I consider a "refactor" of the design. We started in effect with JUnit's API, and made changes. We started with knowledge of what we liked and didn't like about JUnit's API, and refactored that.

      Where you can see the refactoring is not in the code, but in the JavaDoc. Compare JUnit's JavaDoc [junit.org] with SuiteRunner's JavaDoc [artima.com]. I'm guessing it was version 3.7 of JUnit that I had in front of me when I decided I would start over. It may have been 3.6.

      • The JUnit docs document not only the framework, but the samples and several implementations of TestListeners -- an apples-to-apples comparison would be with the junit.framework package.

        Focussing in on this, it looks like you've dropped the Assert class (presumably this is only for jdk1.4?). TestCase and TestSuite have been merged (seems sensible enough), and you've also provided a concrete Runner (no equivalent in the junit framework). This leaves the refactoring of TestResult etc into Report (roughly speaking).

        Can you explain the advantages of what you've done in that last bit? I don't see that refactoring as resolving the classloader issues you mention in the article - they'd be handled in the Runner/TestListener implementations.

        Thanks,
        Baz

        • I'm planning on publishing a series of articles that delve into the differences between JUnit's and SuiteRunner's APIs. Since both are rather small APIs, I think most readers will be able to grasp the APIs enough to understand the design tradeoffs being discussed. One of the reasons I decided to go ahead and finish and release SuiteRunner was so I could use it to get discussion about design going in the Artima Forums. In short, though, you are about right. The apples to apples comparison would be between junit.framework plus all the runner packages and org.suiterunner.

          JUnit's TestCase, TestSuite, Test, and Assert ended up being collapsed into Suite. In the process we collapsed the dozens of assertThis assertThat methods into two verify methods.

          JUnit's TestResult corresponds to SuiteRunner's Reporter, but whereas TestResult is a class, Reporter is an interface. Different implementations of Reporter can report results in different ways. One implementation of Reporter we have internally holds multiple other Reporters and forwards reports to all of them. That allows you to combine multiple reporters in a single test run. You can kind of get at the same thing in JUnit by registering TestListeners with a TestResult.
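
          (The thread doesn't show SuiteRunner's actual interface, so the names below are guesses purely to illustrate the forwarding idea:)

            // Hypothetical shape of a composite reporter: one Reporter that
            // holds several others and forwards every event to all of them.
            interface Reporter {
                void testSucceeded(String testName);
                void testFailed(String testName, Throwable cause);
            }

            class DispatchReporter implements Reporter {
                private final Reporter[] targets;
                DispatchReporter(Reporter[] targets) { this.targets = targets; }
                public void testSucceeded(String testName) {
                    for (int i = 0; i < targets.length; i++) targets[i].testSucceeded(testName);
                }
                public void testFailed(String testName, Throwable cause) {
                    for (int i = 0; i < targets.length; i++) targets[i].testFailed(testName, cause);
                }
            }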

          Anyway, I'll eventually go into details in a series of articles.

  • I'd like to check out Artima SuiteRunner, but I don't think one should have to register to get access to source. At the very least, we should be able to browse the source online.

    Don't get me wrong - I'm happy the code is open. We use JUnit for our open-source product, and anything that could help us better utilize JUnit is much appreciated.
  • by shamir_k ( 222154 ) on Monday January 27, 2003 @11:42PM (#5172198) Homepage
    I have recently begun to use JUnit and ran into problems right away with the classloading mechanism. If any of you have weird problems with inheritance relationships breaking, try modifying the excluded.properties file inside the JUnit jar. This worked for me. That said, it would have been nice to be able to modify this without having to unjar anything.
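
    (For reference, the file lives at junit/runner/excluded.properties inside the jar: a plain Java properties file of package patterns that JUnit's reloading class loader delegates to the system loader instead of reloading. The entries below are typical defaults from memory; check your own jar:)

      # Packages JUnit's test class loader should NOT reload.
      excluded.0=sun.*
      excluded.1=com.sun.*
      excluded.2=org.omg.*
      excluded.3=javax.*
      excluded.4=java.*
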
    However, I wonder why they rewrote it from scratch. Wouldn't it have been simpler to redesign the problematic parts? And I also find it interesting that they have written their project to be compatible with existing tests. Does that mean that JUnit's interface is not problematic, only the implementation? Seems to me like more of a case for JUnit 4.0 than a complete rewrite from scratch.
    The follow-up articles should be interesting.

  • Paradigm shift? (Score:5, Insightful)

    by HisMother ( 413313 ) on Monday January 27, 2003 @11:58PM (#5172270)
    This is not a troll, but I'm struggling to try to understand why these guys had to work so hard to accomplish what they needed. By their own account, they were driven towards this because they wanted to write API conformance tests -- tests that pass if an API is implemented, regardless of whether it works or not. They wanted, I guess, to see a test pass for each API from the specification that was actually implemented, so they could compute a percentage.

    While I can understand what they wanted to do, I guess I don't see why they didn't do this the much easier and (to me, anyway) more obvious way: you write code that uses the API, and if it compiles, then the API is implemented. You write one tiny test class for each API. You have each JUnit test run a compiler over one single class, thus probing each API. This isn't exactly rocket science: you have a directory full of probe-class source code, a single method that tries the compile, and then any number of TestCases that each try to compile one class and pass if they succeed. This is hard?

    If you use the unpublished APIs to "javac," you could do this at lightning speed. Alternatively, you could just use a Java parser like the one that comes with ANTLR (since you don't actually need to generate any code) and do it even faster.
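
    (A minimal sketch of that compile-probe idea; the probe path and jar name are invented, and shelling out to javac is the crudest possible approach:)

      import junit.framework.TestCase;

      public class AddApiProbeTest extends TestCase {
          public void testProbeCompiles() throws Exception {
              // Pass iff the probe source compiles against the implementation
              // under test; the probe class just calls every method in the spec.
              Process javac = Runtime.getRuntime().exec(new String[] {
                  "javac", "-classpath", "impl-under-test.jar", "probes/AddApiProbe.java"
              });
              assertEquals("probe failed to compile", 0, javac.waitFor());
          }
      }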

    • Re:Paradigm shift? (Score:4, Interesting)

      by Osty ( 16825 ) on Tuesday January 28, 2003 @12:18AM (#5172355)

      And if the methods are there because they have to be to implement an interface, but they return some "Not Yet Implemented" error or exception? How do you handle that with your compiler? I agree with you that they could've accomplished their goal in a much simpler fashion (one test for each API that only fails if NYI is thrown, regardless of any other exceptions or error codes). However, that doesn't mean they shouldn't have re-written the software. They identified issues with extending the existing software, and if they had problems then surely plenty of other people are having problems as well. Assuming that they weren't on a schedule so tight as to prohibit it, they should then do the "right" thing and fix what's broken not only for their current use, but also for their future use and the use of others.

      • >And if the methods are there because they have to be to implement an interface, but they return some "Not Yet Implemented" error or exception?
        If you read the article you'll see that for this part of the testing they don't care at all what the methods do, only that they're there. Other tests would confirm the functionality.
        But anyway, my original question was answered nicely by several of the respondents above: my scheme doesn't check for additions, only omissions, and furthermore in most languages it doesn't really check for signatures, either, especially return types.
    • Re:Paradigm shift? (Score:5, Informative)

      by Mithrandir ( 3459 ) on Tuesday January 28, 2003 @12:58AM (#5172492) Homepage
      I am a spec writer, I don't play one on TV. I authored the EAI for VRML97 and the various programming APIs for X3D. These are ISO specs, so they come with a lot of weight behind them from the conformance perspective.

      As someone who is involved in specification writing of APIs, I can tell you that a "compile test" is wholly insufficient for checking class/method signatures. (For this reply, I'm using "method" as interchangeable with any message-passing construct - C functions, Java methods, RPC calls, whatever.)

      The first and major problem is not that the methods exist - everyone can cut and paste the spec document - but ensuring that nothing else exists. The bane of every spec writer is the company/individual that decides that they don't like the spec and then proceeds to extend it with their own methods within the same class - often overloading existing method calls. It is these extensions that a spec writer dreads, because the (often) clueless end user makes use of them and then wonders WTF their code won't run on another "conformant" implementation of that spec.

      Checking signatures is there for one thing and one thing only - making sure the implementors don't embrace and extend the specification in a way that is not permitted by the spec. Creating derived interfaces with new method signatures is fine; adding new method signatures to the specification-defined collection is not. A good conformance test suite will look at everything about the signatures to make sure everything is there, and nothing is there that should not be.
      • Re:Paradigm shift? (Score:5, Insightful)

        by The Musician ( 65375 ) on Tuesday January 28, 2003 @01:20AM (#5172589) Homepage
        To help answer the grandparent's question: compile != matches API.

        Can you write a test-compile program that checks whether the proper public/private/protected access modifier keywords are set up right on each method? That the method was specified as 'short foo' but the implementor used 'int foo'? Or, like Mithrandir said, that no additional methods have been added (such as an overloaded version)?

        Basically, compile-compatibility is not the same as using the precise APIs in the spec.
      • > The first and major problem is not that the methods exist - everyone can cut and paste the spec document - but ensuring that nothing else exists.

        Understood. But then the question was why, in this particular instance, JUnit was insufficient for that purpose.

        I mean, it's Java. You can use reflection to figure out everything you would need about a class, including any nonconformant methods it may have. Just build a collection of methods that ought to be in the API, get the collection that actually are in the API, and if there's any difference, fail.
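
        (A sketch of that reflection check; the expected signature is a made-up example, and in practice you'd load the list from the spec. It fails on omissions AND additions:)

          import java.lang.reflect.Method;
          import java.lang.reflect.Modifier;
          import java.util.HashSet;
          import java.util.Set;

          public class SignatureCheck {
              // Collect "name(paramType,paramType)" strings for every public method.
              public static Set publicSignatures(Class c) {
                  Set sigs = new HashSet();
                  Method[] ms = c.getDeclaredMethods();
                  for (int i = 0; i < ms.length; i++) {
                      if (!Modifier.isPublic(ms[i].getModifiers())) continue;
                      StringBuffer sb = new StringBuffer(ms[i].getName()).append('(');
                      Class[] ps = ms[i].getParameterTypes();
                      for (int j = 0; j < ps.length; j++) {
                          if (j > 0) sb.append(',');
                          sb.append(ps[j].getName());
                      }
                      sigs.add(sb.append(')').toString());
                  }
                  return sigs;
              }

              public static void main(String[] args) throws Exception {
                  Set expected = new HashSet();
                  expected.add("frob(java.lang.String)");   // loaded from the spec in reality
                  Set actual = publicSignatures(Class.forName(args[0]));
                  // Any difference in either direction is a conformance failure.
                  if (!expected.equals(actual))
                      throw new AssertionError("nonconformant: expected " + expected
                          + ", found " + actual);
              }
          }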

        The article claimed that JUnit's assumptions about the classloader were incompatible with Jini. I can believe this -- I've encountered a lot of weirdnesses with JUnit's threading model (my most recent project involves a lot of asynchronous communication between peers). Really stupid things too -- like a test will run in a different thread when you fire off a batch of them, but if you run a single test it works in the AWT thread and can lock your GUI.
  • by Anonymous Coward on Tuesday January 28, 2003 @12:01AM (#5172284)
    This is one reason I like OSS--when people are not satisfied with currently available solutions, they can take the code and improve it. I have not looked at the new code (and, for that matter, am unfamiliar with JUnit), but I think allowing people the freedom to take code and improve it is one of the best features of OSS.
    • That is possible for people with unlimited spare time and huge bank accounts. For people with neither of those, or for companies that need to make money to pay their employees so that they can eat next month, it is not true. Having the code doesn't magically make things better.

      If I had the choice between having or not having the source, I would go for having it. I wouldn't do much with it, except sending better bug reports, unless I was actively using it for development (as I work as a developer). But this doesn't change the quality of the binaries one bit.

      And the time it takes to rewrite a component is only reduced marginally by having the code for the component that failed you. Especially if it is very large. Let's go for some OSS examples.

      We've got Apache, and the company using it feels that it is failing them. The performance is just not good enough for them. One option is to rewrite, and this will take some quite considerable time, not to mention testing the server, writing plug-ins, etc. If they are not quite big, they can't afford one, two, three developers working on this full-time with no income from it. Developers don't come cheap, not by a long shot.

      The other option is to buy/switch to another server. It might be a free alternative (written in the most buzzwordy language of the day) or it might be IIS 6 coupled with Windows Server 2003. Either way it is much cheaper than a rewrite. Rewrites are very expensive.

      Open source is a good thing, and it surely can help in debugging and sending bug reports. But let's not get ahead of ourselves and say that closed source is bad. Many a time I have wanted the full source to Windows to be able to find out what the heck is happening to my events, just to bring up one example.

      But, and this is a big but, opening the source doesn't make it better. And well-written closed source software will still be better than badly written open source software. Just as buying software often is cheaper than building it yourself.

      The obvious exception is when software is written without monetary gain, of course. Be it that their employers help out, or they live off other money (rich, student, won the lottery, begs for food), or similar. But then a big rewrite still costs time, which could have been spent differently.

      (But I must confess that I personally think that putting so much time into Mozilla, when a rewrite (Konqi) took so much less time, is kind of insane. And any rewrite which makes it easier to develop software is a good timesaver.)
      • forgoil (104808) said:
        We've got Apache, and the company using it feels that it is failing them. The performance is just not good enough for them. One option is to rewrite, and this will take some quite considerable time, not to mention testing the server, writing plug-ins, etc. If they are not quite big, they can't afford one, two, three developers working on this full-time with no income from it. Developers don't come cheap, not by a long shot.

        The other option is to buy/switch to another server. It might be a free alternative (written in the most buzzwordy language of the day) or it might be IIS 6 coupled with Windows Server 2003. Either way it is much cheaper than a rewrite. Rewrites are very expensive.

        You're making some assumptions there. First, that the cost of converting your existing codebase is less than the cost of finding and fixing the problem. If you're using a plug-in that isn't as well supported on the new server, then you could be trading one expensive rewrite for another (rewriting part of your server vs. converting all of your .PHP to .ASP). Second, that there are faster servers. Whether you're using Apache or IIS, you should first go to the appropriate news group and look for help. Taking the example of performance problems with Apache, you might find out that you should have multiple servers, some optimized for static content (i.e. images) and others for dynamic content.

        Open source tends to have better support via news groups because that's the only means of support. Yes, there are news groups for everything that MS makes, but many of the people buying those products tend to go first to published books and then to consultants if/when they have a problem. The length of the publishing process guarantees that books will have incomplete or even misleading information, while consultants are frequently only as good as the relevance of their last assignment.

        • Jupp, I made some assumptions to make my example work. There is of course the option of trying to fix the software, and in the case of something as well used as Apache there are certainly people interested in fixing it.

          But if nobody at Apache (etc.) wants to fix it, you will have to do it yourself, and many small companies don't have the money to spend on that. My point wasn't that Apache somehow sucks or can't be fixed, only that having the source is no guarantee that it is any good, or that it can be used to fix the problem at hand. I don't like that assumption.

          But as far as Apache goes, valid points.
  • I tried too.. (Score:2, Interesting)

    by spludge ( 99050 )
    I also spent many hours deciphering the complicated way that JUnit goes about things so that I could extend it to do what I wanted. In the end it turned out that I could do it (adding a method of timing only certain parts of each test in a generic way) relatively easily but it took many hours of head scratching to understand the framework. JUnit is very flexible, but the code is a bitch to understand :)

    Overall though it seems like mostly a documentation issue, not a design issue. Good documentation for the internals of JUnit is pretty much non-existent from what I could find. I discovered a lot mostly from examining other JUnit extensions. With some good documentation on JUnit internals and on the internal flow of operations, I think I could have hugely cut down on the time I needed to figure out where to plug in. I would be willing to take a crack at this documentation, but I am *definitely* not the best person to do it :) However, if anyone else is interested, I would be willing to give it a go.

    • Re:I tried too.. (Score:4, Informative)

      by bvenners ( 550866 ) on Tuesday January 28, 2003 @01:02AM (#5172510)

      I also found that JUnit's documentation was poor, but it wasn't just that. It didn't seem to be designed as a user-friendly API, where the user is a programmer trying to program against it. As I wrote in the article, I really liked using JUnit as an application, once I got it going. It was when I went in to integrate my signature test tool with JUnit as an API that I really ran into trouble. The API documentation was very sparse and frustrating. I started looking at the code, which was confusing not just because of poor documentation, but because of non-intuitive (to me, anyway) organization.

      I was able to figure out what the code itself was doing, but then I was left guessing at what the contracts of the types and the methods were. When you decipher code, you understand one implementation of an interface, not the abstract contract of the interface. I got so mad at it that I opened IntelliJ and started writing my own testing toolkit. I thought it would be fairly easy, but I was wrong. I underestimated how much functionality JUnit actually has in it. And although I think we did come up with a much simpler public API, it took a lot of work to simplify it. So I ended up with more appreciation of all the work that went into JUnit.

      I suspect that a lot of JUnit's bulkiness comes from the fact that JUnit started back in the Java 1.0 or 1.1 days and evolved over time in widespread public use. Because it was so popular, design blemishes couldn't be erased in later releases. Eventually enough such blemishes accumulate that it makes sense to start over.

      • Could you explain what you mean by "you understand one implementation of an interface, not the abstract contract of the interface"? I'm not sure what you are getting at there..
        • Re:I tried too.. (Score:2, Informative)

          by bvenners ( 550866 )

          A contract for a method in an interface doesn't say what the method does; it draws a fence around the full range of possible things that method could do. So, for example, the Collection.add method has this contract:

          Ensures that this collection contains the specified element.

          That leaves room for an implementation class, such as HashSet, to reject objects passed to add that already exist in the set. The Collection.add contract is abstract. It draws a fence around the full range of legal behavior for the add method. HashSet.add is a concrete implementation that is like a single point inside the Collection.add fence.

          Imagine stumbling upon the code to HashSet.add without the benefit of the detailed contract-oriented API documentation of Collection.add. You want to write an implementation of Collection.add that behaves differently, but all you know is the single point represented by the HashSet implementation. You have to guess where the fence is.

          That was the real trouble I had trying to integrate with JUnit as an API. I spent time attempting to decipher the JUnit source code and understood fairly well what it was doing. I made educated guesses at the contracts, but given the non-intuitive (to me) nature of the JUnit API, I wasn't confident in my guesses. I could have integrated my tool and tested it with JUnit 3.6 or 3.7, but had little faith that JUnit 3.8 would not prove my guesses wrong and break my tool.
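
          To make the fence concrete, both results below are legal under Collection.add's contract, even though the two classes behave differently (raw, pre-generics collections, as in the Java of that era):

              import java.util.ArrayList;
              import java.util.Collection;
              import java.util.HashSet;

              public class ContractDemo {
                  public static void main(String[] args) {
                      Collection list = new ArrayList();
                      Collection set = new HashSet();

                      // ArrayList always appends, so add always returns true.
                      System.out.println(list.add("x")); // true
                      System.out.println(list.add("x")); // true -- duplicates allowed

                      // HashSet rejects duplicates and signals it by returning
                      // false -- still inside the Collection.add "fence".
                      System.out.println(set.add("x"));  // true
                      System.out.println(set.add("x"));  // false -- already present
                  }
              }

          Reading only one of these implementations tells you a point inside the fence, not where the fence is.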

    • Re:I tried too.. (Score:4, Insightful)

      by Pastis ( 145655 ) on Tuesday January 28, 2003 @03:00AM (#5172912)
      First, to answer somebody else's comment: the other guy who coded JUnit, whom nobody seems to remember, was Erich Gamma [slashdot.org]. If that name doesn't mean anything to you, go back and read some books. It's impressive how the most important people of 10 years ago can be forgotten so quickly. It makes you understand why software is often called a craft.

      Open source software doesn't often come with design documents. This one does. To those who say JUnit is hard to understand: go and read the Cook's Tour [sourceforge.net]; then you'll understand everything. Things have changed a little since it was written, but not that much. Of course, if you are not familiar with design patterns, I recommend reading Design Patterns [hillside.net] first. Yes, that's by Erich Gamma and co., also called the Gang of Four [c2.com].

      I can already see people coming in to say that design patterns are crap and that this is the proof. But before complaining, remember the lessons learned by the people who have worked in this industry longer than we have: KISS (Keep It Simple...) [techtarget.com] and don't reinvent the wheel. I am pretty sure that the elegant design used in JUnit could have been refactored quickly to do what those guys wanted, without rewriting everything from scratch.

      The author of this new test suite says it all:

      Creating SuiteRunner was a huge amount of work. Despite my frustrations with JUnit, it would have been orders of magnitude easier to decipher JUnit's source code than to create a brand new testing toolkit.
  • by lurp ( 124527 ) on Tuesday January 28, 2003 @12:08AM (#5172318)
    It would be more accurate to say that they rewrote the JUnit test runners, not the framework itself. JUnit's framework has an extremely simple and clean design and really doesn't need any changing. The design of the runners, on the other hand, really sucks, as Kent Beck has admitted on a couple [sourceforge.net] of occasions [sourceforge.net].

    That said, I don't know that Artima has really contributed anything new here. Your IDE likely has a JUnit test runner built in already (IntelliJ, JBuilder, and NetBeans all do), and Ant already has decent JUnit test and report targets, which cover basically all of the capabilities Artima has implemented. Another standalone test runner is probably not all that useful for day-to-day development.

    • by bvenners ( 550866 ) on Tuesday January 28, 2003 @12:48AM (#5172463)
      Actually, we rewrote JUnit's basic functionality. You can use SuiteRunner standalone to create and run unit and conformance tests. JUnit is not required.

      We made SuiteRunner a JUnit runner as an afterthought, when it dawned on us that no JUnit user would use SuiteRunner unless it added value to their existing JUnit test suites.

      One of the things we did originally was make "test" an abstract notion. Reporters can indicate a test is starting, succeeded, failed, or aborted. When it came time to turn SuiteRunner into a JUnit runner, we were able to just report JUnit test results as another kind of test. We felt the ease with which we could turn SuiteRunner into a JUnit runner was a validation of our abstract test notion.
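
      To sketch the idea (illustrative only -- these names are not necessarily SuiteRunner's actual API):

          // A reporter receives abstract test events, regardless of whether
          // they come from a SuiteRunner suite or a wrapped JUnit test.
          public interface Reporter {
              void testStarting(String testName);
              void testSucceeded(String testName);
              void testFailed(String testName, Throwable cause);
              void testAborted(String testName, Throwable cause);
          }

      The JUnit runner then only has to map JUnit's own listener callbacks onto these same events.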

  • And also, when I download JUnit I don't have to put my email address in the database of a company that may or may not honour its privacy policy when times get tough. Maybe I'm a little jaded about open source sites that look flashy and require emails to download... burnt by the whole 'Jive' fiasco from coolservlets.com (making lots of cash now, are you, boys... remember, you can't buy back your dignity). The simple concepts are often the best; JUnit works great. I see no real advantage in this one.
    • by bvenners ( 550866 ) on Tuesday January 28, 2003 @05:52AM (#5173252)

      As I mentioned elsewhere in this topic, I ask for registration so that I know who the users are. Even though SuiteRunner is free and open source, I consider it a product. If you check the Artima Newsletter checkbox, you'll be notified of updates to SuiteRunner as well as new articles on Artima.com.

      Also, for several reasons we decided to create a web-based discussion forum for SuiteRunner user support rather than a mailing list. The web forum (I fear this will add weight to your conspiracy theory, but the forum is based on Jive) requires that you have an account, so by getting an account when you download SuiteRunner, you are ready to post to the SuiteRunner users forum.

      On top of that, shouldn't I get points for requiring such a minute amount of information to register? Many times I have been confronted with a long scrolling page full of edit boxes I'm required to fill in. At Artima.com, all I ask for is a name, which you can fake, a nickname, which you can make up, and an email address that I confirm. The name and nickname show up on forum posts. If you watch a forum topic you get notified by email when someone posts a reply, which is why I confirm the email address even if you opt out of the newsletter.

      Interesting that you found the site "flashy." I was going for simple. If you look carefully, you'll see the site is primarily text with a bare minimum of graphics.

      Nevertheless, although I incorporated Artima Software a couple years ago, it is still just one person (me). So what you really need to ask yourself is not whether you trust a faceless corporation with your email address, but whether you trust me [artima.com] with it. I assure you I am very concerned about privacy.

  • What's the big deal? (Score:3, Interesting)

    by melted ( 227442 ) on Tuesday January 28, 2003 @01:17AM (#5172581) Homepage
    Our QA people have developed their own framework for running tests in C#. It automatically discovers test cases via the Reflection API, lets you group them into suites, run suites, generate reports, and debug. It took 2 people 1.5 months to write (while also dogfooding it and writing actual tests). No big fanfare, no buzzwords. (A rough Java sketch of the reflection-discovery idea follows below.)

    "Refactoring" - holy fuck, where do you get such words?
    • To learn about refactoring, I refer you to http://www.refactoring.com [refactoring.com] (by Martin Fowler).

      For those interested in the value of refactoring (and whether it's merely a buzzword), I refer you to a research project of ours, at http://win-www.uia.ac.be/u/lore/refactoringProject [uia.ac.be]

      By the way: since the article does not describe a behaviour-preserving restructuring of JUnit, they shouldn't use the word 'refactored' in the title.

    • I'd say the big deal is that your guys spent 3 man-months on something they could have just downloaded. There are 4 .NET/C# frameworks on this [xprogramming.com] list, and 7 on this [c2.com] one.

      The /. article describes Bill Venners's justification for writing his own; did your guys have a reason for what they did, or was it just "not invented here" syndrome?

      Another question is how you could spend as long as 3 months on this at all; xUnit isn't complicated... they've just pissed away roughly £15,000-£63,000 of your customers' money (depending on whether they were permies or consultants).
  • by M.M.M. ( 147031 ) on Tuesday January 28, 2003 @06:21AM (#5173288) Homepage
    I have another one; it is geared towards tests in a multithreaded environment.

    here is the project page
    http://www.michaelmoser.org/jmtunit/

    The library sets up a thread pool (n threads) and runs a number of identical test sequences; each test sequence is a virtual user (m users). All threads wait on one input queue for test requests. The threading model is the same as in most of the popular application servers.

    The programming interface is very similar to JUnit's - there is a JMTUnit.TestCase with some additions, and there is a JMTUnit.TestResult. The tool also prints out performance statistics.
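
    That threading model in a rough Java sketch (illustrative only -- this is not JMTUnit's code; it just shows n workers blocking on one queue while m virtual users' sequences are dispatched, with java.util.concurrent assumed):

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class MultiThreadedHarness {
            // Sentinel that tells a worker thread to shut down.
            private static final Runnable POISON = new Runnable() {
                public void run() {}
            };

            public static void main(String[] args) throws InterruptedException {
                final int nThreads = 4, mUsers = 10;
                final BlockingQueue<Runnable> queue =
                        new LinkedBlockingQueue<Runnable>();

                // n worker threads, all waiting on the same input queue.
                for (int i = 0; i < nThreads; i++) {
                    new Thread(new Runnable() {
                        public void run() {
                            try {
                                while (true) {
                                    Runnable r = queue.take();
                                    if (r == POISON) return;
                                    r.run();
                                }
                            } catch (InterruptedException done) { }
                        }
                    }).start();
                }

                // m virtual users, each enqueueing an identical test sequence.
                for (int user = 0; user < mUsers; user++) {
                    final int id = user;
                    queue.put(new Runnable() {
                        public void run() {
                            long t0 = System.currentTimeMillis();
                            // ... the test sequence for virtual user `id` ...
                            System.out.println("user " + id + " took "
                                    + (System.currentTimeMillis() - t0) + " ms");
                        }
                    });
                }

                // One poison pill per worker so the harness can exit.
                for (int i = 0; i < nThreads; i++) {
                    queue.put(POISON);
                }
            }
        }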

  • by PinglePongle ( 8734 ) on Tuesday January 28, 2003 @08:45AM (#5173721) Homepage
    Kent Beck - co-author of JUnit - recently published "Test-Driven Development", a book about how you can change your coding habits by creating the tests up front.

    One of the most useful sections is a complete walk-through of the creation of a unit-testing tool for Python - it's probably the best way of understanding the internals of the JUnit framework without trying to reconstruct every single line of code...

  • Unfortunately, JUnit's graphical runner did not follow [JINI's] class loading model, and worked in a way that prevented class loading [...]

    I found the same problem when working with Ant and XML in JUnit: Ant sets up its own class loader for the underlying XML parser, and my code got a LinkageError when I tried to instantiate a parser, forcing me to write a utility class that catches that error and calls good old-fashioned "new" on the parser class within Xerces2.
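
    The workaround boiled down to something like this (sketched from memory, so take it as illustrative -- the SAX and Xerces2 class names are real, but the wrapper class is mine):

        import org.apache.xerces.parsers.SAXParser;
        import org.xml.sax.SAXException;
        import org.xml.sax.XMLReader;
        import org.xml.sax.helpers.XMLReaderFactory;

        public class ParserFactory {
            public static XMLReader newReader() {
                try {
                    // The generic lookup consults system properties and the
                    // class loader -- which is where the clash under Ant occurs.
                    return XMLReaderFactory.createXMLReader();
                } catch (LinkageError clash) {
                    // Fall back to constructing the known Xerces2 class directly.
                    return new SAXParser();
                } catch (SAXException e) {
                    return new SAXParser();
                }
            }
        }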
