
Do Static Source Code Analysis Tools Really Work?

jlunavtgrad writes "I recently attended an embedded engineering conference and was surprised at how many vendors were selling tools to analyze source code and scan for bugs, without ever running the code. These static software analysis tools claim they can catch NULL pointer dereferences, buffer overflow vulnerabilities, race conditions and memory leaks. I've heard of Lint and its limitations, but it seems that this newer generation of tools could change the face of software development. Or, could this be just another trend? Has anyone in the Slashdot community used similar tools on their code? What kind of changes did the tools bring about in your testing cycle? And most importantly, did the results justify the expense?"
  • In Short, Yes (Score:5, Informative)

    by Nerdfest ( 867930 ) on Monday May 19, 2008 @12:24PM (#23463724)
    They're not perfect, and won't catch everything, but they do work. Combined with unit testing, you can get a very low bug rate. Many of these (for Java, at least) are open source, so the expense is negligible.
    • Re: (Score:3, Interesting)

      by tritonman ( 998572 )
      Yeah, they work. I wouldn't spend a lot of money on them, though; a decent compiler will give you info for stuff like buffer overflows. Most of the bugs will need to be caught at runtime, so if you are on a tight budget, definitely skip the static tools in favor of the more useful ones.
      • Re:In Short, Yes (Score:5, Informative)

        by Entrope ( 68843 ) on Monday May 19, 2008 @01:24PM (#23464430) Homepage

        My group at work recently bought one of these. They catch a lot of things that compilers don't -- for example, code like this:

        int array[4], count, ii;

        scanf("%d", &count);
        for (ii = 0; ii < count; ++ii)
        {
            scanf("%d", &array[ii]);
        }

        ... where invalid input causes arbitrarily bad behavior. They also tend to be better at inter-procedural analysis than compilers, so they can warn you that you're passing a short literal string to a function that will memcpy() from the region after that string. They do have a lot of false positives, but what escapes from compilers to be caught by static analysis tools tends to be dynamic behavior problems that are easy to overlook in testing. (If the problem were so obvious, the coder would have avoided it in the first place, right?)
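
        For reference, here is a sketch of the guarded version such a tool pushes you toward (my illustration of the obvious fix, not the analyzer's own output): clamp the loop to the real array size and check that scanf() actually parsed a number.

        #include <stdio.h>

        int main(void)
        {
            int array[4], count, ii;

            if (scanf("%d", &count) != 1 || count < 0)
                return 1;
            if (count > 4)                     /* never read past the end of array */
                count = 4;
            for (ii = 0; ii < count; ++ii)
            {
                if (scanf("%d", &array[ii]) != 1)
                    return 1;
            }
            return 0;
        }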

        • by neokushan ( 932374 ) on Monday May 19, 2008 @07:38PM (#23468880)
          I hope you realise I just spent a good 2mins googling around for an explanation of a for loop with 4 parts to it instead of the 3 I was used to seeing. I genuinely thought it was some special, relatively unknown and underused part of the C spec that I'd just not seen before.
          Then I realised it was just the HTML screwing up a less-than symbol. Then I felt a bit silly.
          Then I just had to tell someone....
    • Re:In Short, Yes (Score:4, Informative)

      by crusty_yet_benign ( 1065060 ) on Monday May 19, 2008 @12:43PM (#23463964)
      In my experience developing win/mac x-platform apps, Purify (Win), Instruments (OSX) and BoundsChecker (Win) have all been useful. They find obvious stuff that might have led to other issues. Recommended.
      • Re: (Score:3, Informative)

        by Simon80 ( 874052 )
        There's also valgrind [valgrind.org], for Linux users, and mudflap [gnu.org], for gcc users. I haven't tried mudflap yet, but valgrind is a very good runtime memory checker, and mudflap claims to do similar things.
        • Re:In Short, Yes (Score:5, Informative)

          by HalWasRight ( 857007 ) on Monday May 19, 2008 @01:54PM (#23464758) Journal
          valgrind, BoundsChecker, and I believe the others mentioned, are all run-time error checkers. These require a test case that exercises the bug. The static analysis tools the poster was asking about, like those from Coverity [coverity.com] and Green Hills [ghs.com], don't need test cases. They work by analyzing the actual semantics of the source code. I've found bugs with tools like these in code that was hard enough to read that I had to write test cases to verify that the tool was right. And it was! The bug would have caused an array overflow write under the right conditions.
          • Re: (Score:3, Interesting)

            by synthespian ( 563437 )
            Don't forget Polyspace [mathworks.com], which I personally have never used - but I would love to. Polyspace is a Standard ML [wikipedia.org] (a higher-order functional programming language) success story, because it relies heavily on the SML MLton [mlton.org] compiler. Another thing that makes it a success story is the fact the Mathworks [mathworks.com] (makers of Matlab) bought it.

            The C/C++ version does MISRA-C [misra-c2.com] (the C used in the automotive industry) too.

            There's also a version for Ada [wikipedia.org], of course.
        • by jberryman ( 1175517 ) on Monday May 19, 2008 @02:24PM (#23465142)

          There's also valgrind, for Linux users

          It's great for finding all those elusive bits of code that might be accidentally seeding a pseudo-random number generator somewhere.

    • In short, YMMV (Score:5, Informative)

      by Moraelin ( 679338 ) on Monday May 19, 2008 @01:05PM (#23464230) Journal
      My experience has been that while in the hands of people who know what they're doing, they're a nice tool to have, well, beware managers using their output as metrics. And beware even more a consultant wielding such a tool that he doesn't even understand.

      The thing is, these tools produce

      A) a lot of "false positives": code which is really OK, and everyone understands why it's OK, but the tool will still complain, and

      B) some metrics of dubious quality at best, to be taken only as a signal for a human to look at the code and understand why it is or isn't OK.

      E.g., one such tool, which I had the misfortune of sitting through a salesman's hype session for, seemed to be really little more than a glorified grep. It really just looked at the source text, not at what's happening. So for example if you got a database connection and a statement in a "try" block, it wanted to see the close statements in the "finally" block.

      Well, applied to an actual project, there was a method which just closed the connection and the statements, supplied as an array. Just because, you know, it's freaking stupid to copy-and-paste cute little "if (connection != null) { try { connection.close(); } catch (SQLException e) { // ignore }}" blocks a thousand times over in each "finally" block, when you can write it once and just call the method in your finally block. This tool had trouble understanding that it _is_ all right. Unless it saw the "connection.close()" right there, in the finally block, it didn't count.

      Other examples include more mundane stuff like the tools recommending that you synchronize or un-synchronize a getter, even when everyone understands why it's OK for it to be as it is.

      E.g., a _stateless_ class as a singleton is just an (arguably premature and unneeded) speed optimization, because some people think they're saving so much with a singleton, versus the couple of cycles it takes to do a new on a class with no members and no state. It doesn't really freaking matter if there's exactly one of it, or someone gets a copy of it. But invariably the tools will make an "OMG, unsynchronized singleton" fuss, because they don't look deep enough to see if there's actually some state that must be unique.

      Etc.

      Now, taken as something that each developer understands, runs on his own when he needs it, and uses his judgment on each point, it's a damn good thing anyway.

      Enter the clueless PHB with a metric and chart fetish, stage left. This guy doesn't understand what those things are, but might make it his personal duty to chart some progress by showing how many fewer warnings he's got from the team this week than last week. So man-hours are wasted morphing perfectly good code into something that games the tool. For each 1 real bug found, there'll be 100 harmless warnings that he makes it his personal mission to get out of the code.

      Enter the snake-oil vendor's salesman, stage right. This guy only cares about selling some extra copies to justify his salary. He'll hype to the boss exactly this possibility of generating such charts (out of mostly false positives) and managing by such charts. If the boss wasn't already of a mind to do that management anti-pattern, the salesman will try to teach him to. 'Cause that's usually the only advantage that his expensive tool has over those open source tools that you mention.

      I'm not kidding. I actually tried to corner one into:

      Me: "ok, but you said not everything it flags there is a bug, right?"

      Him: "Yes, you need to actually look at them and see if they're bugs or not."

      Me: "Then what sense does it make to generate charts based on wholesale counting entities which may, or may not be bugs?"

      Him: "Well, you can use the charts to see, say, a trend that you have less of them over time, so the project is getting better."

      Me: "But they may or may not be actual bugs. How do you know if this week's mix has more or less actual bugs than last weeks, regardless of wh
      • As I understand it, these things work by turning the code into a state diagram. It maps out the code by stepping through it and replacing variables that have values with sets of possible values. So if the code has the user enter a value for the int X, then the state of X is now whatever the user was permitted to enter. Then when the code hits an "if(X > 10)" statement the map branches, so that on one branch the state is as if the IF executed and on the other it is as if it did not.
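
        A hand-worked illustration of that value-set idea (hypothetical, not the output of any particular tool); the comments show the abstract state a tool might track for x at each point:

        #include <stdio.h>

        int check(void)
        {
            int x;

            if (scanf("%d", &x) != 1)   /* x: any int the user was permitted to enter */
                return -1;

            if (x > 10)
            {
                /* branch state: x in [11, INT_MAX] */
                return 100 / (x - 10);  /* divisor is at least 1, no divide-by-zero */
            }
            /* branch state: x in [INT_MIN, 10] */
            return 100 / (x - 10);      /* x == 10 is possible: flagged as divide-by-zero */
        }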

        The problem is that
      • Re:In short, YMMV (Score:4, Interesting)

        by JaneTheIgnorantSlut ( 1265300 ) on Monday May 19, 2008 @02:26PM (#23465156)
        Also beware of managers who insist that each item identified by the tool needs to be somehow addressed. I inherited a body of code full of comments to the effect that "the tool says this is a problem, but I looked at it and it is not".
      • Re:In short, YMMV (Score:5, Insightful)

        by TemporalBeing ( 803363 ) <bm_witness.yahoo@com> on Monday May 19, 2008 @02:31PM (#23465226) Homepage Journal

        Enter the clueless PHB with a metric and chart fetish, stage left. This guy doesn't understand what those things are, but might make it his personal duty to chart some progress by showing how much fewer warnings he's got from the team this week than last week. So useless man-hours are spent on useless morphing perfectly good code, into something that games the tool. For each 1 real bug found, there'll be 100 harmless warnings that he makes it his personal mission to get out of the code.
        I've found that eliminating compiler warnings will do a lot for finding bugs. Sure, there may be a number of "harmless" ones, but cleaning them up will still do a lot of good for the code, and make the other not-so-harmless ones stand out even more. It also gives good practice for resolving the issues, so that you become more proactive than reactive about bugs in the code. Just 2 cents.
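
        As a made-up example of the kind of thing a compiler warning surfaces (assuming gcc or clang with -Wall; the exact wording of the warning varies):

        #include <stdio.h>

        int main(void)
        {
            int errors = 0;

            /* -Wall warns here (assignment used as truth value): almost certainly meant errors == 1 */
            if (errors = 1)
                printf("errors detected\n");   /* always taken because of the bug */
            return 0;
        }
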
        • Re:In short, YMMV (Score:5, Informative)

          by Moraelin ( 679338 ) on Monday May 19, 2008 @04:47PM (#23467072) Journal
          Compiler warnings, yes, at least for a decent warning level.

          Going out of your way to satisfy a tool whose only reason to exist is to flag 10 times more stuff than -Wall, I found actually counter-productive.

          And I don't mean just as in, WOMBAT (Waste Of Money Brains And Time.) I mean as in: it teaches people to game the tool, actually hiding their real bugs. And it creates a false sense of security too.

          I've actually had to deal with a program which tested superbly on most metrics of such a tool. But only because the programmer had learned to game it. The program was really an incoherent and buggy mess. But it gamed every single tool they had in use.

          A. To start with the most obvious, some bright guy there had come up with his own CVS script which didn't let you check in unless you had commented every single method, and every single parameter and exception thrown. 'Bout damn time, eh? Wrong.

          1. This forced people to effectively overwrite the comments inherited from better documented stuff. E.g., if you had a MyGizmoInterface interface, which was superbly documented, and the MyGizmoImpl class implementing it, it forced you to copy and paste the JavaDoc comments instead of just letting JavaDoc pick them from the interface. So instead of seeing the real docs, now everyone had docs all over the place along the lines of "See MyGizmoInterface.gizmoMethod()" overwriting the actually useful ones there. Or some copied and pasted comments from 1 year ago, where one of the two gradually became obsolete. People would update their comments in one of the two, but let the other say something that wasn't even true any more. Instead of having them in one place, and letting JavaDoc copy them automatically.

          2. The particular coder of this particular program, had just used his counter-script or maybe plugin, to automatically generate crap like:

          /**
            * Method description.
            *
            * @param x method parameter
            * @param y method parameter
            * @param idx method parameter
            * @param str method parameter
            */
          I mean, _literally_. Hundreds of methods had "Method description" as their javadoc comment, and thousands of parameters total were described as "method parameter."

          B. It also included such... brain-dead metrics as measuring the cohesion of each class by the ratio of class members to class methods.

          He had learned to game that too. His code tested as superbly cohesive, although the same class and indeed the same method, could either send an email, or render a PDF, or update an XML in the database, depending on which parameters they got. But the members to methods ratio was grrrreat.

          That's really my problem with it:

          A. Somewhere along the way, they had become so confident in their tools that no one actually even checked what javadoc comments those classes had. Their script already checks that there are comments; hey, that's enough.

          B. Somewhere along the way, everyone had gotten used to just gaming a stupid tool. If the tool said you have too many or too few class members, you'd just add or remove some to keep it happy. If it complained about complexity, because it considered a large switch statement to have too many effective ifs, you just split it into several functions: one testing cases 1 to 10, one testing 11 to 20, and so on. Which actually made the code _less_ readable, and generally lower quality. There would have been ways to solve the problems better, but, eh, all that mattered was to keep the tool happy, so no one bothered.

          That's why I'd rather not turn it into a religion. Use the tool, yes, but take it as just something which you need to check and use your own judgment. Don't lose track of which is the end, and which is merely a means to that end.
          • Re: (Score:3, Informative)

            by Allador ( 537449 )
            Wow, nice story. Quite amazing, as you'd think the devs could just do things the right way for not much more work than gaming the tools.

            This kind of thing though, is ultimately a failure of management, whoever leads/runs the dev team. They should be able to see this kind of thing happening and either apply some proper motivation, change the procedures, or let some bad devs go.

            Mind you, bad developers as well. But if I were the owner, the dev mgr would get the brunt first on something like this.
      • by Ungrounded Lightning ( 62228 ) on Monday May 19, 2008 @02:36PM (#23465264) Journal
        Me: "ok, but you said not everything it flags there is a bug, right?"
        Him: "Yes, you need to actually look at them and see if they're bugs or not."
        Me: "Then what sense does it make to generate charts based on wholesale counting
                    entities which may, or may not be bugs?"
        Him: "Well, you can use the charts to see, say, a trend that you have less
                  of them over time, so the project is getting better."
        Me: "But they may or may not be actual bugs. How do you know if this week's
                  mix has more or less actual bugs than last weeks, regardless of what the
                  total there is?"
        Him: "Well, yes, you need to actually look at them in turn to see which are actual bugs."
        Me: "But that's not what the tool counts. It counts a total which includes an
                    unknown, and likely majority, number of false positives."
        Him: "Well, yes."
        Me: "So what use is that kind of a chart then?"
        Him: "Well, you can get a line or bar graph that shows how much progress
                  is made in removing them."

        Your next line is:

        Me: "So you're selling us a tool that generates a lot of false warnings
                  and a measurement on how much unnecessary extra work we've done to
                  eliminate the false warnings. Wouldn't it make more sense not to use
                  the tool in the first place and spend that time actually fixing real bugs?"

        To work, this question must be asked with the near-hypnotized manager watching.
        • by Allador ( 537449 ) on Monday May 19, 2008 @07:26PM (#23468790)
          The 'me' in this case is missing the point.

          You don't just run the tool over and over again and never adapt it to your code.

          If it produces a bunch of false positives, then you go in and modify the rules to not generate those false positives.

          That's half the point of something like this: you need to tune it to your project.

          The flip side is that if you see some devs over and over making the same kind of mistake, well you can write a new rule in it to flag that kind of thing.

          If you have an endless number of false positives that doesn't ever go down, then you are either:

          1. Not using the tool correctly.
          or
          2. Not working on a project that is amenable to this tool.

          IME, the vast majority of the time it's #1. Now you may find that for certain small or narrowly scoped projects, or those worked on by 2 super-gurus, the overhead of learning and tuning the tool for that project isn't worth it. But that's something you'd have to find out yourself, and it differs from project to project.
    • Re:In Short, Yes (Score:5, Interesting)

      by FBSoftware ( 1224962 ) on Monday May 19, 2008 @01:13PM (#23464308)
      Yes, I use the formal methods based SPARK tools (www.sparkada.com) for Ada software. In my experience, the Examiner (static analyzer) is always right (> 99.44% of the time) when it reports a problem or potential for a runtime exception. Even without SPARK, the Ada language requires that the compiler itself accomplish quite a bit of static analysis. Using Ada, it's less likely you will need a third-party static analysis tool - just use a good compiler like GNAT.
    • Re: (Score:3, Interesting)

      by samkass ( 174571 )
      Due to its dynamic nature and intermediate bytecode, Java analysis tools seem to be especially adept at catching problems. In essence, they can not only analyze the source better (because of Java's simpler syntax), they can also much more easily analyze swaths of the object code and tie it to specific source file issues.

      In particular, I've found FindBugs has an amazing degree of precision considering it's an automated tool. If it comes up with a "red" error, it's almost certainly something that should be
  • Yes. (Score:4, Insightful)

    by Dibblah ( 645750 ) on Monday May 19, 2008 @12:24PM (#23463726)
    It's another tool in the toolbox. However, the results are not necessarily easy to understand or simple to fix. For example, see the recent SSL library issue - Which exhibited minimal randomness due to someone "fixing" an (intended) uninitialized memory area.
    • Re: (Score:2, Insightful)

      by mdf356 ( 774923 )
      If you're using uninitialized memory to generate randomness, it wasn't very random in the first place.

      Not that I actually read anything about the SSL "fix".
    • Re:Yes. (Score:5, Informative)

      by Anonymous Coward on Monday May 19, 2008 @12:37PM (#23463884)
      Sigh. That bug wasn't from fixing the use of uninitialized memory, it was from being overzealous and "fixing" a second (valid, not flagged as bad by Valgrind) use of the same function somewhere near the first use.
    • Re: (Score:3, Insightful)

      by moocat2 ( 445256 )
      Assuming you are talking about the SSL issue in Debian - the original 'issue' they tried to fix was reported by Valgrind. Valgrind is a run-time analysis tool.

      While the parent makes a good point that results are not always easy to understand or fix - since the original post is about static vs run-time analysis tools, it's good to understand that they each have their problems.
  • by mdf356 ( 774923 ) <mdf356@gma[ ]com ['il.' in gap]> on Monday May 19, 2008 @12:24PM (#23463734) Homepage
    Here at IBM we have an internal tool from research that does static code analysis.

    It has found some real bugs that are hard to generate a testcase for. It has also found a lot of things that aren't bugs, just like -Wall can. Since I work in the virtual memory manager, a lot more of our bugs can be found just by booting, compared to other domains, so we didn't get a lot of new bugs when we started using static analysis. But even one bug prevented can be worth multiple millions of dollars.

    My experience is that, just like enabling compiler warnings, any way you have to find a bug before it gets to a customer is worth it.
  • OSS usage (Score:5, Insightful)

    by MetalliQaZ ( 539913 ) on Monday May 19, 2008 @12:25PM (#23463742)
    If I remember correctly, one of these companies donated their tool to many open source projects, including Linux and the BSDs. I think it led to a wave of commits as 'bugs' were fixed. It seemed like a pretty good endorsement to me...
  • by MadShark ( 50912 ) on Monday May 19, 2008 @12:29PM (#23463802)
    I use PC-lint religiously for my embedded code. In my opinion it has the most bang for the buck. It is fast, cheap and reliable. I've found probably thousands of issues and potential issues over the years using it.

    I've also used Polyspace. In my opinion, it is expensive, slow, can't handle some constructs well and has a *horrible* signal to noise ratio. There is also no mechanism for silencing warnings in future runs of the tool (like the -e flag in lint). On the other hand, it has caught a (very) few issues that PC-Lint missed. Is it worth it? I suppose it depends if you are writing systems that can kill people if something goes wrong.
    • I've also used Polyspace. In my opinion, it is expensive, slow, can't handle some constructs well and has a *horrible* signal to noise ratio.

      The signal-to-noise ratio is pretty horrendous in most static analysis tools for C and C++, IME. This is my biggest problem with them. If I have to go through and document literally thousands of cases where a perfectly legitimate and well-defined code construct should be allowed without a warning because the tool isn't quite sure, I rapidly lose any real benefit and everyone just starts ignoring the tool output. Things like Lint's -e option aren't much good as workarounds either, because then even if you'

      • That is the problem for a beginner. When you first configure PC-Lint you need to tune the configuration to ignore stuff that you don't have a problem with, i.e. assignments within a test. After that you need to configure your project for lint, setting up the lint files to include the correct headers and such. Then the noise is not too bad. Just make sure that what you think is noise really is noise.
        • Re:signal to noise (Score:4, Insightful)

          by McGregorMortis ( 536146 ) on Monday May 19, 2008 @02:11PM (#23465002)
          If you're tuning it to ignore assignment within a test, i.e. "if( x=y ) {}", then you're missing one of the great points of using PC-Lint.

          That code is simply in poor taste, even if it works. What PC-Lint, and good taste, say you should do is change the code to "if( (x=y) != 0 ) {}". This will satisfy PC-Lint, and also makes your intention very clear to the next programmer who comes along. And, best of all, it doesn't generate a single byte of extra code, because you've only made explicit what the compiler was going to do anyway.
    • Re: (Score:3, Funny)

      by somersault ( 912633 )

      I suppose it depends if you are writing systems that can kill people if something goes wrong.
      The best way to avoid bugs in that case would be for the developers to test the systems themselves - then they'd be a lot more careful! Plus it helps natural selection to weed out the sloppy coders :) In that case you'd probably want to write all the code from scratch though to ensure that nobody else's bugs kill you.
    • by sadr ( 88903 )
      Here's another vote for PC-Lint by http://www.gimpel.com/ [gimpel.com]

      You really can't beat it for the money, and it is probably as comprehensive as some of the other more expensive products for C and C++.

  • They do work (Score:5, Interesting)

    by Anonymous Coward on Monday May 19, 2008 @12:29PM (#23463804)
    Static analysis does catch a lot of bugs. Mind you, it's no silver bullet, and frankly it's better, given the choice, to target a language+environment that doesn't suffer problems like dangling pointers in the first place (null pointers, however, don't seem to be anything Java or C# are really interested in getting rid of).

    Even lint is decent -- the trick is just using it in the first place. As for expense, if you have more than, oh, 3 developers, they pay for themselves by your first release. Besides, many good tools such as valgrind are free (valgrind isn't static, but it's still useful).
  • Yes (Score:4, Informative)

    by progressnerd ( 1189895 ) on Monday May 19, 2008 @12:30PM (#23463810)
    Yes, some static analysis tools really work. FindBugs [sourceforge.net] works well for Java. Fortify [fortify.com] has had good success finding security vulnerabilities. These tools take static checking just a step beyond what's offered by a compiler, but in practice that's very useful.
    • Re: (Score:3, Interesting)

      by Pollardito ( 781263 )

      These tools take static checking just a step beyond what's offered by a compiler, but in practice that's very useful.
      That's a good point, in that compiler warnings and errors are really just the result of static analysis, and I think everyone has experience finding bugs thanks to those.
  • The best thing these tools can do is to tell everyone what they probably already know -- that a particular coder or coders are responsible for a whole ton of the errors in the code. I think it'd be much better to move that coder to some other part of the company ... it would be way cheaper than trying to fix all their bugs.
  • by wfstanle ( 1188751 ) on Monday May 19, 2008 @12:32PM (#23463824)
    I am presently working on an update to static analysis tools. Static analysis tools are not a silver bullet, but they are still relevant. Look at them as a starting point in your search for programming problems. A lot of potential anomalies can be detected, like the use of uninitialized variables. Of course, a good compiler can use these tools as part of the compilation process. However, there are many things that a static analyzer can't detect. For this, you need some way to do dynamic analysis (execution-based testing). As such, the tools we are developing also include dynamic testing.
  • Yes, they work. (Score:5, Insightful)

    by Anonymous Coward on Monday May 19, 2008 @12:32PM (#23463832)
    You will probably be amazed at what you will catch with static analysis. No, it's not going to make your program 100% bug-free (or even close), but every time I see code die on an edge case that would've been caught with static analysis, it makes me want to kill a kitten (and I'm totally a "cat person", mind you).

    Static analyzers will catch the stupid things - edge cases that fail to initialize a var but then lead straight to de-referencing it, memory leaks on edge-case code paths, etc. - that shouldn't happen but often do, and get in the way of finding real bugs in your program logic.
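
    A minimal made-up example of such an edge case (an illustration, not anything from the parent post):

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const char *p;                   /* only assigned on the "normal" path */

        if (argc > 1)
            p = argv[1];

        /* with no arguments, p is uninitialized here; a static analyzer flags the
           possibly-uninitialized read, while casual testing may never hit this path */
        printf("%s\n", p);
        return 0;
    }
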
  • by Idaho ( 12907 ) on Monday May 19, 2008 @12:33PM (#23463840)
    Such tools work in a very similar way to what is already being done in many modern language compilers (such as javac). Basically, they implement semantic checks that verify whether the program makes sense, or is likely to work as intended in some respect. For example, they will check for likely security flaws, memory management/leaking or synchronisation issues (deadlock, access to shared data outside critical sections, etc.), or other kind of checks that depend on whatever domain the tool is intended for.

    It would probably be more useful if you could state which kind of problem you are trying to solve and which tools you are considering to buy. That way, people who have experience with them could suggest which work best :)
    • by Yvanhoe ( 564877 )
      It looks like you are reading memory from unallocated memory space, you should comment line 427 in ssl_seed_generator.h

      I also see how it could bring a distribution to its knees. But I agree that they will probably be worthwhile 90% of the time.
      • Re: (Score:3, Insightful)

        by mrogers ( 85392 )
        Depends on whether one interprets "you should comment" as "you should document" or "you should comment out", I guess. :-)
  • Testing cycle (Score:5, Informative)

    by mdf356 ( 774923 ) <mdf356@gma[ ]com ['il.' in gap]> on Monday May 19, 2008 @12:33PM (#23463856) Homepage
    I forgot to answer your other question.

    Since we've had the tool for a while and have fixed most of the bugs it has found, we are required to run static analysis on new code for the latest release now (i.e. we should not be dropping any new code that has any error in it found via static analysis).

    Just like code reviews, unit testing, etc., it has proved useful and was added to the software development process.
  • by BigBlueOx ( 1201587 ) on Monday May 19, 2008 @12:34PM (#23463862)
    Ya can't beat a good "Lint party" after all the testing is done! You'll find all kinds of cool stuff that slipped through your testing suites.

    However, static code analysis is just one part of the bug-finding process. For example, in your list, in my limited experience, I have found that buffer overflows and NULL pointer derefs get spotted really well. Race conditions? Memory leaks? Hmm. Not so good.

    YMMV. Don't expect magic. Oh, to hell with it, just let the end-users test it *ow!*
  • Yes. (Score:4, Informative)

    by Anonymous Coward on Monday May 19, 2008 @12:35PM (#23463868)
    The Astrée [astree.ens.fr] static analyser (based on abstract interpretation) proved the absence of run-time errors in the primary control software of the Airbus A380.
    • It may be the best tool in the world - I admit I do not know it. But the word "proved" makes me suspicious. To me this sounds like the typical - and widespread - management speak to make business decision makers and their insurers sleep well. Thank you! This gives a perfect example of how misleading wording is used even by educational bodies.
      Is this a proof, or do some mistakenly think they're safe?

      Who "proved" Astree to be error free in the first place?!
      • Who "proved" Astree to be error free in the first place?!

        The creators of Astrée, presumably. Proving in the scientific sense that a piece of software is correct can definitely be done, it's just really expensive most of the time. In any case they claim that Astrée is sound, i.e. catches all errors, but that the precision can be adjusted to reduce or increase the number of false positives, depending on how much time you have. The A380 fly-by-wire analysis was apparently the first case where no false positives were reported (and no true positives either, of

      • Re: (Score:3, Informative)

        by anpe ( 217106 )
        It's not "error free", it's _run-time error free_. Which according to the GP's link means that no undefined behaviour according to the C standard or user added asserts may happen.
        So for example, the program won't ever divide by zero or overflow an integer while summing.
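
        Two hand-made examples of those run-time errors (illustration only; an analyzer in the Astrée style aims to prove they cannot happen for any input, or to flag the inputs for which they can):

        int average(int sum, int n)
        {
            return sum / n;              /* flagged unless n == 0 is provably impossible */
        }

        int add(int a, int b)
        {
            return a + b;                /* flagged: signed overflow is undefined behaviour in C */
        }
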
  • Yes (Score:5, Informative)

    by kevin_conaway ( 585204 ) on Monday May 19, 2008 @12:36PM (#23463876) Homepage

    Add me to the Yes column

    We use them (PMD and FindBugs) for eliminating code that is perfectly valid, yet has bitten us in the past. Two Java examples are unsynchronized access to a static DateFormat object and using the commons IOUtils.copy() instead of IOUtils.copyLarge().

    Most tools are easy to add to your build cycle and repay that effort after the first violation

  • MIT Site (Score:4, Interesting)

    by yumyum ( 168683 ) on Monday May 19, 2008 @12:38PM (#23463890)
    I took a very cool graduate-level class at MIT from Dr. Michael Ernst [mit.edu] about this very subject. Check out some of the projects listed at http://groups.csail.mit.edu/pag/ [mit.edu].
  • Very useful in .Net (Score:3, Interesting)

    by Toreo asesino ( 951231 ) on Monday May 19, 2008 @12:39PM (#23463904) Journal
    While not the be-all-and-end-all of code quality metrics, VS2008/Team Foundation Server has this built-in now, so you can stop developers from checking in complete junk if you so wish - http://blogs.msdn.com/fxcop/archive/2007/10/03/new-for-visual-studio-2008-code-metrics.aspx [msdn.com]

    FxCop has gone server-side too (for those familiar with .Net development). It takes one experienced dev to customise the rules, and you've got a fairly decent protection scheme against insane code commits.
    • Re: (Score:3, Insightful)

      by SpryGuy ( 206254 )
      If you're doing C# development, you should really check out JetBrains "ReSharper". Version 4.0 is due out soon, which supports all the C#3.0 syntaxes (extension methods, LINQ, lambda expressions, etc), but even 3.x is a worthwhile tool. It does real-time syntax checking (as you type) so you don't have to compile to find out you have a syntax error, as well as tons of refactorings, and very useful static code analysis.

      Once you develop with Resharper, you really can't go back to using VS without it... it's
  • Yes, static code tools do work well for finding certain classes of issues. However, they are not a panacea. They do not understand the semantics that are intended and cannot effectively replace code reviews.
  • by BrotherBeal ( 1100283 ) on Monday May 19, 2008 @12:40PM (#23463914)
    the more they stay the same. Static code analysis tools are just like smarter compilers, better language libraries, new-and-improved software methodologies, high-level dynamic languages, modern IDE's, automated unit test runners, code generators, document tools and any number of other software tools that have shown up over the past few decades.

    Yes, static code analysis can help improve a team's ability to deliver a high-quality product, if it is embraced by management and its use is enforced. No, it will not change the face of software development, nor will it turn crappy code into good code or lame programmers into geniuses. At best, when engineers and management agree this is a useful tool, it can do almost all the grunt work of code cleanup by showing exactly where problem code is and suggesting extremely localized fixes. At worst, it will wind up being a half-assed code formatter since nobody can agree on whether the effort is necessary.

    Just like all good software-engineering questions, the answer is 'it depends'.
  • by RJCantrell ( 1290054 ) on Monday May 19, 2008 @12:40PM (#23463918)
    In my own corner of the world (.NET Compact Framework 2.0 on old, arcane hardware), they certainly don't. Each time I get optimistic and search for new or previously-missed static analysis tools, all roads end up leading to FxCop. Horrible signal-to-noise ratio, and a relatively small number of real detectable problems. That said, I'm always willing to submit myself to the genius of the slashdot masses. If you know of a great one, feel free to let me know. = )
    • by FishWithAHammer ( 957772 ) on Monday May 19, 2008 @12:59PM (#23464156)
      I believe Gendarme [mono-project.com] might be of some use. Just don't invoke the Portability assemblies and I can't see why it'd fail.
    • by boatboy ( 549643 )
      I've never had problems with FxCop - it catches tons of common and not-so-common real-world errors, lets you turn off rules you don't care about, and links to usually helpful MSDN articles that explain the rules. I'd say everything in the Security and Design categories is well worth the few minutes it takes to run a free tool. Having it baked into VS08 is even better, but I was a big fan of it with 05/2.0 as well.
  • by Chairboy ( 88841 ) on Monday May 19, 2008 @12:41PM (#23463938) Homepage
    At Symantec, I used to use these tools to help plan tests. I wrote a simple code velocity tool that monitored Perforce checkins and generated code velocity graphs and alerts in different components as time passed. With it, QA could easily see which code was being touched the most and dig down to the specific changelists and see what was going on. It really helped keep good visibility on what needed the most attention and helped everyone avoid being 'surprised' by someone dropping a bunch of changes into an area that wasn't watched carefully. During the final days of development before our products escaped to manufacturing, this provided vital insight into what was happening.

    I've since moved on, and I think the tool has since gone offline, but I think there's a real value to doing static analysis as part of the planning for everything else.
  • Coverity & Klocwork (Score:5, Informative)

    by Anonymous Coward on Monday May 19, 2008 @12:50PM (#23464056)
    We have had presentations from both Coverity and Klocwork at my workplace. I'm not entirely fond of them, but they're wayyyyy better than 'lint'. :) I much prefer using "Purify" whenever possible, since run-time analysis tends to produce fewer false-positives.

    My comments would be:

    (1) Klockwork & Coverity tend to produce a lot of "false positives". And by a lot, I mean, *A LOT*. For every 10000 "critical" bugs reported by the tool, only a handful may be really worth investigating. So you may spend a fair bit of time simply weeding through what is useful and what isn't.

    (2) They're expensive. Coverity costs $50k for every 500k lines of code per year... We have a LOT more code than this. For the price, we could hire a couple of guys to run all of our tools through Purify *and* fix the bugs they found. Klocwork is cheaper; $4k per seat, minimum number of seats.

    (3) They're slow. It takes several days running non-stop on our codebase to produce the static analysis databases. For big projects, you'll need to set aside a beefy machine to be a dedicated server. With big projects, there will be lots of bug information, so the clients tend to get bogged down, too.

    In short: It all depends on how "mission critical" your code is; is it important, to you, to find that *one* line of code that could compromise your system? Or is your software project a bit more tolerant? (e.g., If you're writing nuclear reactor software, it's probably worthwhile to you to run this code. If you're writing a video game, where you can frequently release patches to the customer, it's probably not worth your while.)
    • Re: (Score:3, Interesting)

      by EPAstor ( 933084 )

      I did some work running Coverity for EnterpriseDB, against the PostgreSQL code base (and yes, we submitted all patches back, all of which were committed).

      Based on my experience:

      1) Yes, Coverity produced a LOT of false positives - a few tens of thousands for the 20-odd true critical bugs we found. However, the first step in working with Coverity is configuring it to know what can be safely ignored. After about 2 days of customizing the configuration (including points where I could configure it to under

  • Trends or Crutches? (Score:4, Interesting)

    by bsDaemon ( 87307 ) on Monday May 19, 2008 @12:51PM (#23464066)
    I'll probably get modded to hell for asking but seriously -- all these new trends, tools, etc - are they not just crutches, which in the long run are seriously going to diminish the quality of output by programmers?

    For instance, we put men on the moon with a pencil and a slide rule. Now no one would dream of taking a high school math class with anything less than a TI-83+.

    Languages like Java and C# are being hailed while languages like C are derided; many posts here on Slashdot call it outmoded and say it should be done away with, yet Java and C# are themselves built using C.

    It seems to me that there is no substitute for actually knowing how things work at the most basic level and doing them by hand. Can a tool like Lint help? Yes. Will it catch everything? Likely not.

    As generations of kids grow up with the automation made by the generations who came before, their incentive to learn how the basic tools work diminishes, approaching 0, and I think we're in for something bad.

    As much as people bitch about kids who were spoiled by BASIC, you'd think that they'd also complain about all the other spoilers. Someday all this new, fancy stuff could break, and someone who only knows Java, and even then checks all their source with automated tools, will likely not be able to fix it.

    Of course, this is more of just a general criticism and something I've been thinking about for a few weeks now. Anyway, carry on.
    • by Lord_Frederick ( 642312 ) on Monday May 19, 2008 @01:08PM (#23464256)
      Any tool can be considered a "crutch" if it's misused. I don't think anyone that put men on the moon would want to return to sliderules, but a calculator is only a crutch if the user doesn't understand the underlying fundamentals. Debugging tools are just tools until they stop simply performing tedious work and start doing what the user is not capable of understanding.
      • Re: (Score:3, Interesting)

        by bsDaemon ( 87307 )
        In the 7th grade I left my calculator at home one day when I had a math test. I did, however, have a Jepsen Flight Computer (a circular slide rule) that my dad (a commercial airline pilot) had given me, because I was going to a flying lesson after school.

        I whipped out my trusty slide rule and commenced to using it. The teacher wanted to confiscate it and thought that I was cheating with some sort of high-tech device... mind you it was just plastic and cardboard. I'm sure you've all seen one before.

        I'm on
    • Re: (Score:2, Insightful)

      by flavor ( 263183 )
      Did you walk uphill in the snow, both ways, when you were a kid, too? At one point in time, high-level languages like ASSEMBLER were considered crutches for people who weren't Real Programmers [catb.org]. Get some perspective!

      Look, people make mistakes, and regardless of how good a programmer you are, there is a limit to the amount of state you can hold in your head, and you WILL dereference a NULL pointer, or create a reference loop, at some point in your career.

      Using a computer to catch these errors is just another
    • by gmack ( 197796 ) <gmack@noSpAM.innerfire.net> on Monday May 19, 2008 @01:39PM (#23464596) Homepage Journal
      These tools require skill. Blindly fixing things that Lint shows up can introduce new bugs or conversely using lint notation to shut the warnings off can mask bugs.

      I also don't think new languages help bad programmers much. Bad code is still bad code, so now instead of crashing it will just leak memory or just not work right.

      On a software project I worked on before, our competition spent two years and two million dollars, did their code in Visual Basic and MSSQL, and abandoned their effort when, no matter what hardware they threw at it, they couldn't get their software to handle more than 400 concurrent users. We did our project in C and, with a team of 4, built something in about a year that handled 1200 users on a quad-CPU P III 400MHz Compaq. Even when another competitor posed as a client and borrowed some of my ideas (they added a comms layer instead of using the SQL server for communication), they still required a whole rack of machines to do what we did with one badly out-of-date test machine.

      C is a fine tool if you know how to use it so I doubt it will go away any time soon.
  • To a degree, yes (Score:5, Interesting)

    by gweihir ( 88907 ) on Monday May 19, 2008 @12:53PM (#23464098)
    You actually need to tolerate a number of false positives in order to get good coverage of the true bugs. That means you have to follow-up on every report in detail and understand it.

    However these things do work and are highly recommended. If you use other advanced techniques (like Design by Contract), they will be a lot less useful though. They are best for traditional code that does not have safety-nets (i.e. most code).

    Stay away from tools that do this without using your compiler. I recently evaluated some static analysis tools and found that the tools that do not use the native compilers can have serious problems. One example was an incorrectly set symbol in the internal compiler of one tool, which could easily change the code functionality drastically. Use tools that work from a build environment and utilize the compiler you are using to build.
  • by SpryGuy ( 206254 ) on Monday May 19, 2008 @12:57PM (#23464138)
    I've used the above two tools ... the IntelliJ IDEA IDE for Java development, and the Visual Studio plug-in Resharper for C# development ... and can't imagine living without them.

    Of course, they provide a heck of a lot more than just static code analysis, but the ability to see all syntax errors in real time, and all logic errors (like potential null-references, dead code, unnecessary 'else' statements, etc, etc) saves way too much time, and has, in my experience, resulted in much better, more solid code. When you add on all the intelligent refactoring, vastly improved code navigation, and customizable code-generation features of these utilities, it's a no-brainer.

    I wouldn't program without them.
  • Yes, absolutely (Score:5, Informative)

    by Llywelyn ( 531070 ) on Monday May 19, 2008 @01:02PM (#23464190) Homepage

    FindBugs is becoming increasingly widespread on Java projects, for example. I found that between it and JLint I could identify a substantial chunk of problems caused by inexperienced programmers, poor design, hastily written code, etc. JLint was particularly nice for potential deadlocks, while FindBugs was good for just about everything else.

    For example:

    • Failure to make null checks.
    • Ignoring exceptions
    • Defining equals() but not hashCode() (and the other variations)
    • Improper use of locks.
    • Poor or inconsistent use of synchronization.
    • Failure to make defensive copies.
    • "Dead stores."
    • Many others [sourceforge.net]

    At least in the Java world, I wish more people would use them. It would make my job so much easier.

    My experience in the Python world is that pylint is less interesting than FindBugs: many of the more interesting bugs are hard problems in a dynamically typed language and so it has more "religious style issues" built in that are easier to test for. It still provides a great deal of useful output once configured correctly, and can help enforce a consistent coding standard.

  • by iamwoodyjones ( 562550 ) on Monday May 19, 2008 @01:04PM (#23464208) Journal
    I have used static analysis as part of our build process on our Continuous Integration machines, and it's definitely worth your time to set it up and use it. We use FindBugs with our Java code and have it output HTML reports on a nightly basis. Our team lead comes in early in the morning, peruses them, and marks each item either "Suppress" or to be fixed. We shoot for zero bugs, either by suppressing them if they aren't bugs or by fixing them. FindBugs doesn't give too many false positives, so it works great.

    Could this be just another trend?

    I don't worry about what's "trendy" or not. Just give the tool a shot in your group and see if it helps/works for you or not. If it does keep using it otherwise abandon it.

    What kind of changes did the tools bring about in your testing cycle?

    We use it _before_ the test cycle. We use it to catch mistakes such as "Whoops! Dereferenced a pointer there, my bad" before going into the test cycle.

    And most importantly, did the results justify the expense?

    Absolutely. The startup cost of adding static analysis for us was one developer for 1/2 a day to setup FindBugs to work on our CI build on a nightly basis to give us HTML reports. After that, the cost is our team lead to check the reports in the morning (he's an early riser) and create bug reports based on them to send to us. Some days there's no reports, other days (after a large check-in) it might be 5-10 and about an hour of his time.

    It's best to view this tool as preventing bugs, synchronization issues, performance issues, you-name-it issues before they reach the hands of testers. But you can extend several of the tools, like FindBugs, to add new static analysis test cases. So if a tester finds a common problem that affects the code, you can go back and write a static analysis case for that, add it to the tool, and the problem shouldn't reach the tester again.

  • Buyer (User) Beware (Score:3, Interesting)

    by normanjd ( 1290602 ) on Monday May 19, 2008 @01:05PM (#23464228) Homepage
    We use them more for optimizing code than anything else... The biggest problem we see is that there are often false positives... A senior person can easily look at recommendations and pick what's needed... A junior person, not so much, which we learned the hard way...
  • by deep-deep-blue ( 1055812 ) on Monday May 19, 2008 @01:07PM (#23464242)
    Another good point for using lint is that after a while a programmer learns the way, and the outcome is better code in a shorter time. Of course, I also found that there are a few ways to avoid lint errors/warnings that lead to some very ugly bugs.
  • Many, but never all (Score:5, Informative)

    by mugnyte ( 203225 ) on Monday May 19, 2008 @01:13PM (#23464302) Journal

      Short version:

          There are real bugs, with huge consequences, that can be detected with static analysis.
          The tools are easy to find and worth the price, depending on the customer base you have.
          In the end, they cannot detect "all" bugs that could arise in the code.

      Worth it?
          Only you can decide, but after a few sessions learning why tools flag suspect code, if you take those suggestions to heart, you will be a better coder.
  • Absolutely (Score:2, Interesting)

    At my little corner of Lockheed Martin we use Klocwork [klocwork.com] and LDRA [ldra.com] to analyze C/C++ embedded code for military hardware. Since the various compilers for each contract aren't nearly as full-featured as say, Visual Studio or Eclipse, I've found static code analysis tools invaluable. Can't comment on the cost/results ratio though, since I don't purchase stuff. =)
  • by ncw ( 59013 ) on Monday May 19, 2008 @01:19PM (#23464382) Homepage
    The linux kernel developers use a tool originally written by Linux Torvalds for static analysis - sparse.

    http://www.kernel.org/pub/software/devel/sparse/ [kernel.org]

    Sparse has some features targeted at kernel development - for instance, spotting mix-ups between kernel and user space pointers, and a system of code annotations.
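
    A rough sketch of what such an annotation looks like (assuming the kernel's __user marker, which expands to a sparse address-space attribute only when sparse defines __CHECKER__, and to nothing for the ordinary compiler):

    #ifdef __CHECKER__
    # define __user __attribute__((noderef, address_space(1)))
    #else
    # define __user
    #endif

    /* buf points into user space: it must be copied (e.g. with copy_from_user), not dereferenced */
    long first_byte(const char __user *buf)
    {
        return *buf;   /* sparse warns: dereference of noderef (user-space) pointer */
    }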

    I haven't used it but I do see on the kernel mailing list that it regularly finds bugs.
    • Re: (Score:3, Funny)

      by ncw ( 59013 )
      s/Linux Torvalds/Linus Torvalds/ - I keep making that typo ;-)
  • WHOA... nice timing (Score:2, Interesting)

    by w00f ( 872376 )
    YOU, sir, have amazing timing! I just wrote a 2-part article on this topic! Interesting... mine was published here: http://portal.spidynamics.com/blogs/rafal/archive/2008/05/06/Static-Code-Analysis-Failures.aspx The Solution: http://portal.spidynamics.com/blogs/rafal/archive/2008/05/15/Hybrid-Analysis-_2D00_-The-Answer-to-Static-Code-Analysis-Shortcomings.aspx [spidynamics.com] Comments welcome!! Interesting that this topic is getting so much attention all of a sudden.
  • by nguy ( 1207026 ) on Monday May 19, 2008 @01:25PM (#23464448)
    Generally, these tools make up for deficiencies in the underlying languages; better languages can guarantee absence of these errors through their type systems and other constructs. Furthermore, these tools can't give you yes/no answers, they only warn you about potential sources of problems, and many of those warnings are spurious.

    I've never gotten anything useful out of these tools. Generally, encapsulating unsafe operations, assertions, unit testing, and using valgrind, seem both necessary and sufficient for reliably eliminating bugs in C++. And whenever I can, I simply use better languages.
  • by tikal_work ( 885055 ) on Monday May 19, 2008 @01:36PM (#23464566)

    Something that we've found incredibly useful here and in past workplaces was to watch the _differences_ between Gimpel PC-Lint runs, rather than just the whole output.

    The output for one of our projects, even with custom error suppression and a large number of "fixups" for lint, borders on 120MiB of text. But you can quickly reduce this to a "status report" consisting of statistics about the number of errors -- and with a line-number-aware diff tool, report just any new stuff of interest. It's easy to flag common categories of problems so your engine can raise them to the top of the notification e-mails.

    Keeping all this data around (it's text, it compresses really well) allows you to mine it in the future. We've had several cases where Lint caught wind of something early on, but it was lost in the noise or a rush to get a milestone out -- when we find and fix it, we're able to quickly audit old lint reports both for when it was introduced and also if there are indicators that it's happening in other places.

    And you can do some fun things like do analysis of types of warnings generated by author, etc -- play games with yourself to lower your lint "score" over time...

    The big thing is keeping a bit of time for maintenance (not more than an hour a week, at this point) so that the signal/noise ratio of the diffs and stats reports that are mailed out stays high. Talking to your developers about what they like / don't like and tailoring the reports over time helps a lot -- and it's an opportunity to get some surreptitious programming language education done, too.

  • To summarize... (Score:3, Insightful)

    by kclittle ( 625128 ) on Monday May 19, 2008 @01:49PM (#23464708)
    Static analysis is a tool. In good hands, it is a valuable tool. In expert hands, it can be invaluable, catching really subtle bugs that only show up in situations unlike anything you've ever tested -- or imagined to test. You know, situations like what your customers will experience the weekend after a major upgrade (no joking...)
  • by Incster ( 1002638 ) on Monday May 19, 2008 @01:54PM (#23464764)
    You should strive to make your code as clean as possible. Turn on maximum warnings from your compiler, and don't allow code that generates warnings to be checked in to your source repository. Use static analysis tools, and make sure your code passes without issue there as well. These tools will generate many false positives, but if you learn to write in a style that avoids triggering warnings, quality will go up. You may be smarter than Lint, but the next guy that works on the code may not be. Static analysis tools are just another tool in the tool box. Also use dynamic analysis tools like Purify, valgrind, or whatever works in your environment. Writing quality code is hard. You need all the help you can get.
  • A double edged sword (Score:3, Informative)

    by xenocide2 ( 231786 ) on Monday May 19, 2008 @02:34PM (#23465246) Homepage
    Static analysis tools are common in the open source world. The lint name is well known enough that many projects make theirs a pun on it, ala lintian. A few years ago a local root exploit in X was discovered by running these sorts of checks. But generally, static analysis tools require human review -- with large code bases they generate large numbers of false positives, especially the dumber ones. This leads to trouble for perfectionists, a common trait among software developers interested in bug fix analysis. For example, the recent massive Debian vulnerability was caused by an overzealous developer trying to fix static analysis flags. One of these flags was valid, one was not, and removing both removed nearly all entropy from the RNG.

    In the more general sense, static analysis cannot find all bugs. There's a trivial proof: a program stuck in an infinite loop is a bug, but finding all such loops would solve the halting problem. Handling interrupts and the like also causes reasoning problems, as it's very hard, if not computationally intractable, to prove multi-threaded software is safe. So static analysis won't rid the embedded world of watchdog timers and other software failure recovery crap.
  • they are useful (Score:3, Insightful)

    by elmartinos ( 228710 ) on Monday May 19, 2008 @03:10PM (#23465700) Homepage
    It's not a trend, it is something developers have been doing for a long time. We have a build system here that automatically compiles and runs unit tests, and when something fails the developer gets an email. We try to automate as much as possible, so we also have several static code analysis tools like PMD, FindBugs and Checkstyle installed. None of them is perfect, but they all detect at least some problems; it's better than nothing. It is also important that these tools can be switched off so that they don't get annoying. PMD does this very nicely: you can disable checks with method-level granularity with a simple annotation at places where appropriate.
  • by Animats ( 122034 ) on Monday May 19, 2008 @03:22PM (#23465834) Homepage

    Several posters have cited the "halting problem" as an issue. It's not.

    First, the halting problem does not apply to deterministic systems with finite memory. In a deterministic system with finite memory, eventually you must repeat a state, or halt. So that disposes of the theoretical objection.

    In practice, deciding halting isn't that hard. The general idea is that you have to find some "measure" of each loop which is an integer, gets smaller with each loop iteration, and never goes negative. If you can come up with a measure expression for which all those properties are true, you have proved termination. If you can't, the program is probably broken anyway. Yes, it's possible to write loops for which proof of termination is very hard. Few such programs are useful. I've actually encountered only one in a long career, the termination condition for the GJK algorithm for collision detection of convex polyhedra. That took months of work and consulting with a professor at Oxford.
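
    A trivial illustration of such a measure (my example, not the parent's): in the loop below the quantity n - i is an integer, never negative while the loop runs, and strictly decreases every iteration, which is exactly the argument a termination prover wants to see.

    long sum_first(long n)
    {
        long total = 0;
        long i;

        for (i = 0; i < n; ++i)    /* measure: n - i, stays >= 0 and decreases by 1 */
            total += i;
        return total;
    }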

    The real problem with program verification is the C programming language. In C, the compiler has no clue what's going on with arrays, because of the "pointer=array" mistake. You can't even talk about the size of a non-fixed array in the language. This is the cause of most of the buffer overflows in the world. Every day, millions of computers crash and millions are penetrated by hostile code from this single bad design decision.

    That's why I got out of program verification when C replaced Pascal. I used to do [acm.org] this stuff. [animats.com]

    Good program verification systems have been written for Modula 3, Java, C#, and Verilog. For C, though, there just isn't enough information in the source to do it right. Commercial tools exist, but they all have holes in them.
