Do Static Source Code Analysis Tools Really Work?
jlunavtgrad writes "I recently attended an embedded engineering conference and was surprised at how many vendors were selling tools to analyze source code and scan for bugs, without ever running the code. These static software analysis tools claim they can catch NULL pointer dereferences, buffer overflow vulnerabilities, race conditions and memory leaks. I've heard of Lint and its limitations, but it seems that this newer generation of tools could change the face of software development. Or, could this be just another trend? Has anyone in the Slashdot community used similar tools on their code? What kind of changes did the tools bring about in your testing cycle? And most importantly, did the results justify the expense?"
In Short, Yes (Score:5, Informative)
Yes (Score:4, Informative)
Re:Just like compiler warnings... (Score:5, Informative)
Testing cycle (Score:5, Informative)
Since we've had the tool for a while and have fixed most of the bugs it has found, we are now required to run static analysis on new code for the latest release (i.e., we should not be dropping in any new code with errors findable via static analysis).
Just like code reviews, unit testing, etc., it has proved useful and was added to the software development process.
Yes. (Score:4, Informative)
Yes (Score:5, Informative)
Add me to the Yes column
We use them (PMD and FindBugs) for eliminating code that is perfectly valid, yet has bitten us in the past. Two Java examples are unsynchronized access to a static DateFormat object and using the commons IOUtils.copy() instead of IOUtils.copyLarge().
Most tools are easy to add to your build cycle and repay that effort after the first violation they catch.
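A minimal Java sketch of the first hazard mentioned above, the shared static DateFormat (class and field names here are invented for illustration):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatHazard {
    // BUG that tools like FindBugs flag: SimpleDateFormat is not thread-safe,
    // so concurrent format()/parse() calls on this shared static instance
    // can silently corrupt its internal state and produce garbage dates.
    static final SimpleDateFormat SHARED = new SimpleDateFormat("yyyy-MM-dd");

    // One common fix: give each thread its own instance.
    static final ThreadLocal<SimpleDateFormat> SAFE =
        ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static void main(String[] args) {
        // Single-threaded use is always fine; the bug only bites under load,
        // which is exactly why it survives testing and code review.
        System.out.println(SAFE.get().format(new Date(0L)));
    }
}
```

The point is that the code is perfectly valid Java, so only a rule that knows this specific API pitfall will catch it.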
Re:Yes. (Score:5, Informative)
Re:Yes. (Score:5, Informative)
Re:In Short, Yes (Score:4, Informative)
Coverity Reports Open Source Security Making Grea (Score:5, Informative)
http://it.slashdot.org/article.pl?sid=08/01/11/1818241 [slashdot.org]
- doug
Re:OSS usage (Score:3, Informative)
Coverity & Klocwork (Score:5, Informative)
My comments would be:
(1) Klocwork & Coverity tend to produce a lot of "false positives". And by a lot, I mean, *A LOT*. For every 10000 "critical" bugs reported by the tool, only a handful may be really worth investigating. So you may spend a fair bit of time simply weeding through what is useful and what isn't.
(2) They're expensive. Coverity costs $50k for every 500k lines of code per year... We have a LOT more code than this. For the price, we could hire a couple of guys to run all of our tools through Purify *and* fix the bugs they found. Klocwork is cheaper: $4k per seat, with a minimum number of seats.
(3) They're slow. It takes several days running non-stop on our codebase to produce the static analysis databases. For big projects, you'll need to set aside a beefy machine to be a dedicated server. With big projects, there will be lots of bug information, so the clients tend to get bogged down, too.
In short: It all depends on how "mission critical" your code is; is it important, to you, to find that *one* line of code that could compromise your system? Or is your software project a bit more tolerant? (e.g., If you're writing nuclear reactor software, it's probably worthwhile to run these tools. If you're writing a video game, where you can frequently release patches to the customer, it's probably not worth your while.)
Re:Yes. (Score:4, Informative)
Jetbrains IntelliJ IDEA and Resharper (Score:3, Informative)
Of course, they provide a heck of a lot more than just static code analysis, but the ability to see all syntax errors in real time, and all logic errors (like potential null-references, dead code, unnecessary 'else' statements, etc, etc) saves an enormous amount of time, and has, in my experience, resulted in much better, more solid code. When you add on all the intelligent refactoring, vastly improved code navigation, and customizable code-generation features of these utilities, it's a no-brainer.
I wouldn't program without them.
Re:Not Yet, In My Personal Experience. (Score:4, Informative)
Re:Yes. (Score:4, Informative)
It is only one source for the entropy pool, and the SSL "fix" was a Debian maintainer running valgrind on OpenSSL, finding a piece of code where uninitialized memory was accessed, "fixing" it and a "similar piece", and accidentally removing all entropy from the pool. The result is that all ssh keys and ssl certs created on Debian in the last 20 months are to be considered broken. (Debian Wiki SSLkeys on the scope and what to do [debian.org])
Yes, absolutely (Score:5, Informative)
FindBugs is becoming increasingly widespread on Java projects, for example. I found that between it and JLint I could identify a substantial chunk of problems caused by inexperienced programmers, poor design, hastily written code, etc. JLint was particularly nice for potential deadlocks, while FindBugs was good for just about everything else.
For example:
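A hypothetical sketch of the classic pattern JLint flags as a potential deadlock, inconsistent lock ordering (the class is invented for illustration):

```java
public class LockOrderDeadlock {
    private final Object a = new Object();
    private final Object b = new Object();

    // One code path takes lock a, then lock b.
    void first() {
        synchronized (a) { synchronized (b) { /* work */ } }
    }

    // Another path takes b, then a. If two threads run first() and second()
    // concurrently, each can hold one lock while waiting for the other.
    // A static checker spots the inconsistent ordering without running anything.
    void second() {
        synchronized (b) { synchronized (a) { /* work */ } }
    }

    public static void main(String[] args) {
        // Single-threaded calls never deadlock, so testing rarely catches this.
        LockOrderDeadlock d = new LockOrderDeadlock();
        d.first();
        d.second();
        System.out.println("ok");
    }
}
```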
At least in the Java world, I wish more people would use them. It would make my job so much easier.
My experience in the Python world is that pylint is less interesting than FindBugs: many of the more interesting bugs are hard to detect in a dynamically typed language, so it ships with more "religious" style checks, which are easier to test for. It still provides a great deal of useful output once configured correctly, and can help enforce a consistent coding standard.
In short, YMMV (Score:5, Informative)
The thing is, these tools produce
A) a lot of "false positives": code which is really OK, and everyone understands why it's OK, but the tool will still complain, and
B) metrics of dubious quality at best, to be taken only as a signal for a human to look at the code and understand why it's OK or not OK.
E.g., one such tool, which I had the misfortune of sitting through a salesman's hype session for, seemed to be really little more than a glorified grep. It just looked at the source text, not at what's happening. So for example, if you got a database connection and a statement in a "try" block, it wanted to see the close statements in the "finally" block.
Well, applied to an actual project, there was a method which just closed the connection and the statements supplied as an array. Just because, you know, it's freaking stupid to copy-and-paste cute little "if (connection != null) { try { connection.close(); } catch (SQLException e) { /* ignore */ } }" blocks all over the codebase.
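The helper method in question presumably looked something like this (a sketch; the name and signature are invented):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcCleanup {
    // Closes the statements and then the connection in one place, swallowing
    // close-time errors, so callers don't repeat the try/catch boilerplate.
    // A text-matching tool never sees close() calls inside the caller's
    // finally block, so it flags every caller as a leak: a false positive.
    static void closeQuietly(Connection conn, Statement... stmts) {
        for (Statement s : stmts) {
            if (s != null) {
                try { s.close(); } catch (SQLException ignored) { }
            }
        }
        if (conn != null) {
            try { conn.close(); } catch (SQLException ignored) { }
        }
    }

    public static void main(String[] args) {
        closeQuietly(null);  // safe even with no resources at all
        System.out.println("ok");
    }
}
```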
Other examples include more mundane stuff like the tools recommending that you synchronize or un-synchronize a getter, even when everyone understands why it's OK for it to be as it is.
E.g., a _stateless_ class as a singleton is just an (arguably premature and unneeded) speed optimization, because some people think they're saving so much with a singleton instead of the couple of cycles it takes to do a new on a class with no members and no state. It doesn't really freaking matter if there's exactly one of it, or someone gets a copy of it. But invariably the tools will make an "OMG, unsynchronized singleton" fuss, because they don't look deep enough to see whether there's actually some state that must be unique.
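A sketch of the benign case being described (hypothetical class, invented names):

```java
public class StatelessHelper {
    // No fields, no state: if two threads race through get(), the worst case
    // is that two copies get created, and they behave identically. Tools that
    // only pattern-match "unsynchronized lazy singleton" flag this anyway.
    private static StatelessHelper instance;

    static StatelessHelper get() {
        if (instance == null) {          // benign race: no state to corrupt
            instance = new StatelessHelper();
        }
        return instance;
    }

    int add(int a, int b) { return a + b; }  // pure computation, no members

    public static void main(String[] args) {
        System.out.println(get().add(2, 3));  // prints 5
    }
}
```

Whether the race is truly harmless depends on the class really having no state; that is precisely the judgment call the tool skips.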
Etc.
Now, taken as something that each developer understands, runs on his own when he needs it, and uses his judgment on each point, it's a damn good thing anyway.
Enter the clueless PHB with a metric and chart fetish, stage left. This guy doesn't understand what those things are, but might make it his personal duty to chart some progress by showing how many fewer warnings he's got from the team this week than last week. So useless man-hours are spent morphing perfectly good code into something that games the tool. For each 1 real bug found, there'll be 100 harmless warnings that he makes it his personal mission to get out of the code.
Enter the snake-oil vendor's salesman, stage right. This guy only cares about selling some extra copies to justify his salary. He'll hype to the boss exactly the possibility to generate such charts (out of mostly false positives) and manage by such charts. If the boss wasn't already in a mind to do that management anti-pattern, the salesman will try to teach him to. 'Cause that's usually the only advantage that his expensive tool has over those open source tools that you mention.
I'm not kidding. I actually tried to corner one:
Me: "ok, but you said not everything it flags there is a bug, right?"
Him: "Yes, you need to actually look at them and see if they're bugs or not."
Me: "Then what sense does it make to generate charts based on wholesale counting entities which may, or may not be bugs?"
Him: "Well, you can use the charts to see, say, a trend that you have less of them over time, so the project is getting better."
Me: "But they may or may not be actual bugs. How do you know if this week's mix has more or fewer actual bugs than last week's, regardless of which ones are real?"
Coverity Prevent Rocks (Score:5, Informative)
* I really like Insure, but it is difficult to set up on a system composed of many shared libraries. However, there are some bugs that really need run-time analysis to catch.
Re:In Short, Yes (Score:3, Informative)
Many, but never all (Score:5, Informative)
Short version:
There are real bugs, with huge consequences, that can be detected with static analysis.
The tools are easy to find and worth the price, depending on the customer base you have.
In the end, they cannot detect "all" bugs that could arise in the code.
Worth it?
Only you can decide, but after a few sessions learning why the tools flag suspect code, if you take those suggestions to heart, you will be a better coder.
Linux kernel devs use sparse for static analysis (Score:5, Informative)
http://www.kernel.org/pub/software/devel/sparse/ [kernel.org]
Sparse has some features targeted at kernel development -- for instance, spotting mix-ups between kernel-space and user-space pointers, and a system of code annotations.
I haven't used it but I do see on the kernel mailing list that it regularly finds bugs.
Re:In Short, Yes (Score:5, Informative)
My group at work recently bought one of these. They catch a lot of things that compilers don't -- for example, code like this:
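A hypothetical Java sketch of the kind of defect meant, with invented names:

```java
public class UncheckedIndex {
    static final String[] LEVELS = { "low", "medium", "high" };

    // The kind of code a checker flags: the index comes straight from input
    // with no bounds check, so bad input throws at runtime (or, in C,
    // reads/writes arbitrary memory).
    static String levelUnchecked(int i) {
        return LEVELS[i];
    }

    // The repaired version validates before indexing.
    static String levelChecked(int i) {
        if (i < 0 || i >= LEVELS.length) return "unknown";
        return LEVELS[i];
    }

    public static void main(String[] args) {
        System.out.println(levelChecked(1));    // prints medium
        System.out.println(levelChecked(99));   // prints unknown
    }
}
```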
.. where invalid input causes arbitrarily bad behavior. They also tend to be better at inter-procedural analysis than compilers, so they can warn you that you're passing a short literal string to a function that will memcpy() from the region after that string. They do have a lot of false positives, but what escapes from compilers to be caught by static analysis tools tend to be dynamic behavior problems that are easy to overlook in testing. (If the problem were so obvious, the coder would have avoided it in the first place, right?)
Re:In Short, Yes (Score:5, Informative)
Re:In Short, Yes (Score:2, Informative)
But I guess dumb developer checkers have a place in the toolbox.
A double edged sword (Score:3, Informative)
In the more general sense, static analysis cannot find all bugs. There's a trivial proof: a program stuck in an infinite loop is a bug, but finding all such loops would solve the halting problem. Handling interrupts and the like also causes reasoning problems, as it's very hard, if not computationally intractable, to prove multi-threaded software is safe. So static analysis won't rid the embedded world of watchdog timers and other software failure recovery crap.
Re:who proved Astrée ...? (Score:3, Informative)
Re:In Short, Yes (Score:3, Informative)
Re:In short, YMMV (Score:5, Informative)
Going out of the way to satisfy a tool, whose only reason to exist is to flag 10 times more stuff than -Wall, is something I found actually counter-productive.
And I don't mean just as in, WOMBAT (Waste Of Money Brains And Time.) I mean as in: it teaches people to game the tool, actually hiding their real bugs. And it creates a false sense of security too.
I've actually had to deal with a program which tested superbly on most metrics of such a tool. But only because the programmer had learned to game it. The program was really an incoherent and buggy mess. But it gamed every single tool they had in use.
A. To start with the most obvious, some bright guy there had come up with his own CVS script which didn't let you check in unless you had commented every single method, every single parameter, and every exception thrown. 'Bout damn time, eh? Wrong.
1. This forced people to effectively overwrite the comments inherited from better-documented code. E.g., if you had a MyGizmoInterface interface, which was superbly documented, and the MyGizmoImpl class implementing it, it forced you to copy and paste the JavaDoc comments instead of just letting JavaDoc pick them up from the interface. So instead of seeing the real docs, everyone now had docs all over the place along the lines of "See MyGizmoInterface.gizmoMethod()", overwriting the actually useful ones. Or comments copied and pasted a year ago, where one of the two copies gradually became obsolete: people would update the comment in one place but let the other say something that wasn't even true any more. Instead of having them in one place and letting JavaDoc propagate them automatically.
2. The particular coder of this particular program had just used his own counter-script, or maybe a plugin, to automatically generate crap. I mean, _literally_: hundreds of methods had "Method description" as their javadoc comment, and thousands of parameters total were described as "method parameter."
B. It also included such... brain-dead metrics as measuring the cohesion of each class, by the ratio between number of class members to class methods.
He had learned to game that too. His code tested as superbly cohesive, although the same class and indeed the same method, could either send an email, or render a PDF, or update an XML in the database, depending on which parameters they got. But the members to methods ratio was grrrreat.
That's really my problem with it:
A. Somewhere along the way, they had become so confident in their tools that no one actually even checked what javadoc comments those classes had. Their script already checked that there were comments; hey, that's enough.
B. Somewhere along the way, everyone had gotten used to just gaming a stupid tool. If the tool said you had too many or too few class members, you'd just add or remove some to keep it happy. If it complained about complexity, because it considered a large switch statement to have too many effective ifs, you'd just split it into several functions: one testing cases 1 to 10, one testing 11 to 20, and so on. Which actually made the code _less_ readable, and generally lower quality. There would have been better ways to solve the problems, but, eh, all that mattered was to keep the tool happy, so no one bothered.
That's why I'd rather not turn it into a religion. Use the tool, yes, but take it as just something which you need to check and use your own judgment. Don't lose track of which is the end, and which is merely a means to that end.
Re:In Short, Yes (Score:2, Informative)
Code quality isn't something that's just tacked on at the end of the process. If the design process is done well, there is less chance of bugs creeping in - it's just natural that logical, well-designed code is going to be less buggy.
Re:In short, YMMV (Score:3, Informative)
This kind of thing though, is ultimately a failure of management, whoever leads/runs the dev team. They should be able to see this kind of thing happening and either apply some proper motivation, change the procedures, or let some bad devs go.
Mind you, bad developers as well. But if I were the owner, the dev mgr would get the brunt first on something like this.
Re:In Short, Yes (Score:3, Informative)
The analyzers wouldn't be very useful if they had to fork at every code branch. Even a program with a modest number of nested decisions would quickly become unexaminable, since the number of paths grows exponentially.
Re:who proved Astrée ...? (Score:3, Informative)
So for example, the program won't ever divide by zero or overflow an integer while summing.
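A sketch of the overflow case in Java (hypothetical; Astrée itself targets C, where the same silent wraparound occurs):

```java
public class OverflowSum {
    // Summing into an int silently wraps around past Integer.MAX_VALUE.
    // A sound analyzer proves the absence of this by bounding every
    // intermediate value of 'total' over all possible inputs.
    static int sumInt(int[] xs) {
        int total = 0;
        for (int x : xs) total += x;   // can wrap to a negative number
        return total;
    }

    // One fix: accumulate in a wider type.
    static long sumLong(int[] xs) {
        long total = 0;
        for (int x : xs) total += x;   // no wrap for any array of ints this size
        return total;
    }

    public static void main(String[] args) {
        int[] xs = { Integer.MAX_VALUE, 1 };
        System.out.println(sumInt(xs));   // prints -2147483648 (wrapped)
        System.out.println(sumLong(xs));  // prints 2147483648
    }
}
```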