Do Static Source Code Analysis Tools Really Work?
jlunavtgrad writes "I recently attended an embedded engineering conference and was surprised at how many vendors were selling tools to analyze source code and scan for bugs, without ever running the code. These static software analysis tools claim they can catch NULL pointer dereferences, buffer overflow vulnerabilities, race conditions and memory leaks. I've heard of Lint and its limitations, but it seems that this newer generation of tools could change the face of software development. Or could this be just another trend? Has anyone in the Slashdot community used similar tools on their code? What kind of changes did the tools bring about in your testing cycle? And most importantly, did the results justify the expense?"
In Short, Yes (Score:5, Informative)
Re: (Score:3, Interesting)
Re:In Short, Yes (Score:5, Informative)
My group at work recently bought one of these. They catch a lot of things that compilers don't -- for example, code like this:
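(The snippet that followed got eaten by the HTML filter -- see the reply about the less-than sign below -- so here is a purely hypothetical stand-in for the kind of code meant: an unchecked, attacker-controlled length. The function name and constant are mine, not the original poster's.)

    #include <string.h>

    #define MAX_PAYLOAD 64

    void handle_record(char dst[MAX_PAYLOAD], const char *src, int len)
    {
        /* BUG (intentional): the bound check is a signed comparison, so a
         * negative len slips through it; memcpy() then converts len to a
         * huge size_t and stomps far past dst. */
        if (len > MAX_PAYLOAD)
            return;
        memcpy(dst, src, len);
    }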
... where invalid input causes arbitrarily bad behavior. They also tend to be better at inter-procedural analysis than compilers, so they can warn you that you're passing a short literal string to a function that will memcpy() from the region after that string. They do have a lot of false positives, but what escapes the compiler and gets caught by static analysis tools tends to be a dynamic-behavior problem that is easy to overlook in testing. (If the problem were so obvious, the coder would have avoided it in the first place, right?)
Re:In Short, Yes (Score:5, Funny)
Then I realised it was just the HTML screwing up a less-than symbol. Then I felt a bit silly.
Then I just had to tell someone....
Re:In Short, Yes (Score:5, Insightful)
Would it not make sense to run this tool to catch these types of errors before wasting everyone's time in a code review?
By the time you get to code review and test, you should be catching logic errors, not stupid syntactical and poor-code-style ones. If the tool helps a developer clean up and catch the obvious stuff, then testing can be much more productive at catching the real problems.
Basically, if the tool helps reduce errors, then it is useful. The same goes for code complexity checkers. No tool will catch everything, but then again you shouldn't be depending on it to.
Re: (Score:3, Insightful)
By the time you get to code review and test, you should be catching logic errors, not stupid syntactical and poor code style ones. If the tool helps a developer clean up and catch the obvious stuff, then testing can be much more productive catching the real problems.
Sounds like a good way to teach developers about these stupid errors as well. As someone whose knowledge of programming is self-taught, I learned a long time ago to pay attention to all the errors, warnings and output from tools like lint to add to my understanding of the correct way to do things.
Re:In Short, Yes (Score:4, Insightful)
Re:In Short, Yes (Score:4, Informative)
Re: (Score:3, Informative)
Re:In Short, Yes (Score:5, Informative)
Re: (Score:3, Interesting)
The C/C++ version does MISRA-C [misra-c2.com] (the C used in the automotive industry) too.
There's also a version for Ada [wikipedia.org], of course.
I second valgrind (Score:5, Funny)
It's great for finding all those elusive bits of code that might be accidentally seeding a pseudo-random number generator somewhere.
In short, YMMV (Score:5, Informative)
The thing is, these tools produce
A) a lot of "false positives": code which is really OK, and everyone understands why it's OK, but the tool will still complain; and
B) usually some metrics of dubious quality at best, to be taken only as a signal for a human to look at the code and decide whether it's OK or not.
E.g., one such tool, whose salesman hype session I had the misfortune of sitting through, seemed to be really little more than a glorified grep. It just looked at the source text, not at what's actually happening. So, for example, if you got a database connection and a statement in a "try" block, it wanted to see the close statements in the "finally" block.
Well, applied to an actual project, there was a method which just closed the connection and the statements supplied as an array. Just because, you know, it's freaking stupid to copy-and-paste cute little "if (connection != null) { try { connection.close(); } catch (SQLException e) {
Other examples include more mundane stuff like the tools recommending that you synchronize or un-synchronize a getter, even when everyone understands why it's OK for it to be as it is.
E.g., a _stateless_ class as a singleton is just an (arguably premature and unneeded) speed optimization, because some people think they're saving so much with a singleton instead of the couple of cycles it takes to do a new on a class with no members and no state. It doesn't really freaking matter if there's exactly one of it, or if someone gets a copy of it. But invariably the tools will make an "OMG, unsynchronized singleton" fuss, because they don't look deep enough to see whether there's actually some state that must be unique.
Etc.
Now, taken as something that each developer understands, runs on his own when he needs it, and applies his own judgment to at each point, it's a damn good thing anyway.
Enter the clueless PHB with a metric and chart fetish, stage left. This guy doesn't understand what those things are, but might make it his personal duty to chart some progress by showing how many fewer warnings he's got from the team this week than last week. So useless man-hours are spent morphing perfectly good code into something that games the tool. For each 1 real bug found, there'll be 100 harmless warnings that he makes it his personal mission to get out of the code.
Enter the snake-oil vendor's salesman, stage right. This guy only cares about selling some extra copies to justify his salary. He'll hype to the boss exactly that possibility: generating such charts (out of mostly false positives) and managing by them. If the boss wasn't already of a mind to practice that management anti-pattern, the salesman will try to teach him to. 'Cause that's usually the only advantage that his expensive tool has over those open source tools that you mention.
I'm not kidding. I actually tried to corner one of them:
Me: "ok, but you said not everything it flags there is a bug, right?"
Him: "Yes, you need to actually look at them and see if they're bugs or not."
Me: "Then what sense does it make to generate charts based on wholesale counting entities which may, or may not be bugs?"
Him: "Well, you can use the charts to see, say, a trend that you have less of them over time, so the project is getting better."
Me: "But they may or may not be actual bugs. How do you know if this week's mix has more or less actual bugs than last weeks, regardless of wh
Re: (Score:2)
The problem is that
Re:In short, YMMV (Score:4, Interesting)
Re:In short, YMMV (Score:5, Insightful)
Re:In short, YMMV (Score:5, Informative)
Going out of the way to satisfy a tool whose only reason to exist is to flag 10 times more stuff than -Wall is something I found actually counter-productive.
And I don't mean just as in WOMBAT (Waste Of Money, Brains And Time). I mean as in: it teaches people to game the tool, actually hiding their real bugs. And it creates a false sense of security too.
I've actually had to deal with a program which tested superbly on most metrics of such a tool. But only because the programmer had learned to game it. The program was really an incoherent and buggy mess. But it gamed every single tool they had in use.
A. To start with the most obvious, some bright guy there had come up with his own CVS script which didn't let you check in unless you had commented every single method, and every single parameter and exception thrown. 'Bout damn time, eh? Wrong.
1. This forced people to effectively overwrite the comments inherited from better-documented stuff. E.g., if you had a MyGizmoInterface interface, which was superbly documented, and the MyGizmoImpl class implementing it, it forced you to copy and paste the JavaDoc comments instead of just letting JavaDoc pick them up from the interface. So instead of seeing the real docs, everyone now had docs all over the place along the lines of "See MyGizmoInterface.gizmoMethod()", overwriting the actually useful ones, or comments copied and pasted a year ago, where one of the two copies gradually became obsolete. People would update the comments in one of the two places but let the other say something that wasn't even true any more, instead of having them in one place and letting JavaDoc copy them automatically.
2. The particular coder of this particular program had just used his own counter-script, or maybe a plugin, to automatically generate filler. I mean, _literally_: hundreds of methods had "Method description" as their javadoc comment, and thousands of parameters total were described as "method parameter."
B. It also included such... brain-dead metrics as measuring the cohesion of each class by the ratio of class members to class methods.
He had learned to game that too. His code tested as superbly cohesive, although the same class, and indeed the same method, could either send an email, or render a PDF, or update an XML in the database, depending on which parameters they got. But the members-to-methods ratio was grrrreat.
That's really my problem with it:
A. Somewhere along the way, they had become so confident in their tools that no one actually even checked what javadoc comments those classes had. Their script already checks that there are comments; hey, that's enough.
B. Somewhere along the way, everyone had gotten used to just gaming a stupid tool. If the tool said you had too many or too few class members, you'd just add or remove some to keep it happy. If it complained about complexity, because it considered a large switch statement to have too many effective ifs, you just split it into several functions: one testing cases 1 to 10, one testing 11 to 20, and so on. Which actually made the code _less_ readable, and generally lower quality. There would have been better ways to solve the problems, but, eh, all that mattered was keeping the tool happy, so no one bothered.
That's why I'd rather not turn it into a religion. Use the tool, yes, but take it as just something which you need to check and use your own judgment. Don't lose track of which is the end, and which is merely a means to that end.
Re: (Score:3, Informative)
This kind of thing, though, is ultimately a failure of management, of whoever leads/runs the dev team. They should be able to see this kind of thing happening and either apply some proper motivation, change the procedures, or let some bad devs go.
Mind you, the developers are at fault as well. But if I were the owner, the dev manager would get the brunt of it first on something like this.
Another line to use. (Score:4, Insightful)
Him: "Yes, you need to actually look at them and see if they're bugs or not."
Me: "Then what sense does it make to generate charts based on wholesale counting
entities which may, or may not be bugs?"
Him: "Well, you can use the charts to see, say, a trend that you have less
of them over time, so the project is getting better."
Me: "But they may or may not be actual bugs. How do you know if this week's
mix has more or less actual bugs than last weeks, regardless of what the
total there is?"
Him: "Well, yes, you need to actually look at them in turn to see which are actual bugs."
Me: "But that's not what the tool counts. It counts a total which includes an
unknown, and likely majority, number of false positives."
Him: "Well, yes."
Me: "So what use is that kind of a chart then?"
Him: "Well, you can get a line or bar graph that shows how much progress
is made in removing them."
Your next line is:
Me: "So you're selling us a tool that generates a lot of false warnings, and a measurement of how much unnecessary extra work we've done to eliminate the false warnings. Wouldn't it make more sense not to use the tool in the first place and spend that time actually fixing real bugs?"
To work, this question must be asked with the near-hypnotized manager watching.
Re:Another line to use. (Score:4, Insightful)
You don't just run the tool over and over again and never adapt it to your code.
If it produces a bunch of false positives, then you go in and modify the rules so they don't generate those false positives.
That's half the point of something like this: you need to tune it to your project.
The flip side is that if you see some devs making the same kind of mistake over and over, you can write a new rule to flag that kind of thing.
If you have an endless number of false positives that doesn't ever go down, then you are either:
1. Not using the tool correctly.
or
2. Not working on a project that is amenable to this tool.
IME, the vast majority of the time it's #1. Now, you may find that for certain small or narrowly scoped projects, or those worked on by 2 super-gurus, the overhead of learning and tuning the tool for that project isn't worth it. But that's something you'd have to find out yourself, and it differs from project to project.
Re:In Short, Yes (Score:5, Interesting)
Re: (Score:3, Interesting)
In particular, I've found FindBugs has an amazing degree of precision considering it's an automated tool. If it comes up with a "red" error, it's almost certainly something that should be fixed.
Re:In Short, Yes (Score:5, Insightful)
Unless you become lazy (Score:3, Insightful)
Unless engineers begin to rely on them! If I stop thinking about dereferencing null pointers because my tool catches 90% of them, I haven't gained a thing.
Re:In Short, Yes (Score:5, Insightful)
The only real significance of the halting problem is to demonstrate that there can be some pretty absurd programs out there. It is not an indictment of static analyses. Nor is it an excuse to have less than total confidence in the correctness of your code.
Re: (Score:3, Informative)
Re:In Short, Yes (Score:4, Insightful)
That would be one of the absurd programs the GP was slamming. But a program where the break condition depends on, say, the user's input isn't amenable to static analysis and is perfectly reasonable and useful.
But you don't need to be perfect to be decent. A lot of static analysis can't tell what will happen, but can warn you if some code is unreachable, if no path will ever free memory, if a loop runs off the end of a memory allocation, etc.
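A minimal sketch (mine, not the parent's) of the kinds of warnings just listed -- a loop that runs off the end of an allocation, and memory that no path ever frees -- neither of which needs the program to run to be spotted:

    #include <stdlib.h>

    int sum_squares(size_t n)
    {
        int *table = malloc(n * sizeof *table);
        if (table == NULL)
            return -1;

        int sum = 0;
        for (size_t i = 0; i <= n; i++) {   /* BUG (intentional): "<=" touches
                                               one element past the block */
            table[i] = (int)(i * i);
            sum += table[i];
        }
        return sum;                         /* BUG (intentional): table is never
                                               freed on any path */
    }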
The Linux kernel uses a lot of static checking tools to pretty great effect (sparse, for one, is extremely helpful, and the Stanford checker found a lot of problems too).
Re: (Score:3, Funny)
Re: (Score:3, Informative)
The analyzers wouldn't be very useful if they had to fork at every conditional.
Yes. (Score:4, Insightful)
Re: (Score:2, Insightful)
Not that I actually read anything about the SSL "fix".
Re:Yes. (Score:5, Informative)
Re:Yes. (Score:4, Informative)
Re:Yes. (Score:4, Informative)
It is only one source for the entropy pool. The SSL "fix" was a Debian maintainer running valgrind on OpenSSL, finding a piece of code where uninitialized memory was accessed, "fixing" it and a "similar piece", and accidentally removing all entropy from the pool. The result is that all ssh keys and ssl certs created on Debian in the last 20 months are to be considered broken. (Debian Wiki SSLkeys on the scope and what to do [debian.org])
Re:Yes. (Score:5, Informative)
Re: (Score:3, Insightful)
The parent makes a good point that results are not always easy to understand or fix. Since the original post is about static vs. run-time analysis tools, it's good to understand that each kind has its own problems.
Just like compiler warnings... (Score:5, Insightful)
It has found some real bugs that are hard to generate a testcase for. It has also found a lot of things that aren't bugs, just like -Wall can. Since I work in the virtual memory manager, a lot more of our bugs can be found just by booting, compared to other domains, so we didn't get a lot of new bugs when we started using static analysis. But even one bug prevented can be worth multiple millions of dollars.
My experience is that, just like enabling compiler warnings, any way you have to find a bug before it gets to a customer is worth it.
Re:Just like compiler warnings... (Score:5, Informative)
OSS usage (Score:5, Insightful)
Coverity Reports Open Source Security Making Grea (Score:5, Informative)
http://it.slashdot.org/article.pl?sid=08/01/11/1818241 [slashdot.org]
- doug
Coverity Prevent Rocks (Score:5, Informative)
* I really like Insure, but it is difficult to set up on a system composed of many shared libraries. However, there are some bugs that really need run-time analysis to catch.
Re: (Score:3, Informative)
Static analysis tools (Score:4, Interesting)
I've also used Polyspace. In my opinion, it is expensive, slow, can't handle some constructs well and has a *horrible* signal to noise ratio. There is also no mechanism for silencing warnings in future runs of the tool (like the -e flag in lint). On the other hand, it has caught a (very) few issues that PC-Lint missed. Is it worth it? I suppose it depends on whether you are writing systems that can kill people if something goes wrong.
Potential issues are the biggest drawback (Score:2)
I've also used Polyspace. In my opinion, it is expensive, slow, can't handle some constructs well and has a *horrible* signal to noise ratio.
The signal-to-noise ratio is pretty horrendous in most static analysis tools for C and C++, IME. This is my biggest problem with them. If I have to go through and document literally thousands of cases where a perfectly legitimate and well-defined code construct should be allowed without a warning because the tool isn't quite sure, I rapidly lose any real benefit and everyone just starts ignoring the tool output. Things like Lint's -e option aren't much good as workarounds either, because then even if you'
signal to noise (Score:2)
Re:signal to noise (Score:4, Insightful)
That code is simply in poor taste, even if it works. What PC-Lint, and good taste, say you should do is change the code to "if( (x=y) != 0 ) {}". This will satisfy PC-Lint, and also makes your intention very clear to the next programmer who comes along. And, best of all, it doesn't generate a single byte of extra code, because you've only made explicit what the compiler was going to do anyway.
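A tiny illustration of the pattern under discussion (the grandparent's exact snippet isn't shown, so this is a generic reconstruction); gcc -Wall and PC-Lint both complain about the first form, and the second form silences them without changing the generated code:

    int drain_queue(int next)
    {
        int x;

        if (x = next) {             /* flagged: assignment used as a condition,
                                       almost always a mistyped "==" */
            /* ... */
        }

        if ((x = next) != 0) {      /* intent made explicit; no warning */
            /* ... */
        }
        return x;
    }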
Re: (Score:3, Funny)
Re: (Score:3)
You really can't beat it for the money, and it is probably as comprehensive as some of the other more expensive products for C and C++.
They do work (Score:5, Interesting)
Even lint is decent -- the trick is just using it in the first place. As for expense, if you have more than, oh, 3 developers, they pay for themselves by your first release. Besides, many good tools such as valgrind are free (valgrind isn't static, but it's still useful).
Yes (Score:4, Informative)
Re: (Score:3, Interesting)
Change bug source (Score:3, Funny)
Static analysis tools (Score:3, Interesting)
Yes, they work. (Score:5, Insightful)
Static analyzers will catch the stupid things -- edge cases that fail to initialize a variable and then lead straight to dereferencing it, memory leaks on edge-case code paths, etc. -- that shouldn't happen but often do, and that get in the way of finding the real bugs in your program logic.
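A small sketch (my own, not the poster's) of that first kind of stupid thing: an edge case that never initializes a variable and then dereferences it. The happy path always sets the pointer, so ordinary testing rarely trips over it, while a static analyzer reports the possibly-uninitialized use at once:

    #include <stddef.h>

    int largest(const int *values, size_t n)
    {
        const int *best;                /* only assigned inside the loop */

        for (size_t i = 0; i < n; i++) {
            if (i == 0 || values[i] > *best)
                best = &values[i];
        }
        return *best;                   /* BUG (intentional): uninitialized
                                           dereference when n == 0 */
    }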
Of course they can work (Score:5, Interesting)
It would probably be more useful if you could state which kind of problem you are trying to solve and which tools you are considering buying. That way, people who have experience with them could suggest which ones work best.
Re: (Score:2)
I also see how it could bring a distribution to its knees. But I agree that they will probably be worthwhile 90% of the time.
Re: (Score:3, Insightful)
Testing cycle (Score:5, Informative)
Since we've had the tool for a while and have fixed most of the bugs it has found, we are required to run static analysis on new code for the latest release now (i.e. we should not be dropping any new code that has any error in it found via static analysis).
Just like code reviews, unit testing, etc., it has proved useful and was added to the software development process.
Yes! Uh, sorta. (Score:4, Funny)
However, static code analysis is just one part of the bug-finding process. For example, from your list, in my limited experience, buffer overflows and NULL pointer derefs get spotted really well. Race conditions? Memory leaks? Hmm. Not so good.
YMMV. Don't expect magic. Oh, to hell with it, just let the end-users test it *ow!*
Yes. (Score:4, Informative)
Re:who proved Astrée ...? (Score:3, Insightful)
Is this a proof, or do some people mistakenly think they're safe?
Who "proved" Astree to be error free in the first place?!
Re:who proved Astrée ...? (Score:3, Informative)
Who "proved" Astree to be error free in the first place?!
The creators of Astrée, presumably. Proving in the scientific sense that a piece of software is correct can definitely be done; it's just really expensive most of the time. In any case, they claim that Astrée is sound, i.e. it catches all errors, but that the precision can be adjusted to reduce or increase the number of false positives, depending on how much time you have. The A380 fly-by-wire analysis was apparently the first case where no false positives were reported (and no true positives either, of course).
Re: (Score:3, Informative)
So for example, the program won't ever divide by zero or overflow an integer while summing.
Yes (Score:5, Informative)
Add me to the Yes column
We use them (PMD and FindBugs) for eliminating code that is perfectly valid, yet has bitten us in the past. Two Java examples are unsynchronized access to a static DateFormat object and using the commons IOUtils.copy() instead of IOUtils.copyLarge().
Most tools are easy to add to your build cycle and repay that effort after the first violation.
MIT Site (Score:4, Interesting)
Very useful in .Net (Score:3, Interesting)
FxCop too has gone server-side too (for those familiar with
Re: (Score:3, Insightful)
Once you develop with Resharper, you really can't go back to using VS without it... it's
Yes, But... (Score:2)
The more things change... (Score:4, Insightful)
Yes, static code analysis can help improve a team's ability to deliver a high-quality product, if it is embraced by management and its use is enforced. No, it will not change the face of software development, nor will it turn crappy code into good code or lame programmers into geniuses. At best, when engineers and management agree this is a useful tool, it can do almost all the grunt work of code cleanup by showing exactly where problem code is and suggesting extremely localized fixes. At worst, it will wind up being a half-assed code formatter since nobody can agree on whether the effort is necessary.
Just like all good software-engineering questions, the answer is 'it depends'.
Not Yet, In My Personal Experience. (Score:4, Interesting)
Re:Not Yet, In My Personal Experience. (Score:4, Informative)
Re: (Score:2)
Re: (Score:2)
Useful for planning tests (Score:5, Interesting)
I've since moved on, and I think the tool has since gone offline, but I think there's a real value to doing static analysis as part of the planning for everything else.
Coverity & Klocwork (Score:5, Informative)
My comments would be:
(1) Klockwork & Coverity tend to produce a lot of "false positives". And by a lot, I mean, *A LOT*. For every 10000 "critical" bugs reported by the tool, only a handful may be really worth investigating. So you may spend a fair bit of time simply weeding through what is useful and what isn't.
(2) They're expensive. Coverity costs $50k for every 500k lines of code per year... and we have a LOT more code than this. For the price, we could hire a couple of guys to run all of our tools through Purify *and* fix the bugs they found. Klocwork is cheaper: $4k per seat, with a minimum number of seats.
(3) They're slow. It takes several days running non-stop on our codebase to produce the static analysis databases. For big projects, you'll need to set aside a beefy machine to be a dedicated server. With big projects, there will be lots of bug information, so the clients tend to get bogged down, too.
In short: It all depends on how "mission critical" your code is; is it important, to you, to find that *one* line of code that could compromise your system? Or is your software project a bit more tolerant? (e.g., If you're writing nuclear reactor software, it's probably worthwhile to you to run this code. If you're writing a video game, where you can frequently release patches to the customer, it's probably not worth your while.)
Re: (Score:3, Interesting)
I did some work running Coverity for EnterpriseDB, against the PostgreSQL code base (and yes, we submitted all patches back, all of which were committed).
Based on my experience:
1) Yes, Coverity produced a LOT of false positives - a few tens of thousands for the 20-odd true critical bugs we found. However, the first step in working with Coverity is configuring it to know what can be safely ignored. After about 2 days of customizing the configuration (including points where I could configure it to under
Trends or Crutches? (Score:4, Interesting)
For instance, we put men on the moon with a pencil and a slide rule. Now no one would dream of taking a high school math class with anything less than a TI-83+.
Languages like Java and C# are being hailed while languages like C are derided and many posts here on slashdot call it outmoded and say it should be done away with, yet Java and C# are built using C.
It seems to me that there is no substitute for actually knowing how things work at the most basic level and doing them by hand. Can a tool like Lint help? Yes. Will it catch everything? Likely not.
As generations of kids grow up with the automation made by the generations who came before, they have less incentive to learn how the basic tools work (an incentive which will keep diminishing, approaching zero), and I think we're in for something bad.
As much as people bitch about kids who were spoiled by BASIC, you'd think they'd also complain about all the other spoilers. Someday all this new, fancy stuff could break, and someone who only knows Java, and even then checks all their source with automated tools, will likely not be able to fix it.
Of course, this is more of just a general criticism and something I've been thinking about for a few weeks now. Anyway, carry on.
Re:Trends or Crutches? (Score:4, Insightful)
Re: (Score:3, Interesting)
I whipped out my trusty slide rule and commenced to using it. The teacher wanted to confiscate it and thought that I was cheating with some sort of high-tech device... mind you it was just plastic and cardboard. I'm sure you've all seen one before.
I'm on
Re: (Score:2, Insightful)
Look, people make mistakes, and regardless of how good a programmer you are, there is a limit to the amount of state you can hold in your head, and you WILL dereference a NULL pointer, or create a reference loop, at some point in your career.
Using a computer to catch these errors is just another
Re:Trends or Crutches? (Score:5, Insightful)
I also don't think new languages help bad programmers much. Bad code is still bad code; now, instead of crashing, it will just leak memory or not work right.
On a software project I worked on before, our competition spent two years and two million dollars, did their code in Visual Basic and MSSQL, and abandoned their effort when, no matter what hardware they threw at it, they couldn't get their software to handle more than 400 concurrent users. We did our project in C, and with a team of 4 built something in about a year that handled 1200 users on a quad-CPU P III 400MHz Compaq. Even when another competitor posed as a client and borrowed some of my ideas (they added a comms layer instead of using the SQL server for communication), they still required a whole rack of machines to do what we did with one badly out-of-date test machine.
C is a fine tool if you know how to use it so I doubt it will go away any time soon.
Re:Man on the moon - pencil, slide rule and comput (Score:2)
It's like when Wheeler died and he was called "one of the last great titans of physics." One fellow slashdotter called this an unfair characterization as it is unfairly biased against the people who are doing work today, which he sees as no less important, comparing it to if s
To a degree, yes (Score:5, Interesting)
However, these things do work and are highly recommended. If you use other advanced techniques (like Design by Contract), they will be a lot less useful, though. They are best for traditional code that does not have safety-nets (i.e. most code).
Stay away from tools that do this without using your compiler. I recently evaluated some static analysis tools and found that the ones that do not use the native compiler can have serious problems. One example was an incorrectly set symbol in the internal compiler of one tool, which could easily change the code's functionality drastically. Use tools that work from your build environment and utilize the compiler you are actually using to build.
Jetbrains IntelliJ IDEA and Resharper (Score:3, Informative)
Of course, they provide a heck of a lot more than just static code analysis, but the ability to see all syntax errors in real time, and all logic errors (like potential null-references, dead code, unnecessary 'else' statements, etc, etc) saves way too much time, and has, in my experience, resulted in much better, more solid code. When you add on all the intelligent refactoring, vastly improved code navigation, and customizable code-generation features of these utilities, it's a no-brainer.
I wouldn't program without them.
Yes, absolutely (Score:5, Informative)
FindBugs is becoming increasingly widespread on Java projects, for example. I found that between it and JLint I could identify a substantial chunk of problems caused by inexperienced programmers, poor design, hastily written code, etc. JLint was particularly nice for potential deadlocks, while FindBugs was good for just about everything else.
For example:
At least in the Java world, I wish more people would use them. It would make my job so much easier.
My experience in the Python world is that pylint is less interesting than FindBugs: many of the more interesting bugs are hard problems in a dynamically typed language, so pylint has more "religious" style issues built in, since those are easier to test for. It still provides a great deal of useful output once configured correctly, and can help enforce a consistent coding standard.
Low startup cost and great benefits (Score:5, Insightful)
Could this be just another trend?
I don't worry about what's "trendy" or not. Just give the tool a shot in your group and see if it helps/works for you or not. If it does, keep using it; otherwise abandon it.
What kind of changes did the tools bring about in your testing cycle?
We use it _before_ the test cycle, to catch mistakes such as "Whoops! Dereferenced a pointer there, my bad" before they ever get that far.
And most importantly, did the results justify the expense?
Absolutely. The startup cost of adding static analysis for us was one developer for 1/2 a day to setup FindBugs to work on our CI build on a nightly basis to give us HTML reports. After that, the cost is our team lead to check the reports in the morning (he's an early riser) and create bug reports based on them to send to us. Some days there's no reports, other days (after a large check-in) it might be 5-10 and about an hour of his time.
It's best to view this tool as preventing bugs, synchronization issues, performance issues, you-name-it issues before the code goes into the hands of testers. But you can also extend several of the tools, like FindBugs, to add new static analysis test cases. So if a tester finds a common problem that affects the code, you can go back, write a static analysis case for it, add it to the tool, and the problem shouldn't reach the testers again.
Buyer (User) Beware (Score:3, Interesting)
Count also the forming of good programming habits (Score:3, Insightful)
Many, but never all (Score:5, Informative)
Short version:
There are real bugs, with huge consequences, that can be detected with static analysis.
The tools are easy to find and worth the price, depending on the customer base you have.
In the end, they cannot detect "all" bugs that could arise in the code.
Worth it?
Only you can decide, but after a few sessions learning why the tools flag suspect code, if you take those suggestions to heart, you will be a better coder.
Absolutely (Score:2, Interesting)
Linux kernel devs use sparse for static analysis (Score:5, Informative)
http://www.kernel.org/pub/software/devel/sparse/ [kernel.org]
Sparse has some features targeted at kernel development -- for instance, spotting mix-ups between kernel-space and user-space pointers -- and a system of code annotations.
I haven't used it but I do see on the kernel mailing list that it regularly finds bugs.
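For a flavour of those annotations, here is a simplified, standalone sketch (the real definitions live in the kernel's headers, and the address-space number here is only illustrative). Running sparse over a file like this flags any direct dereference of a __user pointer:

    /* Effective only when sparse checks the file; plain gcc ignores it. */
    #ifdef __CHECKER__
    # define __user __attribute__((noderef, address_space(1)))
    #else
    # define __user
    #endif

    struct request {
        int id;
    };

    int handle_ioctl(struct request __user *req)
    {
        return req->id;     /* sparse warns: dereference of noderef expression;
                               the data should come in via copy_from_user() */
    }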
Re: (Score:3, Funny)
WHOA... nice timing (Score:2, Interesting)
make up for language deficiencies (Score:4, Interesting)
I've never gotten anything useful out of these tools. Generally, encapsulating unsafe operations, assertions, unit testing, and using valgrind, seem both necessary and sufficient for reliably eliminating bugs in C++. And whenever I can, I simply use better languages.
Watch the differences! (Score:4, Interesting)
Something that we've found incredibly useful here and in past workplaces was to watch the _differences_ between Gimpel PC-Lint runs, rather than just the whole output.
The output for one of our projects, even with custom error suppression and a large number of "fixups" for lint, borders on 120MiB of text. But you can quickly reduce this to a "status report" consisting of statistics about the number of errors -- and with a line-number-aware diff tool, report just any new stuff of interest. It's easy to flag common categories of problems for your engine to raise these to the top of the notification e-mails.
Keeping all this data around (it's text, it compresses really well) allows you to mine it in the future. We've had several cases where Lint caught wind of something early on, but it was lost in the noise or a rush to get a milestone out -- when we find and fix it, we're able to quickly audit old lint reports both for when it was introduced and also if there are indicators that it's happening in other places.
And you can do some fun things like do analysis of types of warnings generated by author, etc -- play games with yourself to lower your lint "score" over time...
The big thing is keeping a bit of time for maintenance (not more than an hour a week, at this point) so that the signal/noise ratio of the diffs and stats reports that are mailed out stays high. Talking to your developers about what they like / don't like and tailoring the reports over time helps a lot -- and it's an opportunity to get some surreptitious programming language education done, too.
To summarize... (Score:3, Insightful)
Static analysis is part of the basics (Score:3, Insightful)
A double edged sword (Score:3, Informative)
In the more general sense, static analysis cannot find all bugs. There's a trivial proof: a program stuck in an infinite loop is a bug, but finding all such loops would solve the halting problem. Handling interrupts and the like also causes reasoning problems, as it's very hard, if not computationally intractable, to prove multi-threaded software is safe. So static analysis won't rid the embedded world of watchdog timers and other software failure recovery crap.
they are useful (Score:3, Insightful)
Halting problem bullshit (Score:4, Interesting)
Several posters have cited the "halting problem" as an issue. It's not.
First, the halting problem does not apply to deterministic systems with finite memory. In a deterministic system with finite memory, eventually you must repeat a state, or halt. So that disposes of the theoretical objection.
In practice, deciding halting isn't that hard. The general idea is that you have to find some "measure" of each loop which is an integer, gets smaller with each loop iteration, and never goes negative. If you can come up with a measure expression for which all those properties are true, you have proved termination. If you can't, the program is probably broken anyway. Yes, it's possible to write loops for which proof of termination is very hard. Few such programs are useful. I've actually encountered only one in a long career, the termination condition for the GJK algorithm for collision detection of convex polyhedra. That took months of work and consulting with a professor at Oxford.
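A small worked example of that measure argument (mine, not the poster's): in Euclid's algorithm the variable b is a non-negative integer that strictly shrinks on every pass, so the loop must terminate:

    unsigned gcd(unsigned a, unsigned b)
    {
        while (b != 0) {        /* measure: b -- never negative */
            unsigned r = a % b; /* 0 <= r < b */
            a = b;
            b = r;              /* the measure strictly decreases each iteration */
        }
        return a;
    }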
The real problem with program verification is the C programming language. In C, the compiler has no clue what's going on with arrays, because of the "pointer=array" mistake. You can't even talk about the size of a non-fixed array in the language. This is the cause of most of the buffer overflows in the world. Every day, millions of computers crash and millions are penetrated by hostile code from this single bad design decision.
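A minimal sketch of that point: once an array is passed to a function it is just a pointer, so neither the compiler nor a C checker can see the caller's real bounds from the callee's side (the names here are mine):

    #include <stdio.h>

    void use_buffer(char buf[128])      /* the "[128]" is decoration: the
                                           parameter's type is really char * */
    {
        printf("%zu\n", sizeof buf);    /* prints sizeof(char *), not 128 */
        buf[200] = 0;                   /* overflow that is invisible here */
    }

    int main(void)
    {
        char real[16];
        use_buffer(real);               /* nothing in the type system objects */
        return 0;
    }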
That's why I got out of program verification when C replaced Pascal. I used to do [acm.org] this stuff. [animats.com]
Good program verification systems have been written for Modula 3, Java, C#, and Verilog. For C, though, there just isn't enough information in the source to do it right. Commercial tools exist, but they all have holes in them.