Do Static Source Code Analysis Tools Really Work? 345
jlunavtgrad writes "I recently attended an embedded engineering conference and was surprised at how many vendors were selling tools to analyze source code and scan for bugs, without ever running the code. These static software analysis tools claim they can catch NULL pointer dereferences, buffer overflow vulnerabilities, race conditions and memory leaks. I've heard of Lint and its limitations, but it seems that this newer generation of tools could change the face of software development. Or, could this be just another trend? Has anyone in the Slashdot community used similar tools on their code? What kind of changes did the tools bring about in your testing cycle? And most importantly, did the results justify the expense?"
Yes. (Score:4, Insightful)
Just like compiler warnings... (Score:5, Insightful)
It has found some real bugs that are hard to generate a testcase for. It has also found a lot of things that aren't bugs, just like -Wall can. Since I work in the virtual memory manager, a lot more of our bugs can be found just by booting, compared to other domains, so we didn't get a lot of new bugs when we started using static analysis. But even one bug prevented can be worth multiple millions of dollars.
My experience is that, just like enabling compiler warnings, any way you have to find a bug before it gets to a customer is worth it.
OSS usage (Score:5, Insightful)
Re:Yes. (Score:2, Insightful)
Not that I actually read anything about the SSL "fix".
Re:In Short, Yes (Score:1, Insightful)
Yes, they work. (Score:5, Insightful)
Static analyzers will catch the stupid things: edge cases that fail to initialize a variable and then lead straight to dereferencing it, memory leaks on edge-case code paths, and so on. These are things that shouldn't happen but often do, and they get in the way of finding the real bugs in your program logic.
The more things change... (Score:4, Insightful)
Yes, static code analysis can help improve a team's ability to deliver a high-quality product, if it is embraced by management and its use is enforced. No, it will not change the face of software development, nor will it turn crappy code into good code or lame programmers into geniuses. At best, when engineers and management agree this is a useful tool, it can do almost all the grunt work of code cleanup by showing exactly where problem code is and suggesting extremely localized fixes. At worst, it will wind up being a half-assed code formatter since nobody can agree on whether the effort is necessary.
Just like all good software-engineering questions, the answer is 'it depends'.
Re:Yes. (Score:3, Insightful)
While the parent makes a good point that results are not always easy to understand or fix, since the original post is about static vs. run-time analysis tools, it's worth understanding that each has its own problems.
Re:In Short, Yes (Score:5, Insightful)
Low startup cost and great benefits (Score:5, Insightful)
Could this be just another trend?
I don't worry about what's "trendy" or not. Just give the tool a shot in your group and see if it helps/works for you or not. If it does, keep using it; otherwise, abandon it.
What kind of changes did the tools bring about in your testing cycle?
We use it _before_ the test cycle. We use it to catch mistakes such as "Whoops! Dereferenced a pointer there, my bad" before going into the test cycle.
And most importantly, did the results justify the expense?
Absolutely. The startup cost of adding static analysis for us was one developer for 1/2 a day to setup FindBugs to work on our CI build on a nightly basis to give us HTML reports. After that, the cost is our team lead to check the reports in the morning (he's an early riser) and create bug reports based on them to send to us. Some days there's no reports, other days (after a large check-in) it might be 5-10 and about an hour of his time.
It's best to view this tool as preventing bugs, synchronization issues, performance issues, you-name-it issues before the code goes into the hands of testers. But you can extend several of the tools, like FindBugs, to add new static analysis test cases. So if a tester finds a common problem that affects the code, you can go back and write a static analysis case for it, add it to the tool, and the problem shouldn't reach the tester again.
Re:Of course they can work (Score:3, Insightful)
Count also the forming of good programming habits (Score:3, Insightful)
Re:Trends or Crutches? (Score:4, Insightful)
Re:Trends or Crutches? (Score:2, Insightful)
Look, people make mistakes, and regardless of how good a programmer you are, there is a limit to the amount of state you can hold in your head, and you WILL dereference a NULL pointer, or create a reference loop, at some point in your career.
Using a computer to catch these errors is just another flavor of metaprogramming. Get over it, and go be more productive with these tools, instead of whining for the days when you coded on bare metal with your bare hands and you liked it.
Arrgh.
Re:Trends or Crutches? (Score:5, Insightful)
I also don't think new languages help bad programmers much. Bad code is still bad code, so now instead of crashing it will just leak memory or just not work right.
On a software project I worked on before, our competition spent two years and two million dollars doing their code in Visual Basic and MSSQL, and they abandoned their effort when, no matter what hardware they threw at it, they couldn't get their software to handle more than 400 concurrent users. We did our project in C, and with a team of 4 built something in about a year that handled 1200 users on a quad-CPU PIII 400MHz Compaq. Even when another competitor posed as a client and borrowed some of my ideas (they added a comms layer instead of using the SQL server for communication), they still required a whole rack of machines to do what we did with one badly out-of-date test machine.
C is a fine tool if you know how to use it so I doubt it will go away any time soon.
To summarize... (Score:3, Insightful)
Re:who proved Astrée ...? (Score:3, Insightful)
Is this a proof, or do some mistakenly think they're safe?
Who "proved" Astrée to be error-free in the first place?!
Static analysis is part of the basics (Score:3, Insightful)
Re:signal to noise (Score:4, Insightful)
That code is simply in poor taste, even if it works. What PC-Lint, and good taste, say you should do is change the code to "if( (x=y) != 0 ) {}". This will satisfy PC-Lint, and also makes your intention very clear to the next programmer who comes along. And, best of all, it doesn't generate a single byte of extra code, because you've only made explicit what the compiler was going to do anyway.
Re:Change bug source (Score:2, Insightful)
Of course the best coders still make mistakes, but lousy coders make a lot of them; believe me, "I've seen things you people wouldn't believe." Every coder makes mistakes, but some coders (or so-called coders who in fact just took a "Learn XXX in 21 Days" course) make lots of them.
Re:In short, YMMV (Score:5, Insightful)
Another line to use. (Score:4, Insightful)
Him: "Yes, you need to actually look at them and see if they're bugs or not."
Me: "Then what sense does it make to generate charts based on wholesale counting of entities which may, or may not, be bugs?"
Him: "Well, you can use the charts to see, say, a trend that you have fewer of them over time, so the project is getting better."
Me: "But they may or may not be actual bugs. How do you know if this week's mix has more or fewer actual bugs than last week's, regardless of what the total is?"
Him: "Well, yes, you need to actually look at them in turn to see which are actual bugs."
Me: "But that's not what the tool counts. It counts a total which includes an unknown, and likely majority, number of false positives."
Him: "Well, yes."
Me: "So what use is that kind of a chart then?"
Him: "Well, you can get a line or bar graph that shows how much progress is made in removing them."
Your next line is:
Me: "So you're selling us a tool that generates a lot of false warnings, and a measurement of how much unnecessary extra work we've done to eliminate the false warnings. Wouldn't it make more sense not to use the tool in the first place and spend that time actually fixing real bugs?"
To work, this question must be asked with the near-hypnotized manager watching.
Re:In short, YMMV (Score:2, Insightful)
Re:Very useful in .Net (Score:3, Insightful)
Once you develop with Resharper, you really can't go back to using VS without it... it's like coding with stone knives and bear skins.
false positives vs. false negatives (Score:2, Insightful)
Re:In Short, Yes (Score:5, Insightful)
The only real significance of the halting problem is to demonstrate that there can be some pretty absurd programs out there. It is not an indictment of static analyses. Nor is it an excuse to have less than total confidence in the correctness of your code.
Re:In Short, Yes (Score:5, Insightful)
Would it not make sense to run this tool to catch these types of errors before wasting everyone's time in a code review?
By the time you get to code review and test, you should be catching logic errors, not stupid syntactical and poor code style ones. If the tool helps a developer clean up and catch the obvious stuff, then testing can be much more productive catching the real problems.
Basically, if the tool helps reduce errors, then it is useful. The same comment goes for code complexity checkers. No tool will catch everything, but then again you shouldn't be depending on one to.
they are useful (Score:3, Insightful)
Re:In Short, Yes (Score:4, Insightful)
Re:In Short, Yes (Score:4, Insightful)
That would be one of the absurd programs the GP was slamming. But a program where the break condition depends on, say, the user's input isn't amenable to static analysis and is perfectly reasonable and useful.
But you don't need to be perfect to be decent. A lot of static analysis can't tell what will happen, but can warn you if some code is unreachable, if no path will ever free memory, if a loop runs off the end of a memory allocation, etc.
The Linux kernel uses a lot of static checking tools to pretty great effect (sparse, for one, is extremely helpful, and the Stanford checker found a lot of problems too).
Re:Another line to use. (Score:4, Insightful)
You don't just run the tool over and over again and never adapt it to your code.
If it produces a bunch of false positives, then you go in and modify the rules to not generate those false positives.
That's half the point of something like this: you need to tune it to your project.
The flip side is that if you see some devs over and over making the same kind of mistake, well, you can write a new rule to flag that kind of thing.
If you have an endless number of false positives that doesn't ever go down, then you are either:
1. Not using the tool correctly.
or
2. Not working on a project that is amenable to this tool.
IME, the vast majority of the time it's #1. Now you may find that for certain small or narrowly scoped projects, or those worked on by 2 super-gurus, the overhead of learning and tuning the tool for that project isn't worth it. But that's something you'd have to find out yourself, and it differs from project to project.
Unless you become lazy (Score:3, Insightful)
Unless engineers begin to rely on them! If I stop thinking about dereferencing null pointers because my tool catches 90% of them, I haven't gained a thing.
Re:Another line to use. (Score:2, Insightful)
Maybe that would be enough to convince him that more warnings from the tool do not necessarily mean he'll keep his job.