How Your Compiler Can Compromise Application Security
jfruh writes "Most day-to-day programmers have only a general idea of how compilers transform human-readable code into the machine language that actually powers computers. In an attempt to streamline applications, many compilers remove code that they perceive to be undefined or unstable, and, as a research group at MIT has found, in doing so they can make applications less secure. The good news is that the researchers have developed a model and a static checker for identifying unstable code. Their checker, called STACK, currently works on C/C++ code. The idea is that it will warn programmers about unstable code in their applications so they can fix it, rather than have the compiler simply leave it out. The researchers also hope it will encourage compiler writers to rethink how they can optimize code in more secure ways. STACK was run against a number of systems written in C/C++ and found 160 new bugs in the systems tested, including the Linux kernel (32 bugs found), Mozilla (3), Postgres (9) and Python (5). They also found that, of the 8,575 packages in the Debian Wheezy archive that contain C/C++ code, STACK detected at least one instance of unstable code in 3,471 of them, which, as the researchers write (PDF), 'suggests that unstable code is a widespread problem.'"
Inflammatory Subject (Score:5, Informative)
Since C/C++ is fairly liberal about allowing undefined behavior
No, it's not. A program that invokes undefined behavior is no longer a correct C or C++ program; the standard places no requirements whatsoever on what such a program does.
Re:TFA does a poor job of defining what's happenin (Score:5, Informative)
An example of "unstable code":
char *a = malloc(sizeof(char));
*a = 5;
char *b = realloc(a, sizeof(char));  /* a is invalid from this point on */
*b = 2;
if (a == b && *a != *b)              /* both uses of a here are undefined */
{
    launchMissiles();
}
A cursory glance at this code suggests missiles will not be launched. With gcc, that's probably true at the moment. With clang, as I understand it, it's not: missiles will be launched. The reason is that the spec says the first argument of realloc becomes invalid after the call, so any subsequent use of that pointer has undefined behaviour. Clang takes advantage of this and assumes that *a cannot change after that point, so it optimises if (a == b && *a != *b) into if (a == b && 5 != *b). That condition then passes, and the missiles get launched.
The truth here is that your compiler is not compromising application security; the code that relies on undefined behaviour is.
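For contrast, here is a version of the same snippet with the undefined behaviour removed (a sketch; the function name demo is mine, not from the post). Once realloc succeeds, only the pointer it returns is used:

```c
#include <stdlib.h>

/* Defined version of the snippet above: after a successful realloc,
 * only the returned pointer b is touched; the old pointer a is never
 * used again. Returns the final stored value, or -1 on failure. */
int demo(void) {
    char *a = malloc(sizeof(char));
    if (a == NULL)
        return -1;
    *a = 5;

    char *b = realloc(a, sizeof(char));
    if (b == NULL) {
        free(a);            /* realloc failed, so a is still valid */
        return -1;
    }
    *b = 2;                 /* a is now indeterminate; use only b */
    int result = *b;
    free(b);
    return result;
}
```

The key design point is that the old pointer is only touched on the failure path, where realloc guarantees it remains valid.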
Re:News flash (Score:5, Informative)
I would also like to understand what's the definition of "unstable code".
Re:TFA does a poor job of defining what's happenin (Score:5, Informative)
What is "unstable code" and how can a compiler leave it out?
The article is actually using that as an abbreviation for what they're calling "optimization-unstable code": code that is included at some compiler optimization levels but discarded at higher ones. They call it unstable because whether it survives compilation depends on the optimization level, not because the code itself necessarily behaves randomly.
PC Lint anyone? (Score:4, Informative)
Back in the day when I was doing C++ work, I used a product called PC Lint (http://www.gimpel.com/html/pcl.htm) that did basically the same thing STACK does: static analysis of code to find errors such as dereferencing NULL pointers, buffer overflows, etc. Maybe they should teach history at MIT first...
Yes compilers really do this (Score:3, Informative)
Yes, it leads to real bugs: Brad Spengler uncovered one of these issues in the Linux kernel in 2009 [lwn.net], and it led to the kernel being built with the -fno-delete-null-pointer-checks gcc flag to disable the spec-correct "optimisation".
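The pattern behind that bug looks roughly like this (a sketch, not the actual tun driver source; the struct and names are made up):

```c
#include <stddef.h>

struct device { int state; };

/* Sketch of the 2009 kernel bug pattern: the pointer is dereferenced
 * before it is checked for NULL. Because dereferencing NULL is
 * undefined behaviour, the compiler may infer that dev cannot be NULL
 * here and delete the check below. Disabling exactly this inference is
 * what -fno-delete-null-pointer-checks does. */
int poll_device(struct device *dev) {
    int state = dev->state;   /* dereference happens first */
    if (dev == NULL)          /* this check may be optimized away */
        return -1;
    return state;
}
```

The fix in the kernel was to perform the NULL check before any dereference, so the compiler has nothing to infer from.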
Re:TFA does a poor job of defining what's happenin (Score:5, Informative)
Another, more common example of code optimizations causing security problems is this pattern:
int a = [some value obtained externally];
int b = a + 2;
if (b < a) {
    // integer overflow occurred
    ...
}
The C spec says that signed integer overflow is undefined. If the compiler does no optimization, this works. However, the compiler is entitled to conclude that two more than any number is always larger than that number, and to optimize out the entire "if" statement and everything inside it.
For proper safety, you must write this as:
int a = [some value obtained externally];
if (a > INT_MAX - 2) {   // INT_MAX - 2 is a constant and cannot itself overflow
    // integer overflow would occur
    ...
}
int b = a + 2;
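Packaged as a compilable sketch (the function name add2_checked is mine, not from the post), the pre-check looks like this:

```c
#include <limits.h>

/* Returns 1 and stores a + 2 in *out if the addition cannot overflow;
 * returns 0 otherwise. The comparison uses only the constant
 * INT_MAX - 2, so the check itself never overflows for any value of a. */
int add2_checked(int a, int *out) {
    if (a > INT_MAX - 2)
        return 0;
    *out = a + 2;
    return 1;
}
```

Note the check is done before the addition: testing b < a after computing b = a + 2 is already too late, because the overflow (the undefined behaviour) has happened by then.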
Re:TFA does a poor job of defining what's happenin (Score:5, Informative)
The TFA links to the actual paper. Maybe you should read that.
Towards Optimization-Safe Systems: Analyzing the Impact of Undefined Behavior [mit.edu]
Re:TFA does a poor job of defining what's happenin (Score:5, Informative)
"What every C programmer should know about undefined behaviour" (part 3 [llvm.org], see links for first 2 parts).
For example, overflow of signed values is undefined behaviour in the C standard. Compilers can make decisions like using an instruction that traps on overflow if it would execute faster, or if that is the only operator available. Since overflowing might trap, and thus cause undefined behaviour, the compiler may assume that the programmer never intended it to happen. A test such as x + 1 > x therefore always evaluates to true, and any code that handles its failure is dead and can be eliminated.
This is why a number of compilation optimisations that gcc can perform are disabled when building the Linux kernel. With those optimisations enabled, almost every memory-address overflow test would be eliminated.
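A typical casualty is the classic wraparound bounds check (a sketch; the names are illustrative, not from the kernel). Writing `buf + len < buf` relies on pointer overflow, which is undefined, so the compiler may fold the test to false and delete it. A well-defined check compares the request against the remaining space using unsigned values that cannot wrap:

```c
#include <stddef.h>

/* Undefined version the compiler may silently delete:
 *     if (buf + len < buf) reject();
 * Defined version: is a request of len bytes at offset within a buffer
 * of size bytes in bounds? Both subexpressions are guaranteed not to
 * wrap, so the compiler must evaluate them as written. */
int request_fits(size_t offset, size_t len, size_t size) {
    return offset <= size && len <= size - offset;
}
```

The order of the two conditions matters: offset <= size is checked first so that size - offset is only evaluated when it cannot wrap around.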
The paper gives examples (Score:5, Informative)
The article doesn't summarize this very well, but the paper (second link) provides a couple of examples, including one from the Linux kernel. The basic issue is that optimizers make aggressive inferences from the code based on the assumption of standards compliance, while programmers sometimes write code that violates the C standard, particularly in corner cases. Many of these violations appear to be attempts at machine-specific optimization, such as a "clever" trick from Postgres for checking whether an integer is the most negative number possible.
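A common form of that trick (an illustration, not Postgres's exact source) relies on the fact that on two's-complement machines negating INT_MIN wraps back to itself, even though that negation is undefined behaviour in C:

```c
#include <limits.h>

/* Undefined "clever" version: for x == INT_MIN, computing -x overflows,
 * which is undefined behaviour, so an optimizer may assume this never
 * returns 1 and transform the code accordingly. */
int is_most_negative_ub(int x) {
    return x != 0 && x == -x;
}

/* Well-defined version: just compare against the limit directly. */
int is_most_negative(int x) {
    return x == INT_MIN;
}
```

The defined version compiles to the same single comparison the "clever" one was trying to achieve, without giving the optimizer any licence to rewrite it.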
The remainder of the paper goes into the gory Comp Sci details and discusses their model for detecting unstable code, which they implemented in LLVM. Of particular interest is the table on page 9, which lists the number of unstable code fragments found in a variety of software packages, including exciting ones like Kerberos.
Re:TFA does a poor job of defining what's happenin (Score:2, Informative)
The first mistake was using signed integers. Unsigned integers always have well-defined overflow (modulo semantics), which makes it easier to construct safe conditionals.
Careful with the terminology, though: in C and C++, unsigned arithmetic is defined to wrap modulo 2^N, so the compiler may not remove a wraparound check on unsigned values. It is only with signed integers, where overflow is undefined, that the compiler is allowed to perform that optimization.
Know your C (Score:4, Informative)
Re:News flash (Score:5, Informative)
Didn't RTFA because this is /., but I'd guess that it's code that works now but is fragile under a change of compiler, compiler version, optimization level, or platform.
Yes, you didn't RTFA, because your definition actually makes sense. TFA defines "unstable code" as code with undefined behavior. TFA also claims that many compilers simply DELETE such code. I have never seen a compiler that does that, and I seriously doubt it is really common. Does anyone know of a single compiler that does this? Or is TFA just completely full of crap (as I strongly suspect)?
Re:News flash (Score:5, Informative)
You probably haven't used any desktop compilers.
Re:News flash (Score:4, Informative)
I have never seen a compiler that does that, and I seriously doubt if is really common. Does anyone know of a single compiler that does this?
The only compilers I know of that definitely do this are GCC, LLVM, ICC, Open64, ARMCC, and XLC, but others probably do too. Compilers use undefined behaviour to propagate unreachable state and aggressively trim code paths. There's a fun case in ARM's compiler, where you write something like this:
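The snippet itself didn't survive the quoting, but from the description it was something like this (a reconstruction, not the poster's exact code):

```c
/* Reconstructed sketch: x has 5 elements, but the loop counts to 10,
 * so x[i] is out of bounds, and therefore undefined, for i >= 5.
 * Under the reasoning described here, the compiler may turn this whole
 * function into an infinite loop. Never call it. */
void broken(void) {
    int x[5], y[5];
    for (int i = 0; i < 10; i++)
        y[i] = x[i];
}

/* The defined version keeps the bound inside the array; the compiler
 * must compile it as written. */
int copy5(const int *src, int *dst) {
    for (int i = 0; i < 5; i++)
        dst[i] = src[i];
    return dst[4];
}
```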
The entire loop is optimised away to an infinite loop. Why? Because accessing array elements past the end of the array is undefined. This means that when you write x[i], either i is in the range 0-4 (inclusive) or you are hitting undefined behaviour. Because the compiler can do anything it wants in cases of undefined behaviour, it is free to assume they never occur. Therefore it assumes that i is always less than 5, so after i++ it is still less than 10, and the loop never terminates. Since the body of the loop has no side effects, it can be elided; since the declarations of x and y are then never read by anything with side effects, they can be elided too. The entire function becomes a single branch instruction that jumps back to itself.
If your code relies on undefined behaviour, then it's broken. A compiler is entirely free to do whatever it wants in the cases where the behaviour is undefined. Checking for undefined behaviour statically is very hard, however (consider trying to check for correct use of the restrict keyword: you need accurate alias analysis of the entire program), and so compilers won't warn you in all cases. Often the undefined behaviour only becomes apparent after inlining, at which point it's difficult to tell what the source of the problem was.