Bounds Checking for Open Source Code? 90
roarl asks: "Is anyone working on an Open Source bounds checking system? (A system that checks a program at runtime for array out of bounds access, reading uninitialized memory, memory leaks and so on). I've been using BoundsChecker for some time and believe me, there are situations where you know you are going to spend hours debugging unless you let BoundsChecker sort it out for you. But it annoys me that I have to transfer (and sometimes port) the buggy program to Windows each time. I'd much rather stay in Linux.
Insure works on Linux. I haven't tried Insure for some time, but last time I tried I wasn't especially impressed. Purify seems still not to support Linux, but on other Unix platforms it works great. The problem with all of these products is that they are so da*n expensive. So it makes me wonder, are all Open Source programmers doing without them? If so, what can we expect of the quality of Open Source developed programs? If not, is there a free alternative?"
A simple answer to a simple question... (Score:3, Funny)
Re:A simple answer to a simple question... (Score:2)
Re:A simple answer to a simple question... (Score:2, Interesting)
Of course, doing so doesn't do away with the problem entirely, it simply moves the problem up a level - how does one handle bounds checking when debugging a language interpreter?
Re:A simple answer to a simple question... (Score:2)
Re:A simple answer to a simple question... (Score:2)
I can't speak for any other implementation of Smalltalk or Common Lisp, but Squeak Smalltalk does bounds checking at runtime. It does it within the Smalltalk object-space via regular Smalltalk methods, not in the VM.
Re:A simple answer to a simple question... (Score:1)
Re:A simple answer to a simple question... (Score:1)
Re:A simple answer to a simple question... (Score:1, Flamebait)
<duck>
Re:A simple answer to a simple question... (Score:1)
While I don't agree with the logic, I understand it.
BoundsChecker works with C++ & Delphi, Insure++ with C/C++, and Purify is yet again C/C++. Translated in a roundabout kind of way, the original poster is asking "How can I get bounds checking for C++ with free software?". The response marked flamebait could be translated as 'use LISP', which could easily be considered flamebait for a C++ developer with a bloodlust for language holy wars.
Electric Fence (Score:5, Informative)
It's very "non-invasive" -- all you have to do to use it is link against it, and maybe set a few environment variables.
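For what it's worth, here's a tiny sketch of the kind of bug that workflow catches (the function name and the off-by-one comment are mine, purely for illustration):

```c
#include <stdlib.h>
#include <string.h>

/* Build normally with `gcc demo.c`, then relink with `gcc demo.c -lefence`:
 * efence places each allocation against an inaccessible page, so the first
 * out-of-bounds touch segfaults immediately instead of silently corrupting
 * the heap. */
char *dup_string(const char *s)
{
    size_t n = strlen(s);
    char *copy = malloc(n + 1);   /* correct: room for the NUL */
    if (copy)
        memcpy(copy, s, n + 1);
    /* With malloc(n) instead, the NUL lands one byte past the block;
     * plain glibc usually shrugs, efence core dumps on the spot. */
    return copy;
}
```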
Re:Electric Fence (Score:2)
Looks like you should have used it on that sentence
Re:Electric Fence (Score:2)
Something in my program was modifying freed memory. To detect this, efence doesn't really free the memory (EF_PROTECT_FREE), which makes it consume a huge amount of space. If your program does a lot of memory allocating and freeing, like mine did, the system runs out of memory and swaps until the cows come home.
I finally found my problem by changing my frees to clear the memory before actually freeing it - then my usual sanity checks would find the bad pointer.
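That scribble-before-free trick can be wrapped up in one place; here's a hedged sketch (the magic number, poison byte, and all the names are my own illustration, not part of efence):

```c
#include <stdlib.h>
#include <string.h>

#define NODE_MAGIC  0x600DF00Du
#define POISON_BYTE 0xDD          /* arbitrary scribble value */

struct node {
    unsigned magic;               /* sanity checks test this field */
    int value;
};

static int node_ok(const struct node *n)
{
    return n != NULL && n->magic == NODE_MAGIC;
}

/* Scribble over the block before freeing it, so a stale pointer
 * fails node_ok() instead of silently reading leftover data. */
static void node_free(struct node *n)
{
    memset(n, POISON_BYTE, sizeof *n);
    free(n);
}
```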
from the manpage [grm.hia.no]:
Re:Electric Fence (Score:2)
On the other hand, ElectricFence is very unintrusive. For small to medium-sized programs and/or libraries, you can use it in every compile, saving you the trouble of fixing those bugs later.
I would recommend them both. Always use ElectricFence, then use Purify to find those additional nasty bugs...
Lots of almost-complete solutions (Score:5, Informative)
Off the top of my head, and with the help of my bookmarks:
I personally had high hopes for the GCC BP project. If you feel like doing something that will earn you the admiration of millions, finish that code up. :-)
Re:Lots of almost-complete solutions (Score:1)
A general case (Score:5, Interesting)
Re:A general case (Score:3, Informative)
That's a good chunk of it, but there are several cases of access to initialized (and alloc'ed) memory that should also be detected:
- pointers to stale memory. Example: malloc, initialize, free, malloc again and get some reused memory that was left initialized before the free, and then attempted use of that data. (calloc zeros the memory, malloc doesn't)
- pointers exceeding array bounds. Example: int x[100][100], reading array element x[10][150] doesn't cause a segfault (but x[150][10] does)
- unexpected pointer alias (devious, and borders between just a regular bug and a bounds-checkable bug). The function doesn't expect the two pointers passed to it to point to the same area of memory (for example, memcpy pointers can't overlap, memmove pointers can). Incidentally, this assumption is usually a toggleable (& dangerous if you're not careful) optimization and can cause the compiler to generate 'bad' code - more limited languages (e.g. Fortran 77, but not Fortran 90) that don't have pointers can be more aggressively optimized! When I've got a function that could choke on this kind of thing, I usually code in a bunch of asserts to check for this case and raise a flag.
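The overlap assert described above can be sketched like this (ranges_overlap and copy_no_alias are hypothetical names; the uintptr_t cast is the usual pragmatic dodge, since comparing pointers into different objects is technically undefined in standard C):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Returns nonzero if [a, a+n) and [b, b+n) overlap. */
static int ranges_overlap(const void *a, const void *b, size_t n)
{
    uintptr_t x = (uintptr_t)a, y = (uintptr_t)b;
    return x < y + n && y < x + n;
}

/* The kind of guard you'd drop at the top of a memcpy-like routine
 * that assumes its arguments don't alias. */
static void copy_no_alias(char *dst, const char *src, size_t n)
{
    assert(!ranges_overlap(dst, src, n));
    memcpy(dst, src, n);
}
```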
Re:A general case (Score:2, Informative)
No. Try this code and see if it raises an error with Valgrind (spoiler: I just did and it doesn't):
P.S. The comment before this looks like crap because slashdot doesn't use the <pre> HTML tag. Ooops.
Re:A general case (Score:1)
Bounds checking gcc compiler (Score:5, Informative)
Valgrind (Score:2, Informative)
memprof? (Score:2, Interesting)
Have you looked into Immunix and StackGuard? (Score:2, Interesting)
While it may not be EXACTLY what you want, it may be MORE....
use a better language... (Score:1, Flamebait)
With the 80188, Intel actually introduced the bound instruction, which compares a register against a pair of upper/lower bounds and produces interrupt 5 if the register is too high or too low. Motorola's 680x0 CHK instruction does the same.
It would be useful if gcc produced debugging code to do static array bounds checking.
Good Question - Some Answers (Score:4, Informative)
I've found that ccmalloc helped me to find a lot of problems in C code. The output is more verbose than Purify, but it showed me where some real problems lay with my code.
Check out this site [colorado.edu] by Ben Zorn on free and other tools for this.
Insure++ (Score:3, Interesting)
It also does very detailed tracking of memory leaks, but can get a little confused when you store the last referencing pointer in a hashtable.
I think other than its somewhat clunky UI, price is the big killer. It takes a pretty fast machine to be able to use it much, and it has a large up-front cost plus maintenance (upgrades and support) fees. It's really too bad they don't have a program in place with someone like SourceForge to let people use Insure++ on the test machines, because that would not only be great advertising for them, but could also really help the open source developers.
Change languages. (Score:3, Insightful)
Use a different language. There are some things which C is appropriate for, but one of the things it's categorically not called for is when you have concerns about buffer-overflow conditions [*]. If this is a purely open-source, noncommercial project, do yourself and your career a favor: learn another language (one which doesn't have these sorts of problems) and write your app in that instead. You'll learn more, and you won't have to spend a dime on Purify or whatnot. If you go this route, I'd suggest Scheme; it's a beautiful LISP derivative.
If this is a commercial project, ask Management how married they are to C. In the overwhelming majority of cases, you can quietly substitute C++ without affecting the APIs one bit. Just wrap the external APIs in extern "C" and, inside the code, use C++'s beautiful std::vector instead of C-style arrays. Sure, you'll take a minor performance hit, but the increase in reliability will be well worth it.
Anyway, to try and give a (weak) answer to your question--instead of slapping a Band-Aid on the festering wound that is C memory management, you might want to think about doing away with the festering wound altogether. Use the right tool for the job--if C really is the right tool for the job, then fine, may God have mercy on your code. But if there are other, better, tools available... use them instead.
[*] OpenBSD manages to do pretty well with a C kernel, but that's because they're certifiably insane. It also impacts their dev cycle; they spend a great deal of time avoiding the pitfalls of C, so much so that it affects how much time they can devote to new development.
Re:Change languages. (Score:2, Offtopic)
I agree with everything you said. C has its place. It's small and dark and should be avoided by most.
Scheme is a sexy, sexy language. However, why not just use straight ANSI Common LISP? That's my preference for one main reason: CLOS. Scheme has only a handful of mediocre implementations of "CLOS-like" systems. Nothing really on par with CLOS that I can find.
For the uninitiated (you poor, poor people
Anyway, just my suggestion. Unless, of course, someone can suggest a good CLOS system for Scheme.
Regardless, have a great day guys,
Justin Dubs
Re:Change languages. (Score:2)
C has its place. It's small and dark and should be avoided by most.
I don't know if I'd go this far. On one of my pages (here [inav.net], or http://soli.inav.net/~rjhansen/c_relevance.html for the goatse.cx averse) I've got an essay--which I originally wrote intending it to be a response to a Web editorial I saw blasting every language that wasn't Java--which may be germane to the discussion here.
Short version: C has its place. Yeah, the place is small and dark and should be avoided whenever possible. But sometimes we don't get a choice of whether or not to avoid it, and when we're trapped in that small, dark place, C is your salvation. So I'm not going to knock C--but I will say that I generally avoid it whenever possible.
However, why not just use straight ANSI Common LISP
Didn't recommend it because I don't know Common LISP.
Scheme, on the other hand, has a very lightweight standard. It's easy to read, easy to understand. Sure, it probably misses out on some cool things that are in Common LISP, but I've yet to find an instance where Scheme has let me down.
Really--the only reason why I didn't recommend Common LISP was because I have a moral aversion to recommending languages I don't understand. If you like it, though, by all means, get down with your bad functional-programming self.
Re:Change languages. (Score:2)
About ANSI Common LISP: In terms of syntax and functionality, there really isn't that much to it. Its beauty is its simplicity. However, in terms of the available methods and packages, it is a bit of a beast. I'm still wading through it myself.
Honestly, to a certain extent I prefer Scheme. It's a bit more consistent and definitely simpler. The only thing really holding me back is, like I said in my last post, the lack of an object system of the same quality as CLOS.
If I can find an OO system for scheme that has method-combinations and supports functionality like
Do you have any recommendations?
Anyway. Thanks for the reply. Have a good night,
Justin Dubs
Re:Scheme - - (Score:2)
Justin Dubs
Re:Change languages. (Score:1)
Re:Change languages. (Score:2)
While the ANSI Common Lisp standard is comparably sized to the C++ standard, there is one important difference - it's actually really easy to read. :-) I think most Lisp programmers begin very early in their development to refer to and use the standard on a regular basis. I'm less sure about C++ programmers.
Re:Change languages. (Score:2)
I can still generally compile and run five year old (since last revision) C programs without too much trouble. Frankly, five year old C++ programs have a habit of failing compilation on the first file -- too much change in the compilers.
I don't think that simply moving to a functional language is an option for most people. I and others dislike using functional languages for larger programs.
As for lisp: first-class functions feel *right*, yes. They also end up causing code that is absolute hell to debug. Trying to find the code for the function that's being called from the current function can get really aggravating when you're working over someone else's code. I was puzzling my way through OpenLDAP code today, and function pointers alone make it frustrating to see what's going on in a program. When a language has good first-class functions (meaning the programmers use them all over), and particularly if we throw in continuations, it's rough on the poor maintainers. This is one thing that C++ did right -- templated code is a Good Thing for maintainers, much easier to read than code that uses function pointers or first-class functions.
OpenBSD manages to do pretty well with a C kernel
*snicker* Okay, find me a high performance Common LISP kernel.
Re:Change languages. (Score:2)
Hardly surprising given that five years ago there wasn't even an ANSI/ISO C++ standard. If you're using code that doesn't conform to the standard, it's not the language's fault if it fails to compile.
*snicker* Okay, find me a high performance Common LISP kernel.
I'm not a Common LISP hacker, sorry. But for non-C kernels, try BeOS (C++), try Oberon (Modula), try Plan 9... they vary from acceptable to excellent, and don't use C.
But if you really want a high-performance LISP kernel, I'd suggest looking at an (old) LISP Machine. Those babies were pretty sweet for the day.
Re:Change languages. (Score:1)
"But for non-C kernels, try BeOS (C++)"
Sorry, but there's not a single line of C++ in the BeOS kernel. Their motto was always: "No C++ in kernel code".
Re:Change languages. (Score:1)
Sorry, but there's not a single line of C++ in the BeOS kernel. Their motto was always: "No C++ in kernel code".
Well, that's too bad. Did you notice they went bankrupt too?
Maybe they should have. Or maybe not, since C++ isn't such a radical change from C, you still get the sticky stuff while only adding hairy stuff...
Re:Change languages. (Score:2)
Funny, I would tend to say exactly the opposite. Aside from syntax issues and verbose compiler errors, templated code is hard to step through in a debugger, it doesn't work too well with separate compilation, and most C++ compilers are quite buggy, biting your ass if you try something too fancy.
That doesn't mean I don't find C++ templates useful or interesting. Eventually these issues will get sorted out, maybe with a new language, or maybe just with new and better tools. But so far, they have rightfully proven themselves as a nightmare for the maintenance programmer (at least in my book).
Re:Change languages. (Score:2)
Re:Change languages. (Score:3, Informative)
vector does do bounds checking, but since it results in a (minor) performance penalty, operator[] (the normal method of vector access) is unchecked. If you want bounds checking, use at().
95% of the time, it is simply bad software engineering practice to use operator[] on a vector. The only time it's really acceptable practice is when (a) you're operating under severe performance limitations and (b) you have some other guarantee you won't hit an out-of-bounds condition.
Re:Change languages. (Score:2)
Re:Change languages. (Score:2)
The syntax you use isn't. The syntax which I commonly use--and which is commonly used by professional C++ coders--is. Bounds-checking is one of the biggest wins of the vector; discarding it, just because you can't be bothered to learn how to use the STL properly, seems exceptionally rash to me.
But anyway I see no difference than somebody saying "C is safe, just use this special at(pointer,index) function instead of [] and you will be fine"
There is no difference, save for this: a bounds-checked vector is part of the C++ standard library (via the STL) and is available on every C++ platform that's worth coding for. Even MSVC++'s shoddy STL implementation supports it.
Bounds-checked array access is not part of the C89/C90 spec (dunno about C99), and thus, if you want it, you have to do what the original poster does--bleed for it, via many different vendors.
The original advice I gave is still the advice I'm giving now. Use a different language. If bounds-checking is what you need, then use a language with support for bounds-checking built into the language.
C++ has this. C doesn't.
g77 (GNU Fortran) has it built-in... (Score:1)
Just as you would expect: the program checks bounds on any array access. (Used it a couple of months ago to track a really nasty bug in some ancient code.)
I doubt this would be easily portable to the C/C++ side of GCC, because in C you have myriad ways to access the same memory location (via different pointers).
Of course, the already-mentioned Electric Fence is a really nice tool for debugging malloc() problems (but not other types of memory overruns, like overrunning a static array).
The linker could put a 0xDEADBEEF after all arrays and verify that it is the same on program exit; might help some...
Paul B.
The solution to most of your debugging needs! (Score:5, Informative)
preconditions, postconditions, and invariants are the best approach to avoiding such errors. Will a bounds-checker detect if you access an element that is out-of-bounds in a view (subarray) of a larger array? Also, if you are developing a library, using assertions will also greatly assist any end-users who are not using a bounds-checking tool.
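To make the subarray point concrete, here's a minimal sketch (the struct and names are my own illustration): a checker that only knows the size of the underlying allocation can't flag an index past the end of a 3-element view, but an asserted precondition can:

```c
#include <assert.h>
#include <stddef.h>

/* A view (subarray) into a larger array. */
struct view {
    const int *base;
    size_t len;
};

static int view_get(struct view v, size_t i)
{
    assert(i < v.len);   /* precondition: index within the *view*,
                            not just within the backing array */
    return v.base[i];
}
```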
Re:The solution to most of your debugging needs! (Score:4, Informative)
This is particularly important in open source projects. Bob writes code that produces and uses a data structure and makes some assumptions about it. Now John makes a few improvements to the program, has no idea what assumptions Bob (who lives a continent away) has made, and modifies the data structures in a way that breaks Bob's code. John doesn't know what single change broke Bob's code, and Bob doesn't know all the things that John did that might affect his data structures. Liberal use of assert() will cost you nothing at runtime (compile with -DNDEBUG), takes only a tiny bit of extra typing, and is one of the very best weapons against program-spanning nasty errors.
But is assert() portable? (Score:1)
Re:But is assert() portable? (Score:1)
Re:But is assert() portable? (Score:2)
The assert that the info page is talking about is a command line argument, a weird gcc-ism. This can be safely ignored. If you're really interested, it's kind of a -D__OSType__, but umm, different. Try gcc -v -E - and look for all the -A..s if you're really interested to see what's set. Ignore this, any code that uses this should be shredded and the coder shot. It doesn't give any advantage, and it locks you to gcc. And while you're shooting the coder, shoot the gcc guy who called it assert, just adds to confusion unnecessarily.
The assert() that's your best coding friend is a debugging thing. It allows you to check important conditions that could lead to bugs. It's a macro that's turned off by -DNDEBUG, so on your release version, one compile switch and no check or runtime penalty.
Quickie example:
#include <assert.h>

struct array {
    int arraySize;
    int *elems;
};

int getSomethingFromArray(struct array *ptr, int elem)
{
    assert(ptr != NULL);
    assert(elem >= 0 && elem < ptr->arraySize);
    return ptr->elems[elem];
}
Contrived example, but you get the point. ptr should never be NULL; if it is, something's wrong. If you violate the array bounds, something else is wrong. So in either case, the associated assert() blows up, core dumps, and you see from the core file where it trashed. Once you debug it and get ready to ship, define -DNDEBUG and the assert()s become empty statements, and the compiler eliminates them. It's pretty cool.
assert() has its rules on how and where to use it, and any good C book will tell you these rules. You should also be comfortable with looking at core dumps with your debugger, at least stack traces to see where it crashed. If you get a core, try:
gdb progname core
and once you're in there, type where.
And as a general rule, if it seems that something is a lot of work, chances are someone else thought that, and either has written the stuff for you, or there's a better way. Us programmers is lazy. In this case, the NDEBUG define strips everything, no need for sed.
May you code in interesting times.
STL (Score:3, Insightful)
use malloc... (Score:2)
You can also set MALLOC_CHECK_ to 0 to get a malloc, like Windows and BSD have, that's safe against double frees and most off-by-one errors. Not useful for debugging, but it can sometimes make a buggy closed-source program run without dumping core. It's slower of course, but...
Re:use malloc... (Score:2)
Setting MALLOC_CHECK_ to zero makes it act the same but not abort or print messages. The weird end result is that malloc is safer, because a side-effect of the checking is that multiple frees and writes off the end of a buffer are tolerated. I think it may actually do the tests and then decide not to report the results, so it certainly is slower.
Creating a BoundsChecker-like tool (Score:1)
Re:Creating a BoundsChecker-like tool (Score:1)
splint.org (Score:1)
wow,
I've had splint.org [splint.org] in my sig for a while now. I think it's one of those projects that needs more attention. This project used to be called lclint but got renamed to splint.
There are lots of papers out there on static checkers. One good intro paper is at http://www.research.ibm.com/people/h/hind/paste01.ps [ibm.com]. This would give you a nice intro to pointer analysis, a subtopic in static analysis.
Re:splint.org (Score:2)
But that might be because I only use C for nasty low-level code. E.g. to implement a reference-counted pointer scheme, I would have to fight with lint on all kinds of "who owns this pointer" issues. And this would appear anywhere I used them in the program. I think this corresponds mostly to the third paragraph in section 4.5 of the linked-to paper.
I don't think static checking is useless, in fact I am very interested in the issue, and I'd love to be proven wrong!
Do you know of some examples where lclint/splint has been used with reasonable effort to find interesting bugs (bugs that wouldn't easily show up in non-static checkers) in complex pointer-handling code (i.e. something akin to the Boehm GC, or equally awful)?
Try YAMD (Score:1)
Extensive list (Score:1)
Some are commercial and some are freeware/public domain/whatever.
GNAT (Score:1)
Re: GNAT (Score:2)
> GNAT [gnat.com], an open source Ada-95 compiler, supports those checks.
The language also supports bug-resistant programming, e.g. -
Use scripting and VM languages, where possible. (Score:2, Insightful)
JSP & PHP are great for web sites. Perl & Python are great replacements for shell scripting, as well as most general-purpose stuff. LISP is great, if you're a purist. Java has its uses, to be sure.
The point of all of this is that built-in memory allocation, built-in garbage collection, and a lack of pointers is A Very Good Thing(tm). You basically don't have bounds-checking problems. In general, scripting and VM code won't break due to memory leaks and the like.
Interpreted code, in particular, is highly reliable. As an example, Perl code, if well written (which means it traps all errors, etc.), is rock solid. Python, I am told, is even more solid. C, on the other hand, is highly unstable. C++ is almost as bad, and VB code on M$ boxen breaks all the time as well.
These days, hardware, memory, and disk are SO cheap and fast that you *should* recoup almost all of the performance costs associated with interpreted/scripting/VM languages in four ways: 1) faster, easier coding; 2) easier debugging; 3) more portability; 4) more reliable software. Of course, in the case of Perl, you've got to force good style upon yourself, so items 1 and 2 may not apply sometimes....
I'd also avoid stuff that puts too much faith in the stability of dynamically linkable code. DLLs and COM objects in M$ land are a huge problem. It goes without saying that Linux's
C, C++, etc., have their uses, to be sure. People use C where it doesn't belong. It belongs in writing operating systems, interfaces, drivers, etc., but it isn't, for most intents and purposes, a good business language. C++ is better, but Java and *modern* scripting languages are even better, most of the time.
If we're going after "The Best Tool for the Job," I see that you need to balance among several different tensions: a semi-popular language (so you can get help, when needed), one that's well-documented (good books at your local book store and many web sites that cover it, for example), is highly portable (the larger, older, more successful, and more mature a project gets, the chances it'll get ported increase), does the job with a minimum amount of effort (planning, coding, testing, debugging, and documentation all go into this), won't crash unexpectedly (like C/C++/VB/assembly), runs quickly enough (with modern hardware and preemptive multitasking/multiprocessing operating systems, this isn't a big issue, most of the time), is easy to fix/alter (most scripting languages don't have a compile step, so the code is the executable, ergo, it's usually easier to fix), and is general purpose (not specialized).
Just as important, you need to avoid the tensions of "too many" or "too few" languages for a project. Having 1 language that tries to force the big square peg in the small round hole is just as bad as 10 languages in a small to medium-sized project. Working on a team illustrates this even more. While SQL, OS shells, XML, HTML, and JavaScript are all exceptions to the rule (they're usually the only/main way to accomplish a specific task), having one person writing in C, another in C++, one in Perl, one in Python, another in VB, and still another in Java is usually a ticket to disaster, for most projects.
My personal rule of thumb: Perl for batch processing, utilities, command-line scripts, and most data massaging; PHP for small to medium web apps; JSP for larger web apps, or those created on teams of about 4 or more people; Java for most apps, especially GUIs; C/C++/VB for really specialized stuff; and what ever else, if you've got to support old code (new code from the above list).
Re:Use scripting and VM languages, where possible. (Score:3, Interesting)
Develop in Perl with the flexibility of the interpreter and all the garbage collection and neato stuff built in.
When you hit a "stable" release version, use the O module to compile the code, either to Perl bytecode for faster loading, or to one of two versions of C code. One just spits out calls to the perl/system libraries; the other is standard C code.
glibc is your friend. (Score:2)
MALLOC_CHECK_
If you set the environment variable MALLOC_CHECK_ before running a program, glibc uses a slow but thorough variant of malloc to do some checking on buffer overruns, double-frees, etc... Setting MALLOC_CHECK_ to 0 makes it ignore problems, 1 causes it to print a diagnostic to stderr, and 2 causes it to print a diagnostic and abort(). All of this is in the glibc malloc(3) man page.
MALLOC_TRACE and mtrace()
If you "#include <mcheck.h>" in your source, you can call mtrace(3) at some point in your code. This function looks for the environment variable MALLOC_TRACE, naming a file to which it then logs all malloc(3)s, free(3)s, realloc(3)s and calloc(3)s. When your program is finished, you can run the mtrace(1) perl script (also supplied with glibc) to run through this log, and print out a list of all unfreed memory, all freed-but-unallocated memory, all double-freed memory and probably a bit more besides. It's really handy.
I tend to put the "#include <mcheck.h>" and "mtrace()" calls inside "#ifdef HAVE_MTRACE" guards, and then add "-DHAVE_MTRACE" to my CFLAGS when compiling debug builds.
The documentation for this can be found at http://www.gnu.org/manual/glibc-2.2.3/html_chapte
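Putting the guarded pattern together looks something like this sketch (HAVE_MTRACE is the poster's own flag, not a glibc one; build debug versions with -DHAVE_MTRACE, set MALLOC_TRACE to a filename, then feed the log to mtrace(1)):

```c
#include <stdlib.h>

#ifdef HAVE_MTRACE
#include <mcheck.h>
#endif

int run_with_leak(void)
{
#ifdef HAVE_MTRACE
    mtrace();                  /* starts logging if $MALLOC_TRACE is set */
#endif
    char *leak = malloc(32);   /* deliberately never freed: shows up
                                  in the mtrace(1) report */
    char *fine = malloc(16);
    free(fine);
#ifdef HAVE_MTRACE
    muntrace();                /* stop logging */
#endif
    return leak != NULL;
}
```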
malloc() and free() are weak symbols.
glibc's copy of free(3) is a `weak' symbol in the library. What this means is that you can write your own functions called malloc() and free() in your program, and those will be called all the time, instead of the proper ones. You can call the originals with _malloc() and _free(), or __malloc() and __free() (can't remember which; think it's the first pair), and do little extra checks and things yourself (such as filling memory with bogus data before returning, etc., to make sure you're not forgetting to zero some bytes here and there, for example).
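If you'd rather not fight the weak-symbol details, the same "fill with bogus data" idea works as a plain wrapper you call explicitly (dbg_malloc and the junk byte are my own illustration, not anything glibc provides):

```c
#include <stdlib.h>
#include <string.h>

#define JUNK_BYTE 0xAA   /* arbitrary; anything nonzero works */

/* Like malloc(), but fills the block with junk so code that assumes
 * zeroed memory (or forgets to initialize) misbehaves immediately
 * instead of working by accident. */
static void *dbg_malloc(size_t n)
{
    void *p = malloc(n);
    if (p != NULL)
        memset(p, JUNK_BYTE, n);
    return p;
}
```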
gdb is also really great and has loads of stuff that I've not found in other debuggers. Check out the manual sections on `ignore' (to ignore a breakpoint x times to catch the (x + 1)th malloc) and `commands' (to automatically print out variable values and continue, for example) w.r.t. breakpoints.
http://www.gnu.org/manual/gdb-5.1.1/html_chapte
http://www.gnu.org/manual/gdb-5.
Try Bell Labs vmalloc (Score:1)
Simple GCC macro makes bounds checking a snap (Score:1)
and very little overhead. The C macro Bound(), defined below, makes it very simple. Here is a demonstration:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
struct _bounds {
uint32_t lower;
uint32_t upper;
} __attribute__ ((aligned (4)));
#define Bound(X,Y) __asm__ ( "bound %0,%1\n\t" : : "r" (X), "m" (Y) )
#define UPPER_BOUND(X) (sizeof(X)-1)
#define LENGTH 15
static char test_array [LENGTH];
struct _bounds limits = { 0, UPPER_BOUND(test_array) };
void
bound_test (int index)
{
Bound (index, limits);
test_array[index] = 'a';
}
/*
 * We can invoke our test procedure bound_test() by entering
 * an array index on the command line. If the index is out
 * of range for the bound_test() procedure, the x86 "bound"
 * instruction will trigger a core dump.
 */
int main(int argc, char *argv[])
{
if (argc > 1) {
bound_test (atoi(argv[1]));
}
return 0;
}
Re:Simple GCC macro makes bounds checking a snap (Score:1)
That version of UPPER_BOUND only works for character arrays. Here is a more general version:
#define UPPER_BOUND(X) ((sizeof(X)/sizeof(X[0]))-1)
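A quick sanity check of the generalized macro (note it only works on true arrays; applied to a pointer, sizeof gives the pointer size, not the array size):

```c
#include <stddef.h>

#define UPPER_BOUND(X) ((sizeof(X)/sizeof(X[0]))-1)

/* With the element-size division, the macro gives the right answer
 * for any element type, not just char. */
static size_t demo_int_bound(void)
{
    int ints[15];
    (void)ints;
    return UPPER_BOUND(ints);   /* same answer as for char[15] */
}
```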