
Valgrind 1.0.0 Released

Anonymous Lazy Boy writes "Yesterday saw the official release of Valgrind 1.0.0. Valgrind is a C/C++ programmer's dream come true: effortless memory allocation checking, uninitialized memory access, leaks etc. Purify for Linux has arrived, only better: contrary to its commercial (non-Linux) sibling, checking is performed directly on the executable, no re-linking necessary. The technology behind Valgrind is highly fascinating and explained down to the very gory details in the documentation."
  • sorry, that was off topic, but as a C++ programmer this is happy news for me
  • Any reviews? (Score:5, Interesting)

    by ergo98 ( 9391 ) on Sunday July 28, 2002 @06:16PM (#3968815) Homepage Journal
    One thing I've found with automated QA products is that they usually have critical faults that prevent them from being realistically useful (for instance, many grind to a halt or give false positives in multithreaded apps). How's this product for real-world use? (And no, this isn't a "Read the Article!" question... the article is like a press release and hence doesn't answer my question).
    • I had to debug GDB, and Valgrind helped me find memory leaks in it and in gdbtk.

      GDB is pretty icky, so that's an ugly program to test it on. It also managed to debug my ARM/MIPS sim, which is small.

      Overall I give it 5 stars.


      john jones
    • Re:Any reviews? (Score:5, Informative)

      by Charles Kerr ( 568574 ) on Sunday July 28, 2002 @07:14PM (#3969004) Homepage
      I've been using Valgrind on Pan [], which is multithreaded, and it works fine. Maybe given more time I'll find features that I miss from Purify, but for now I'm very happy.

      Things I like better in Valgrind:

      • Valgrind works on Linux.
      • Valgrind doesn't require instrumenting each object file and library at build time. (This is a biggie)
      • Valgrind's run-time options are more flexible.
      • Valgrind works with both gcc 2 and 3.
      • Valgrind seems to run faster than Purify. (Different hardware and OSes, so this is a guess.)
      • Valgrind doesn't have a Motif GUI. ;)
      • Valgrind doesn't have an insane, broken license manager.
      • Valgrind's technical support is better. (Yes, I've dealt with both.)
      • Valgrind doesn't cost $2,364 [] per seat.

      Things I like better in Purify:

      • Purify can handle static libraries.
      • Purify makes it easier to disable errors/warnings from libraries out of your scope.
      • Valgrind doesn't work on Solaris, so I'm stuck with Purify for my day job. :)
      • Things I like better in Valgrind... Things I like better in Purify

        I've used Valgrind and Purify before and I find Valgrind much more useful. The big thing for me is that it runs on Linux, which is what I use for development at the office. This is the death of Purify for me, because I can get much higher performance Linux boxes than Solaris boxes for a reasonable price. The additional price of Purify to me is just adding insult to injury.

        I'm not sure what you mean by "Purify can handle static libraries", because I use static libraries with Valgrind all the time.

        The only thing that concerns me about Valgrind is the odd way it tracks uninitialized variable references: it only checks values when they are used for an index or a branch (as of the last time I read the documentation). Apparently it restricts itself to this case because people may legitimately copy around values or arrays that are uninitialized. I certainly don't do this, and I really do want to know when an uninitialized variable is first read.
      • I've used Purify for nearly a decade, and have been banging on valgrind for several months as well. I think that two of your three criticisms of valgrind are off base.

        valgrind copes with static libraries. The only requirement is that the executable be linked against at least one dynamic library, and glibc will do; this is needed to allow valgrind to get control. With Purify you need the .o files so you can re-link; in the analogous situation with valgrind you can always just link dynamically against the C library.

        Valgrind has a suppression file that is much like that of Purify.
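
For the curious, a suppression entry in recent Valgrind versions looks roughly like the following (the 1.0-era syntax differs slightly, and the name and library path here are made up); the file is passed with --suppressions=FILE:

```
# name is free-form; the library path below is hypothetical
{
   ignore_leak_in_third_party_lib
   Memcheck:Leak
   fun:malloc
   obj:/usr/lib/libthirdparty.so*
}
```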

        The only point you raise that is completely valid is that valgrind is x86-only. Those who really care might want to work on a port to other architectures, though it's a big job: you need a complete virtual processor.

        Purify's GUI can be quite useful, but it would be preferable, if someone wants to do the same for valgrind, to implement any GUI as a completely separate program. That way the KDE and Gnome people can assure that there are at least two of them. :-)

        Finally, to be fair I suspect that a Purify'd executable is faster. But then, you don't have to do a special, expensive link step, so the compile-debug-recompile flow feels faster with valgrind.

      • Re:Any reviews? (Score:2, Informative)

        by Sanga ( 125777 )
        # Purify makes it easier to disable errors/warnings from libraries out of your scope.

        Details about Valgrind suppressions at: .html #suppfiles
    • We've been using Valgrind to find memory leaks in my open source project []; it caught a few subtle memory leaks which we didn't catch in our six months of testing by hand.

      I've been very pleased with it.

      I can't comment on how easy it is to use, because other developers on the team have been using it rather than me.

      - Sam

  • An excellent tool (Score:5, Interesting)

    by alriddoch ( 197022 ) on Sunday July 28, 2002 @06:29PM (#3968858) Homepage
    Valgrind really is an amazing bit of software. When working on large applications which use many different libraries, it becomes harder and harder to work out where the bugs are, and all the free tools I have tried so far have done a very poor job of finding them. I have now been using valgrind for several months, and got 1.0 straight from the author by mail, having reported a few bugs in earlier versions. It speeds up finding those hard-to-reproduce bugs, and often shows up memory errors which you didn't even know were there. It is also excellent for detecting memory leaks, as it knows the difference between memory that has been genuinely leaked and memory which is not freed but still has a reference to it when the program exits. All the software I work on is now much more robust than it was a few months ago, and much of this I can put down to valgrind being available. This is the only free tool that comes close to commercial tools like Purify, and in many ways it is superior to some of the expensive high-end tools. The author is extremely responsive and helpful, and has been developing valgrind full time, self-funded.
  • by jelle ( 14827 )
    I've been waiting for this. I've seen the malloc debuggers and the like (electric-fence, gccchecker, etc.), but they're all incomplete, have problems with C++ code, or cover only allocated memory ('new'-ed objects, malloc()-ed data, etc.), not regular variables: statics, local variables, and so on.

    But valgrind seems to be just right. I gave it a quick tryout, and it is looking good!


    apt-get install valgrind.

    And all we need now is a gvalgrind and/or kvalgrind GUI, just like Purify has, and I'm all happy.

  • One of the many great things about purify is that (IME) it only slows down your code by 10-20%, which is small enough that you can always leave it in your code. Leaving it in for unit testing, integration testing, system testing, beta testing, etc., can make your life much easier.

    Valgrind, however, runs your code 20-50 times slower, which means you can't have it on all the time. This is unfortunate, for it looks like a great tool otherwise.
    • by cant_get_a_good_nick ( 172131 ) on Sunday July 28, 2002 @07:30PM (#3969050)
      I think this is a bit misleading: it's actually a Linux/x86 virtual machine. valgrind is an environment, not just a library you link to. You don't "enable" it on your binary; you specifically run something under this VM. It's more akin to running something through the debugger ("hey, let's do our daily/weekly valgrind run") than something you could have on all the time. Or maybe you do it when you have specific errors and want to smoke them out. It's a totally different type of tool.

      I think the VM concept is quite clever. It would be interesting to see debates about it. On the good side, it checks EVERYTHING, not just stuff you turned the switch on for, even bad system libraries (it has switches to turn these off so you don't get deluged by them). On the bad side, it's obviously Linux/x86 only. I guess it pays to keep your code portable. I'm in a SPARC/Solaris-only shop, but I could see myself keeping things portable enough to Linux to run this, say once a week, to ferret out bugs.
  • recursivity (Score:4, Funny)

    by alvi ( 95437 ) on Sunday July 28, 2002 @06:58PM (#3968950) Homepage

    [chicken]~> valgrind valgrind

    seems to work ok. But valgrinding a valgrinded valgrind causes some ugly errors and asks for a bug report. Well, I know this isn't fair. :)

    Anyways, this looks like a really sweet tool.
  • If you're worried about memory allocation, use garbage collection []

    And contrary to what you may think, it's quite easy to use:

    variable = new (GC) my_class;

    Or even easier: make your classes derived from gc.

    In C, you just replace malloc.

    And I have found that there is no slowdown when using a garbage collector. It's nice, and keeps the code clean. Try it someday.
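
The interface this comment alludes to is the Boehm-Demers-Weiser collector's. A minimal sketch, assuming libgc and its C++ header are installed (compile and link with -lgc; this will not build without them), might look like:

```cpp
// Sketch only: requires the Boehm-Demers-Weiser collector (libgc).
#include <gc_cpp.h>     // defines class gc and the (GC) placement tag

class my_class : public gc {    // collectible via inheritance...
    int payload;
};

int main() {
    my_class* variable = new (GC) my_class;   // ...or per allocation, as quoted
    // No delete anywhere: once `variable` becomes unreachable, the collector
    // reclaims the object.  In plain C, the analogue is replacing malloc()
    // with GC_MALLOC() from <gc.h> and simply dropping the free() calls.
    (void)variable;
    return 0;
}
```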

    • There are two camps when it comes to this sort of thing. One says that developer time is more important than processor time (you seem to fall into this one) and therefore GC is a great thing. I would say that 80% of development projects fall into this category.

      But in the other 20%, performance is more important than development time, and for these projects, GC is clearly a bad idea. In some situations even C++ is a bad idea. You really really wouldn't want to write an operating system with C++, much less with GC. There are times when developers need complete control, and high-level languages and features like GC take that away.
      • by p3d0 ( 42270 ) on Sunday July 28, 2002 @11:21PM (#3969752)
        You really really wouldn't want to write an operating system with C++, much less with GC.

        Be was written in C++, and so is K42 [], IBM's next big massively scalable Linux-compatible kernel. Some of the smartest people I have ever met work on K42, and these guys know C++.

        Also, GC doesn't necessarily add any overhead to programs: it depends on memory usage patterns, but clearly, being forced to free everything chunk-by-chunk as it's no longer needed can't always be the most efficient policy. (Otherwise, why do program call stacks use special-purpose storage management instead of the heap?)

        Having said that, it is true that a conservative collector is not suitable for all memory allocation needs.

        • by cpeterso ( 19082 ) on Monday July 29, 2002 @01:44AM (#3970121) Homepage

          The Be kernel was written in C, not C++. Be's user libraries were written in C++.

          Interestingly, though, Apple does use a simplified subset of C++ for Mac OS X device drivers. See Apple's IOKit.
        • Just because it's been done doesn't make it a good idea. You obviously know something about OS programming (though someone corrected you about the Be kernel). But you can't deny that the majority of modern operating systems are written in C, and not C++.

          My point is that there is no GC algorithm which you can write (ahead of time) for a given application that is better than the GC implementation I can custom-write for my application. I can analyze my code and decide for myself when memory should be freed. It's the same thing with compilers; compiler-generated code is never going to be better than the hand-written code of a good programmer.

          Now, everything is a tradeoff, and I'm all for compilers and GC when appropriate. But another thing I've noticed is that programmers get dependent on GC. More than that, a program written for GC (i.e., without any free() or delete calls) cannot be ported to a system which doesn't support GC.
      • Some folks on the lightweight-languages list have pointed out that malloc/free are not instantaneous, either, so if a program needs to allocate and deallocate lots of memory, using a GC can be more efficient than using malloc/free -- it all depends on the subtleties of the algorithm you use and the way your GC or malloc library is tuned.

        In any case, a GC doesn't always save you from memory leaks. Contemplate the difference between live and reachable objects. A live object is one that the program will actually use at some point in the future. A reachable object is one that the program could use. GCs reclaim un-reachable objects, not un-live ones.

    • OK, first, what we seem to agree on...

      • Yes, GC makes your code cleaner.
      • Yes, GC is easy to use.
      • Yes, GC usually does not slow down your code. (That is, what you gain in the ability to use better algorithms is usually greater than the cost of running a garbage collector.)


      Now, what GC does not do for you:

      • It does not solve all of your memory problems. C++ still lets you write past the end of memory blocks. (Not that you should be using C-style arrays in C++ if you can at all avoid it.)
      • It does not interact well with several widely used C++ idioms, such as the resource-acquisition-is-initialisation (RAII) idiom.

      The latter, IMO, is the more serious of the two. If an object holds a resource, then until it is destroyed, the resource is held. Even the best general-purpose garbage collectors you can find today do not guarantee a maximum time between an object becoming unreferenced and being cleaned up. This goes double for conservative GC, where the resource might not be freed until everything which looks like a pointer to the resource-holding object disappears.

      So in summary: I'm a big fan of GC, but it doesn't solve all my problems, and that's especially true in C++.

  • Correct md5sum (Score:2, Informative)

    by Anonymous Coward

    I have just verified that we have no evidence of a backdoor in valgrind.
    This is the correct md5sum:
    76c59f7f9c57ca78d733bd956b4d94ae valgrind-1.0.0.tar.bz2

    I will also put this information online, so you can check it via a second channel.

    -- martin
    P.S.: An AC claims that the above md5sum indicates a compromised archive, which is plain wrong!
  • It looks like it has its own instruction level simulator that does binary translation and runs a lot faster than Bochs. It may not try to simulate privileged instructions, but maybe that could be added, so you could run operating systems under Valgrind.

    Could some kind of merge be possible, adapting Bochs to use Valgrind's simulator without the malloc-checking stuff? Also, I wonder if Valgrind could be adapted to simulate other CPUs besides the x86.

  • by Anonymous Coward
    1.0 is available in unstable.

  • Valgrind makes you realize how much excess CPU time and memory we have available today. The thing has huge memory and speed overheads, yet today's machines are fast enough, and have enough memory, that valgrind's big-hammer approach to the problem works.

    Still, why not do it that way? Machine resources are cheap.

  • ... but I try not to dream in and/or about C/C++. It hurts. :)

    On the other hand, dreaming in perl is probably pretty close to the programmer's version of an acid trip. The colors!
  • For most applications, it just makes better sense to avoid these errors altogether by using a good garbage collector.

    An excellent implementation is the Boehm-Demers-Weiser conservative gc [] (commonly referred to as just "Boehm gc"). It can be used for C/C++, and is highly portable. It's a real-time, non-compacting collector (so you still get heap fragmentation, as when managing memory by hand, but collection times are shorter and portability is better), and it uses a conservative mark-sweep algorithm [] (briefly: it treats anything that looks like a pointer as a pointer, to avoid costly checks and to stay portable for C/C++).

    For a moderately large amount of garbage, the incremental collection pauses take less than about 5-10 milliseconds on a PIII-500 (hence it's a real-time collector), the algorithm scales fairly well, and it's suitable for all but the most time-critical (anything video-related) or memory-thrashing (I really don't know of any app that needs to be) programs. GC will speed up development time tremendously, and can eliminate segmentation faults and memory leaks for most programs. I really don't understand why more projects don't use it.

    That being said, Valgrind does seem extremely useful for projects that do need to allocate memory manually. It looks very convenient to use, and the thoroughness of the checks is impressive. The implementation does seem a little uncomfortable to me - it's certainly a lot of effort to write a whole virtual machine just for the task! The portability prospects aren't appealing either.

  • (let ((I (get '*lisp* 'programmer)))
    `(been I ,no-worry-about-GC)
    (setf (get 'I 'skill) '(good programming techniques))
    (learnt (have I) 'to-code
    :ease-of-use t))
  • A lot of people seem to be saying, "Just use a good GC".

    But from reading the Valgrind docs, it's more than just a GC. It checks for accessing memory through uninitialized variables, too... Very cool.

    Disclaimer: I have not used valgrind yet, just read the online docs
  • Win32 has a not-so-secret advantage: VirtualQuery(), a random-access, binary structure interface to /proc/pid/maps. It's painful being required to parse a sequential text file that does not quote filenames in order to ascertain the status of the address space.

    And wouldn't you really rather be told when the dynamic linker has added or removed modules? The current interface to DT_DEBUG, _r_debug.r_brk, _dl_debug_state might be suitable for a controlling process, but it's painful for a same-process debugger. For instance, on x86 there might be only one byte of code at _dl_debug_state(), so you cannot easily overwrite it with an arbitrary transfer of control. And if you use an int3 and SIGTRAP handler, then you cannot run gdb at the same time.

    • I'm not sure if this is the same as VirtualQuery(), but I think it would be nice if a lot of the structured files were divided into many more, much smaller files, so that, for instance, the filename you want is actually the contents of a single file. I would expect this is easy with procfs. It should also be done with all the machine configuration, but I expect reiserfs would be needed for that to be efficient. This would give the few advantages of the Windows calls (i.e. the registry calls do get you a single field without any further parsing) but also allow normal filesystem tools to be used.

      Of course I'm not sure if there is any chance of this happening because it will break all the tools everybody is using now...

  • No offence, but if you need a 3rd-party tool to check whether the output of the compiler (produced after the compiler found your input, your code, to be correct) is itself correct, perhaps it's time to take a step back and think about a better solution, one that fixes what I call the 'weak spot' of C++: you have too much stuff to take care of, and it backfires on you.

    Today, the software that's used to develop software should be there to help developers write solid code from the start, without overhead and without the necessity to program the plumbing code which makes your program logic run in the first place.

    Some people here have suggested that GC is a better solution than just feeding your compiled binary to yet another tool in the chain, and I agree. When you look at .NET and the Java platform, you'll see that there is no necessity for the overhead you have to program with C++, so there's also no need for tools like Purify or valgrind.

    For solutions where C++ is the only way to go, it's probably a welcome addition to the set of tools to work with, but for C++ in general it isn't IMHO, it only shows where C++ should be improved, or better: where the RTE of C++ should be improved so these tools are obsolete in the future.
    • It's not so much a problem of C++ as a problem of the sloppiness of the programmer. Keeping track of all the new and delete calls in C++ is difficult. However, GC as found in Java is not a magic bullet.

      It is possible to leak memory in Java too, by keeping references to objects you will never use again. (Circular structures, contrary to popular belief, are collected fine by Java's tracing GC; they only defeat plain reference counting.)

      Even worse, because objects stay around until the last reference to them is gone, it is possible to have several copies of the same object lying around. This can lead to errors that are very hard to find.

      For instance:

      An object A might contain a reference to an object B.
      An object A2 might also contain a reference to object B, but to an older version (due to an update error or the like).

      You think they both point to the same object B, and use A and A2 accordingly. Your program will not crash, because both references point to a valid object, but it won't function correctly either. It might fail only after a very long time, due to other errors induced by this one, which makes the original error very hard to find.

      Have fun,

      • It's not a magic bullet as in 'take this pill and you're free of trouble!', but it's a great start to finally do something about it.

        What I find fascinating is that when it comes to the lack of help C++ provides the programmer, the phrase 'but it's the sloppy programmer!' comes up a lot as the supposed reason why programs written in C++ have memleaks and other crappiness.

        This is proof that a lot of people are 'in denial'. True, memory leaks are possible in Java and also, for example, in .NET (CLR boxing stuff can lead to memleaks, albeit small ones), but most developers using platforms like Java and .NET will not create code that can lead to a memleak, buffer overflows, stack corruption, memory corruption, etc.

        C++ forces the developer to write EXTRA code to make a program run perfectly (i.e. without memleaks, stack corruption, etc.). To me, that's fine when there is no other choice (read: when technology isn't at a level where an RTE can be fault-tolerant and fault-preventing). But in 2002 technology IS at that level: RTEs can make sure developers don't make these mistakes, and languages using these RTEs can provide a working environment where the developer doesn't have to write that EXTRA code to make the logic run correctly; it's already in place, provided by the language/compiler combo and/or the RTE.

        Perhaps C++ will get extended to meet those requirements, but I fear a lot of the self-styled 'elitists' now using C++ will not use such extra features, simply because (according to them) then 'everybody' could use C++, and C++ would 'degrade' into a language of choice for the 'average' programmer instead of staying a divider between who's good and who's not.
  • by DarkHelmet ( 120004 ) <mark.seventhcycle@net> on Monday July 29, 2002 @07:41AM (#3970664) Homepage
    effortless memory allocation checking, uninitialized memory access, leaks etc

    It's not a leak damnit! The Operating System happens to either "Like" the program more, or feel sorry for it.

    Some OS's are more generous than others. Windows often is willing to sacrifice itself and die to keep its program's memory fed. A trait rarely seen in humans.

    Leave it to programmers to make memory allocation sound like such a cold thing. And leave it up to a program that sounds like "Meat Grinder" to undermine your OS's generosity.

    Bastards. Cold, heartless, ruthless bastards!

  • Valgrind is neat, but as other posters have noted, using it or Purify adds substantial overhead. I wonder why there are (to my knowledge) no binary instrumentation tools available for the x86 upon which one could build a non-intrusive, sampling-based profiler.

    ATOM, from DEC, is a wonderful tool that allows you to instrument existing binaries at nearly any level of granularity, but it only works on the Alpha. You can do amazing things with binary instrumentation tools, going way beyond the kind of low-overhead profiling I'm talking about. Etch, from Washington, seemed to fill some of that niche but appears to be dead and gone. I know that some at Microsoft Research would like to eventually release their improved version of ATOM called Vulcan (which can instrument running executables!), but in the meanwhile, I'm surprised that the open source community hasn't jumped in.
  • If people start using this the way I've seen Purify used... it will become an excellent excuse for shoddy programming practices, because now the PROGRAM can check it for you.

    Hopefully this will be used as a tool to check and repair rather than a tool to allow for crappier code.
