
C Coding Tip - Self-Manage Memory Alllocation 142

An anonymous reader writes: "The C programming language defines two standard memory management functions: malloc() and free(). C programmers frequently use those functions to allocate buffers at run time to pass data between functions. In many situations, however, you cannot predetermine the actual sizes required for the buffers, which may cause several fundamental problems for constructing complex C programs. This article advocates a self-managing, abstract data buffer. It outlines a pseudo-C implementation of the abstract buffer and details the advantages of adopting this mechanism."
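
For readers who want something concrete: below is a rough sketch, in plain C, of the kind of self-managing buffer the summary describes. It is not the article's own code; every name here (abuf_t, abuf_append, BLOCK_SIZE) is invented for illustration. Data is appended to a chain of fixed-size blocks, so growing the buffer never copies what has already been stored.

    /* Sketch of a self-managing append buffer: a chain of fixed-size blocks.
     * Hypothetical names; error handling kept minimal. */
    #include <stdlib.h>
    #include <string.h>

    #define BLOCK_SIZE 4096

    struct block {
        struct block *next;
        size_t used;
        unsigned char data[BLOCK_SIZE];
    };

    typedef struct {
        struct block *head;
        struct block *tail;   /* last block, so appends are O(1) */
    } abuf_t;

    static int abuf_append(abuf_t *b, const void *src, size_t n)
    {
        const unsigned char *p = src;

        while (n > 0) {
            if (!b->tail || b->tail->used == BLOCK_SIZE) {
                struct block *blk = calloc(1, sizeof *blk);
                if (!blk)
                    return -1;                 /* existing data stays intact */
                if (b->tail)
                    b->tail->next = blk;
                else
                    b->head = blk;
                b->tail = blk;
            }
            size_t room  = BLOCK_SIZE - b->tail->used;
            size_t chunk = n < room ? n : room;
            memcpy(b->tail->data + b->tail->used, p, chunk);
            b->tail->used += chunk;
            p += chunk;
            n -= chunk;
        }
        return 0;
    }

    static void abuf_free(abuf_t *b)
    {
        struct block *blk = b->head;
        while (blk) {
            struct block *next = blk->next;
            free(blk);
            blk = next;
        }
        b->head = b->tail = NULL;
    }

A caller initializes the struct to zeros (abuf_t buf = {0};), appends as much as it likes, then walks the chain or frees it; the obvious trade-off is that the data is no longer in one contiguous block.
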
This discussion has been archived. No new comments can be posted.

C Coding Tip - Self-Manage Memory Alllocation

Comments Filter:
  • Um (Score:4, Interesting)

    by be-fan ( 61476 ) on Wednesday January 07, 2004 @08:17PM (#7909044)
    *cough* garbage collection [hp.com] *cough*
    • Re:Um (Score:5, Interesting)

      by Screaming Lunatic ( 526975 ) on Wednesday January 07, 2004 @09:04PM (#7909407) Homepage
      Which means you don't have predictable destruction. Which means you don't have destructors. Which means you can't use idioms like resource-allocation-is-initialization.

      So in the presence of exceptions, you won't leak memory on the heap. But you will leak mutexes, file handles, etc. You need another idiom to handle those cases.

      In the .NET world, C# introduced synchronization blocks to handle the leaking mutex problem. But it is a pain in Managed C++ and VB.NET.

      Garbage collection is not the be all and end all.

      If I ruled the world, I would create a multi-paradigm (object-oriented, generic, functional, and modular support) strongly-typed low-level language that let you program at a high-level. A second high-level language that was loosely-typed, garbage collected, and could be interpreted or natively compiled. Then I would define a standard to interface the two languages.

      In other words, take C++ and add the concept of components/packages. Take Python and add the features (such as generics) that are missing from C++. And then define an interface between components written in both languages.

      Currently Boost.Python and SWIG exist. But I wish that they would just work automagically, every time I typed make at the command line or build in VC++.

      • Re:Um (Score:5, Insightful)

        by be-fan ( 61476 ) on Wednesday January 07, 2004 @09:08PM (#7909433)
        If I ruled the world, I would create a multi-paradigm (object-oriented, generic, functional, and modular support) strongly-typed low-level language that let you program at a high-level. A second high-level language that was loosely-typed, garbage collected, and could be interpreted or natively compiled. Then I would define a standard to interface the two languages.
        ----------
        You just described Scheme/CL/Dylan.
        • Scheme! I had to program in that for an intro CS class. Basically the point was to teach the class in a language nobody knew to filter out all the smartasses who had read an "intro to C++" book. Fun little language, though its applications are a bit limited...
          • Re:Um (Score:1, Insightful)

            by Anonymous Coward
            If you think that was the point, then you completely missed out. I guess that is to be expected from somebody that went to UT.
            • Heh, probably true, as a few classes down the line I decided I didn't want to be a programmer anyway :) I'm a sysadmin dammit :(
      • Re:Um (Score:3, Funny)

        by E_elven ( 600520 )
        > Which means you don't have predictable destruction. Which means you don't have destructors.

        My GC does something like this:

        1) Allocate memory, reference count 1 ...
        N-1) When ref count reaches 0, we call destroy() on the object ...
        N) Free the memory
        • by Yokaze ( 70883 )
          And something like this:
          1) Allocate objects A(1) (reference count 1), B(1) and C(1)
          2) A(1)->B(2) (A references B)... A(2)->B(2)->C(2)->A(2)
          3) Dispose of the root references to A(1), B(1), C(1)
          4) Have fun with your memory leak.
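
          For illustration, here is a bare-bones C version of the reference-counting scheme described above (names invented, no thread safety); note that, exactly as the cycle example shows, nothing here will ever free A, B and C once they point at each other:

            /* Minimal reference counting: illustrative only. */
            #include <stdlib.h>

            typedef struct refobj {
                int refs;                            /* step 1: starts at 1 */
                void (*destroy)(struct refobj *);    /* user cleanup hook */
            } refobj;

            static void ref_acquire(refobj *o) { o->refs++; }

            static void ref_release(refobj *o)
            {
                if (--o->refs == 0) {
                    if (o->destroy)
                        o->destroy(o);   /* step N-1: predictable "destructor" */
                    free(o);             /* step N: free the memory */
                }
            }
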
      • Re:Um (Score:3, Interesting)

        by egomaniac ( 105476 )
        Which means you don't have predictable destruction. Which means you don't have destructors. Which means you can't use idioms like resource-allocation-is-initialization.

        So in the presence of exceptions, you won't leak memory on the heap. But you will leak mutexes, file handles, etc. You need another idiom to handle those cases.


        Java is probably the most widely-used garbage-collected language in existence. I think I speak for all Java programmers when I say "WTF are you talking about?"

        It is true that you
        • Re:Um (Score:5, Interesting)

          by Curien ( 267780 ) on Thursday January 08, 2004 @11:01AM (#7915190)
          Java is probably the most widely-used garbage-collected language in existence. I think I speak for all Java programmers when I say "WTF are you talking about?"

          He's talking about the unpredictability of resource release using GC. If unpredictability isn't a problem, fine. If you need to synchronize your resources carefully (which is what a mutex is /all about/), then you've got a problem.

          Now, this article is about C, so let's compare the two.

          The post you're responding to wasn't about C: it was about a weakness of GC compared to, say, RAII (which is the idiom C++, among others, uses). But just for fun, let's go on to see how little you know about C.

          Java: failing to close a file == usually no problem whatsoever

          Unless you need to open the file again before the garbage collector decides to reclaim the handle.

          C: failing to close a file == permanently leaked handle

          Bzzt... wrong. All file handles are released upon program termination. I like how you used '==' to try to impress people with your programming skillz, though.

          As far as the other case you mention, mutexes, goes, Java has two means of providing mutual exclusion. The "synchronized" keyword

          Wait, wait... you're saying... hold on a second, now... that Java uses a different idiom to handle mutexes? That's exactly what the parent post said it would have to do... because GC isn't as useful as RAII when it comes to general resource allocation (not just memory).

          But you make it sound as if garbage collection is a step backwards from malloc/free

          He made no such comparison. He compared it (unfavorably) to RAII.
        • Re:Um (Score:2, Interesting)

          by moocat2 ( 445256 )
          Yes, the article is about C, but the poster was talking about predictable destruction, so he was not referring to C when dissing GC.

          My guess is that he or she was referring to C++ using the RAII (resource acquisition is initialization) paradigm. Every resource is wrapped in an object and all objects are put on the stack. Voila, no resource leaks.

          Also, I think your example is far from complete. While failing to close a file handle may be "no problem whatsoever" for the application itself, it can be a pr
      • Re:Um (Score:2, Insightful)

        by Mikkeles ( 698461 )


        If I ruled the world, I would create a multi-paradigm (object-oriented, generic, functional, and modular support) strongly-typed low-level language that let you program at a high-level. A second high-level language that was loosely-typed, garbage collected, and could be interpreted or natively compiled. Then I would define a standard to interface the two languages.

        In other words, take C++ and add the concept of components/packages. Take Python and add the features (such as generics) that are missing from

        • I don't think of C++ as strongly typed. typedefs don't enforce, e.g., different meanings of ints (so number of apples vs. number of oranges can't really be distinguished without creating multiple classes).

          Would you call C++ strongly typed if the keyword were "typealias" instead of "typedef"?

          Even enums aren't distinguishable from ints!

          Bzzt, wrong! Try to assign an int to an enum without a cast... You'll get an error.

          • "Would you call C++ strongly typed if the keyword would be "typealias" instead of "typedef"?"

            I'm afraid I only know of 'typealias' in Fortran (where it is synonymous with 'typedef') and in the Meta Object Facility (MOF) of CORBA. I have found reference to a python class of that name, but know nothing of it.
            Would you be able to provide me a reference? Ta.

            Even enums aren't distinguishable from ints!

            Bzzt, wrong! Try to assign an int to an enum without a cast... You'll get an error.

            You are correct in

            • I have found reference to a python class of that name, but know nothing of it. Would you be able to provide me a reference? Ta.

              I was trying to make it clear that typedef doesn't define or declare a new type, but an alias to an existing type. IOW the keyword "typedef" is misleading, but it doesn't make C++ weakly typed.

              The cast, however, provides no type checking.

              Yes, that's a design feature of the language. The compiler errors out if you implicitly try to cast incompatible types, but when you explicitl

      • If I ruled the world, I would create a multi-paradigm (object-oriented, generic, functional, and modular support) strongly-typed low-level language that let you program at a high-level.

        And if I ruled the world, me and Salma Hayek would... I'm sorry, what was the question?
      • Which means you don't have predictable destruction. Which means you don't have destructors. Which means you can't use idioms like resource-allocation-is-initialization.

        In standard Python you have both garbage collection and predictable destruction.

        In other words, take C++ and add the concept of components/packages. Take Python and add the features (such as generics) that are missing from C++. And then define an interface between components written in both languages.

        If you are using Python for your

      • Re:Um (Score:2, Informative)

        by Whip-hero ( 308110 )
        So in the presence of exceptions, you won't leak memory on the heap. But you will leak mutexes, file handles, etc. You need another idiom to handle those cases.

        In the .NET world, C# introduced synchronization blocks to handle the leaking mutex problem. But it is a pain in Managed C++ and VB.NET.


        File/mutex/whatever leakage in the presence of exceptions? Two words:

        unwind-protect
  • Hmmm... Didn't this go into Unix kernels almost a decade ago? Thank you, Sun
  • by c ( 8461 ) <beauregardcp@gmail.com> on Wednesday January 07, 2004 @08:34PM (#7909151)
    Anyone who's done C coding for more than, oh, a day would have already figured that out. It's not a coincidence that every programming language that doesn't have "smart" arrays built into the language ends up with some sort of buffer class (Java's ByteStream class, C++'s stream IO buffers, etc).

    The fundamental problem is that this sort of thing needs to be done at the C library level. And if it's not done in a flexible fashion, you end up with a library call that rarely gets used. Anyone used hsearch() lately?

    If only clib streams (FILE* and friends) were extensible, this article would never have had to be written.

    c.
    • The fundamental problem is that this sort of thing needs to be done at the C library level. And if it's not done in a flexible fashion, you end up with a library call that rarely gets used. Anyone used hsearch() lately?

      Take a look at Vstr [and.org]. I think it's pretty flexible ... it certainly has much better-researched documentation than the content of this IBM "research" article.

      • Comprehensive, that's for sure, but the examples look like it's also reinventing FILE* in the io_* API.

        glibc's fmemopen() moots most of the IBM article, I think, but since I don't code exclusively in a glibc environment... Grrr... If only POSIX specced out FILE* a bit tighter...

        c.
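
        For anyone who hasn't met it, fmemopen() (glibc, later standardized in POSIX.1-2008) really is the short answer for "stdio into a memory buffer". A minimal example:

          /* fmemopen(): write with stdio into an in-memory buffer. */
          #define _POSIX_C_SOURCE 200809L
          #include <stdio.h>

          int main(void)
          {
              char buf[128];
              FILE *fp = fmemopen(buf, sizeof buf, "w");
              if (!fp)
                  return 1;

              fprintf(fp, "answer=%d\n", 42);
              fclose(fp);          /* flushes; the buffer is NUL-terminated */

              printf("%s", buf);
              return 0;
          }
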
        • Comprehensive, that's for sure, but the examples look like it's also reinventing FILE* in the io_* API.

          The io_* API is part of the examples, and not in the library itself. But it's a pretty small wrapper over what's in the library ... and, yes, the library itself was designed so it could do non-blocking IO [and.org] (which FILE* can't). So in that regard, yeh, I don't tend to use FILE* anymore.

          glibc's fmemopen() moots most of the IBM article, I think, but since I don't code exclusively in a glibc environmen

  • Please (Score:5, Funny)

    by YellowElectricRat ( 637662 ) on Wednesday January 07, 2004 @09:04PM (#7909405) Journal
    This article, I believe, has already been published in the well known programmers' journal "No shit Sherlock - monthly"
  • It's called realloc; that's what people should really use to self-manage memory allocation. Something that detects leaks is also needed.
    • How about using another language, like not C? Or use a GC library. I have heard of many, but it really should be part of the language so people will actually use it.
    • Re:realloc (Score:5, Interesting)

      by Permission Denied ( 551645 ) on Wednesday January 07, 2004 @11:20PM (#7910636) Journal
      Regarding realloc:

      When you call realloc, you're very likely to cause the data to be copied from the old buffer to the new buffer. This is very high overhead. The article discusses how to do similar things, but without this unnecessary copying (i.e., with lower overhead). It's actually not that interesting an article, as what it describes is hardly new, and I believe any competent programmer could come up with that solution when faced with the particular circumstances that inspired it.

      Realloc works by seeing if there is free memory after the end of the allocated block, and changing the block's size if so. Realloc can do this because it knows about the internals of the malloc/free implementation. If there is allocated memory right after the block in question, a new block must be allocated, as you cannot "move" the later block in a language like C where any memory location can be a pointer. You could try this kind of stuff in other languages (or in some bastardized C where you do not have direct access to memory, but go through more indirection, the next logical abstraction after the article), but when you start automatically finding/checking/updating memory pointers, you get into GC.

      You may be able to overcome some overhead on realloc if you move the problem down into the kernel. The kernel could play page table games so there is little or no actual copying involved, just updating of page tables. This would be fairly easy to implement, but I don't think anyone's done it because (a) flushing the relevant TLB entries could hurt performance more than the copying, and (b) the system call overhead might cost more than the copying. Realloc is generally only used for small buffers (due to programmers knowing about the copying overhead) and this trick would only have gains for large buffers spanning multiple pages. For small buffers, the library-level realloc could avoid the system call and do the copying itself, avoiding system call overhead and TLB entry flushes.

      This scheme I describe could make for an interesting paper (especially determining for what size of buffer and what type of program it has gains), but I doubt it would make much difference in real system performance as programmers avoid realloc for large buffers, and there are very few cases where one needs direct linear access to a large range of memory rather than being better-served by organizing that memory into some data structure.
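
      (For what it's worth, the way most code copes with that copying cost in practice is to grow geometrically, so a long run of appends only triggers O(log n) reallocations. A sketch, with invented names:)

        /* Amortize realloc's copying: double the capacity instead of growing
         * by exactly what's needed. Invented names, minimal error handling. */
        #include <stdlib.h>
        #include <string.h>

        struct dynbuf {
            char  *data;
            size_t len;
            size_t cap;
        };

        static int dynbuf_append(struct dynbuf *b, const char *src, size_t n)
        {
            if (b->len + n > b->cap) {
                size_t newcap = b->cap ? b->cap : 16;
                while (newcap < b->len + n)
                    newcap *= 2;
                char *p = realloc(b->data, newcap);
                if (!p)
                    return -1;          /* old buffer is still valid */
                b->data = p;
                b->cap  = newcap;
            }
            memcpy(b->data + b->len, src, n);
            b->len += n;
            return 0;
        }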

      • But linked lists are worse on performance than realloc.
        One way to use realloc better is to keep a current length and a max size, and to reallocate more memory than is needed at the time. This causes better locality than a linked list. And you can always use realloc to trim the block back to the current length. Linked lists cause many problems when it comes to prefetching and fast memory access, so they are slow when you are walking down a list. You can do a tree using one list also, by having an invers
        • Re:realloc (Score:3, Insightful)

          by Nevyn ( 5505 ) *

          But linked lists are worse on performance than realloc.

          Care to back that up with some [and.org] facts [and.org]?

          Also linked lists are very prone to leaking and very hard to figure out which one to free first.

          So you a) have the library code do that ... and b) have lots of tests. Of course you want to do both of those for something using a single block of memory too, and if you want it to be efficient it usually does something clever to avoid copies ... and so is probably more likely to screw up and use/deallocate th

          • ...But linked lists are worse on performance than realloc.
            Care to back that up with some facts.

            Linked lists require links -- this puts pressure on the L1 cache. Linked list access is also necessarily serial, meaning that you lose all parallel execution potential (from high level software all the way down to the CPU core). Sub-blocked operations require full iteration through the intersection. And finally, of course, there is the small matter of random access.

            • Re:realloc (Score:3, Informative)

              by Nevyn ( 5505 ) *

              Linked lists require links -- this puts pressure on the L1 cache.

              I said facts, not theories. Yes, I know all about cache and his friend random access. Maybe you'll take a noticeable cache hit when moving between nodes ... and yes, certainly doing memcpy() over a single large block X will be faster than doing it X/20 times for 20 byte nodes (20 was the figure given in the IBM "research" article). But that doesn't take into account the time taken to call realloc() each time to expand the large block ... o

        • Also note that many modern incarnations of linked lists use an allocator that requires sequential memory and makes its own chunks from that.

          AFAIK even glib has these.

      • When you call realloc, you're very likely to cause the data to be copied from the old buffer to the new buffer. This is very high overhead. [...]
        The way you deal with this is by not doing it very often. See: The Better String Library [sf.net].
      • Realloc can work well if you always malloc the largest buffer you possibly need, then realloc smaller when the real size is known. For that to work well, the allocations either need to be serial (that is malloc, load, realloc), or you also have a need for smaller blocks that will fit in to the space made by reallocing down.

        In some systems such as where you are the only process on the CPU, you can do compression (copy down) when waiting for external events (like an interrupt). At that point, the cycles and
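
        A small sketch of that allocate-big-then-shrink pattern (names are hypothetical, and strlen() stands in for whatever actually loads the data):

          #include <stdlib.h>
          #include <string.h>

          /* Grab the worst-case size up front, fill it, then give the
           * excess back once the real length is known. */
          char *read_record(const char *src, size_t max_len, size_t *out_len)
          {
              char *buf = malloc(max_len);
              if (!buf)
                  return NULL;

              size_t len = strlen(src);          /* stand-in for loading data */
              if (len > max_len)
                  len = max_len;
              memcpy(buf, src, len);

              char *shrunk = realloc(buf, len ? len : 1);
              *out_len = len;
              return shrunk ? shrunk : buf;      /* shrinking rarely fails */
          }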

  • by rmohr02 ( 208447 ) <mohr.42@[ ].edu ['osu' in gap]> on Wednesday January 07, 2004 @09:16PM (#7909511)
    Just like slashdot allocated extra space for the third "l" in "alllocation".
  • Hmmm (Score:5, Interesting)

    by JMZero ( 449047 ) on Wednesday January 07, 2004 @09:29PM (#7909606) Homepage
    Many small programs are no longer memory, or even performance, constrained. As such, a reasonable strategy for a lot of desktop software is to allocate a huge buffer at startup, and do repetitive flushes and complete reloads of data (always using the same pre-allocated buffers).

    This is simple to do, and avoids a lot of errors. It's also not much of a headline.
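
    A rough sketch of what that strategy can look like (the names and the 4 MB figure are mine, purely illustrative): one arena grabbed at startup, handed out with a bump pointer, and reset wholesale on each "complete reload".

      #include <stdlib.h>

      #define ARENA_SIZE (4u * 1024 * 1024)

      static unsigned char *arena;
      static size_t arena_used;

      int arena_init(void)
      {
          arena = malloc(ARENA_SIZE);        /* one allocation, at startup */
          return arena ? 0 : -1;
      }

      void *arena_alloc(size_t n)
      {
          n = (n + 15) & ~(size_t)15;        /* keep results 16-byte aligned */
          if (arena_used + n > ARENA_SIZE)
              return NULL;                   /* arena exhausted */
          void *p = arena + arena_used;
          arena_used += n;
          return p;
      }

      void arena_reset(void)                 /* the "complete reload" */
      {
          arena_used = 0;
      }
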
    • Re:Hmmm (Score:3, Insightful)

      by schmaltz ( 70977 )
      reasonable strategy: allocate a huge buffer ... do repetitive flushes and complete reloads of data ...

      Um, this is reasonable how? If you're coding on a 3GHz dream machine w/ 4GB of 400MHz RAM, there will be somebody out there who, quite reasonably, will want to run your code on a 64MB Pentium 133... Testing only on unencumbered machines makes for delusional developers. ;)

      I'm split: either yer trollin' /., or you gotta tell us what OSS projects you've contributed to, with the above philosophy!
      • You have to understand that most software people write isn't like what you're thinking. If you are writing software for a large audience and long term use, you obviously have to be more careful with your strategies. For many apps, though, you don't require this sort of robustness - and you probably aren't going to spend enough effort to do everything well. As such, if you're micro-managing memory then you are likely also creating memory leaks and bugs.

        Also, I'm not suggesting you allocate a 30 meg buffe
      • Note that in Linux and many other OSes, malloc and friends don't actually cause physical memory to be committed to the process; they just create an unmapped virtual area. RAM is only committed when a write to a page faults it in. Reading from such a virtual area will fault THE zero page in (one page in the whole system containing zeros); writing to a zero-mapped page will fault in a different page.

        So, you can allocate a 1GB buffer and read it all to verify that it contains only zeros, at a cost
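
        A quick way to see that behaviour (what actually happens depends on the OS and its overcommit settings; on a typical Linux box the malloc() succeeds and the read-only pass commits very little real RAM):

          #include <stdio.h>
          #include <stdlib.h>

          int main(void)
          {
              size_t size = (size_t)1 << 30;        /* 1 GB */
              unsigned char *p = malloc(size);
              if (!p)
                  return 1;

              unsigned long sum = 0;
              for (size_t i = 0; i < size; i += 4096)
                  sum += p[i];                      /* reads fault in the zero page */

              printf("sum: %lu (watch RSS in top while this runs)\n", sum);
              free(p);
              return 0;
          }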

    • Re:Hmmm (Score:3, Insightful)

      by Nevyn ( 5505 ) *

      This is simple to do, and avoids a lot of errors. It's also not much of a headline.

      While I agree the IBM "research" article is terrible, the idea behind it isn't.

      Having actually done tests [and.org] and benchmarks [and.org], I can safely say:

      • It's not the simplest solution.
      • It's certainly not anywhere near fast.
      • I like the strategy IBM describes. My strategy is obviously suitable in different places. My intended point was that I didn't figure either would be novel ideas (headlines) to most programmers.

        Your benchmarks, on the other hand, are a good headline. Going into a project, you usually have a fair idea of your options for memory management and how long they'll take to implement. However, you don't always have a good grasp of the performance implications - your breakdown is handy.
    • Many small programs are no longer memory, or even performance, constrained. As such, a reasonable strategy for a lot of desktop software is to allocate a huge buffer at startup
      It has been my experience that this was done when systems were constrained, i.e. test to see if there is enough system memory at start-up instead of running out of memory in the middle of execution. This was apparently so prevalent that some vendors (SGI for one) changed malloc to use a "first touched" memory model. In this model,
      • I'd suggest this approach only in small footprint sort of apps, or apps where performance footprint or convention means that not much else will likely be running (eg. fullscreen games). In many apps, the memory requirements are consistent enough that total demand is going to be the same either way.

        Mostly I just hate people to be doing lots of work in C to save 30k of system memory - and ending up with a buggy program full of memory leaks. Many apps have data sets this small (30k) and yet are spending lot
  • Wow! (Score:5, Funny)

    by arkanes ( 521690 ) <arkanes.gmail@com> on Wednesday January 07, 2004 @09:59PM (#7909812) Homepage
    Man, I never thought of that. An abstract memory buffer. What a concept! I don't need to define the lengths of everything at compile time then!

    Now, I'll need a nice short catchy name for it... oh! I know! I'll call it a heap!

  • Useful (Score:3, Interesting)

    by dtfinch ( 661405 ) * on Wednesday January 07, 2004 @10:20PM (#7909956) Journal
    But this is like teaching calculus students remedial math. The "Level: Intermediate" at the top of the article should have given that away.
  • One of the interacting parties defines the underlying memory allocation mechanism for data exchange. The other party always uses the published interface to allocate or free buffers to avoid possible inconsistency. This model requires both parties to stick to a programming convention that may not be relevant to the software's basic functionality and, in general, can make the code less reusable.

    And the proposed solution requires both parties to stick to the common abstract buffer interface.
    Hmmm!
  • by MarkusQ ( 450076 ) on Wednesday January 07, 2004 @11:58PM (#7911286) Journal

    From the article:
    • The pLostBlock points to the linked list's last memory block.

    pLostBlock?!? This almost sounds as if it's designed to leak!

    -- MarkusQ

    P.S. Seriously, I think this is a fine idea, if not particularly earth shaking. But the typo was too ironic not to point out.

  • Vstr (Score:5, Informative)

    by Nevyn ( 5505 ) * on Thursday January 08, 2004 @01:11AM (#7911982) Homepage Journal

    The article basically proposes a very bad implementation of Vstr [and.org]; most of the advice was extremely simplified at best, but more likely just uninformed: an "efficient" abstract buffer that mixes shorts and pointers -- words almost fail me -- and the answer to "what do you do with the data when it's all in the buffer" is "let's just copy it back out again" (hey, what's a couple of extra copies between friends?). Representing in-memory object sizes with "long int" *sigh*.

    If you are interested in the article, go read this explanation of why you want it for security [and.org] and this explanation of why you want it for speed [and.org].

    Vstr is LGPL, has actual benchmark data behind the block sizes it picks, has an extensive test suite ... and has documentation for the many functions that come with the library (including a fully compliant printf-like function). Of course, I don't have a PhD ... but after reading this, you might well count that as a plus too

    • Representing in-memory object sizes with "long int" *sigh*.

      FWIW, this is not as trivial as one might think (from your project, I suspect you're aware of the issues involved).

      I've been working on a program that ptrace()s another program and has to keep track of ranges of memory. Just because I like doing things properly, I figured that using a void * to store memory range offsets would be sufficient. Problem is, C doesn't define some operations that I needed to do (like modulus) on pointers. So I get the possibility of casting to...an int of some sort. Grr. It took me quite some time to run across intptr_t, which is an integer type guaranteed to be able to hold a pointer.
      • by Nevyn ( 5505 ) *

        Representing in-memory object sizes with "long int" *sigh*.

        FWIW, this is not as trivial as one might think [snip ...] I figured that using a void * to store memory range offsets would be sufficient. Problem is, C doesn't define some operations that I needed to do (like modulus) on pointers. So I get the possibility of casting to...an int of some sort. Grr. It took me quite some time to run across intptr_t, which is an integer type guaranteed to be able to hold a pointer.

        Well I meant the sizes, not

  • Not News (Score:4, Informative)

    by SkewlD00d ( 314017 ) on Thursday January 08, 2004 @08:28AM (#7913812)
    Embedded systems do this; they have a pool of Buffers and a BufferManager that effectively lets you do your own memory management (and in some cases, static memory management). malloc() and free() are usually really slow, so if you can save 99% of those calls by reusing memory blocks, you can really speed up your programs.
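
    A toy version of that kind of pool (names and sizes are illustrative): a fixed number of fixed-size buffers handed out from a free list, so steady-state code never touches malloc()/free().

      #include <stddef.h>

      #define POOL_BUFS 32
      #define BUF_BYTES 1536            /* e.g. one Ethernet frame */

      union pool_buf {
          union pool_buf *next;         /* valid only while on the free list */
          unsigned char   bytes[BUF_BYTES];
      };

      static union pool_buf  pool[POOL_BUFS];
      static union pool_buf *free_list;

      void pool_init(void)
      {
          for (int i = 0; i < POOL_BUFS - 1; i++)
              pool[i].next = &pool[i + 1];
          pool[POOL_BUFS - 1].next = NULL;
          free_list = &pool[0];
      }

      void *pool_get(void)              /* NULL when the pool is empty */
      {
          union pool_buf *b = free_list;
          if (b)
              free_list = b->next;
          return b;
      }

      void pool_put(void *p)
      {
          union pool_buf *b = p;
          b->next = free_list;
          free_list = b;
      }
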
    • Heck, in some cases, malloc and free don't work very well.

      I do a lot of work in vxWorks. The version that I routinely use in supporting a legacy product (5.4) doesn't do any coalescing of the freed space. Thus, you can very easily kill the system by doing something like this:
      while (1)
      {
          ptr = malloc(rand());
          free(ptr);
      }

      Eventually you'll wind up where memory is so fragmented that it can't find an appropriately sized chunk for a given size, even though there's plenty of memory available.

      It's easier
      • I worked w/ a Metrowerks system that could not use operator new() or malloc(size_t) after system init. There was a line of code past which all memory had to be statically allocated from the master memory pool. We had a Buffer class and different-size Buffers you could alloc from our own BufferManager class, mainly for IP packet and RS-232 packet buffer passing. Strings were statically compiled. AFAIK, the system was fairly stable, and there generally weren't any memory leaks *grin*.
  • "And C++ programming languages, we own those, have licensed them out multiple times, obviously. We have a lot of royalties coming to us from C++"

    Darl McBride, SCO

    WTF?! It's true, he said that. Read more here [zdnet.com] and here [groklaw.net]
  • So use C++ already (Score:4, Insightful)

    by Animats ( 122034 ) on Friday January 09, 2004 @03:14AM (#7926062) Homepage
    Watching C programmers create home-brew objects is pathetic. If you need to encapsulate data structures, use C++.

    Yes, C++ has a host of problems and Stroustrup and the C++ committee refuse to fix them. But the STL is a huge improvement on malloc/free. (They still can't get auto_ptr right, though.)

  • I see some of the point in this, but what's wrong with:

    unsigned char *buffer = 0;
    b(&buffer); /* note: address of */
    free(buffer);

"Show me a good loser, and I'll show you a loser." -- Vince Lombardi, football coach

Working...