A Glance At Garbage Collection In OO Languages
JigSaw writes "Garbage collection (GC) is a technology that frees programmers from the hassle of explicitly managing memory allocation for every object they create. Traditionally, the benefit of this automation has come at the cost of significant overhead. However, more efficient algorithms and techniques, coupled with the increased computational power of computers, have made the overhead negligible for all but the most extreme situations. Zachary Pinter wrote an excellent article about all this."
An Obvious Fault (Score:3, Interesting)
Re:An Obvious Fault (Score:4, Insightful)
Re:An Obvious Fault (Score:2)
Re:An Obvious Fault (Score:2)
Kirby
Re:An Obvious Fault (Score:2, Interesting)
If a sort is O(n log n), then
T(n) <= c * (n log n) + k
bounds its worst-case running time, where c and k are constants and T is the running time of the algorithm being used. n is always the number of elements being sorted (for sort algorithms). So, the issue is what k and c are in each of the various algorithms. It might be that c and k are huge for heap/merge sort, but with quicksort's worst case at O(n^2), it'd take either a hugely massive c and/or k or a really small n for quicksort to still come out ahead.
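To make that concrete, here's a toy comparison with made-up constants (c = 8, k = 100 for the n log n algorithm, c = 1 for the quadratic one), showing how the constants decide the winner at small n:

```java
// Made-up cost models: constants dominate at small n, asymptotics at large n.
class Crossover {
    static double heapish(double n)    { return 8 * n * (Math.log(n) / Math.log(2)) + 100; }
    static double quickWorst(double n) { return n * n; }

    public static void main(String[] args) {
        for (double n : new double[] {4, 16, 64, 256}) {
            System.out.println(n + ": heapish=" + heapish(n)
                    + " quickWorst=" + quickWorst(n));
        }
    }
}
```

At n = 4 the quadratic curve is cheaper; by n = 256 the n log n curve has long since won.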
Re:An Obvious Fault (Score:2)
Big-Omega, Big-Oh, Big-Theta
If I recall from algorithms class
1) lower bound
2) upper bound
3) average case (or something similar)
So isn't qsort Big-Omega( n log n ), and Big-Oh( n ^ 2 ) ?
PS: "Oh" *is* a greek letter, right?
Re:An Obvious Fault (Score:2)
Re:An Obvious Fault (Score:3, Insightful)
Re:An Obvious Fault (Score:2)
You forgot to say "comparison" based. Sorting can be done in O(n), a la Radix Sort, which can be adapted to sort floating-point numbers in O(n).
i.e.
Radix Sort Revisited [codercorner.com] by Pierre Terdiman
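For reference, a minimal LSD radix sort sketch (my own, not Terdiman's code): non-negative ints, one byte per pass, four counting passes total -- O(n) overall.

```java
// LSD radix sort for non-negative ints: count, prefix-sum, stable scatter.
class RadixSort {
    static void sort(int[] a) {
        int[] out = new int[a.length];
        for (int shift = 0; shift < 32; shift += 8) {
            int[] count = new int[256];
            for (int v : a) count[(v >>> shift) & 0xFF]++;
            int pos = 0;                         // prefix sums -> start offsets
            for (int i = 0; i < 256; i++) {
                int c = count[i];
                count[i] = pos;
                pos += c;
            }
            for (int v : a)                      // stable scatter by this byte
                out[count[(v >>> shift) & 0xFF]++] = v;
            System.arraycopy(out, 0, a, 0, a.length);
        }
    }
}
```

Adapting it to floats takes an extra trick (flipping the sign bit so IEEE ordering matches integer ordering), which is what the linked article covers.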
--
"I'd rather be idealistic, so people are inspired at what might be,
Than be realistic and not have any hope of what could be."
Re:An Obvious Fault (Score:2)
Re:An Obvious Fault [in your post] (Score:3, Insightful)
Sigh. It's not a "feature" of other languages... (Score:3, Insightful)
...it's required by them.
Stack-based languages like the C family (including Java) don't need GC to operate correctly, but can use it if it's available. (Java just has it all turned on by default.)
By "correctly," I'm specifically leaving out memory leaks. Your program may leak, but it will still run correctly, give the right answers to computations, not suddenly lose track of variables, etc. (Right up until you run out of swap.)
Those "other languages" the author dumps a list of don't use GC just to free the poor programmer from the burden of thinking, or whatever. Nearly every one of those languages either has support for functional programming, or is centered around it. And in functional programming, you're creating functions on the fly.
Which means returning functions as data. Possibly involving local variables in the creating function. Which means that locally-declared variables have to keep existing after the creating function returns, even if the coder can't get to them anymore. And the only way to do that is to have the runtime system manage its own heap, which means a garbage collector.
So for all those languages, it's not an "ease of use" thing. It's a "there's no way for a programmer to even do it manually" thing. GC is the only option.
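A sketch of the shape of the problem, in modern Java (lambdas arrived long after this discussion; names made up): the captured local must outlive the creating function's stack frame, which only works because the closure lives on the GC-managed heap.

```java
import java.util.function.IntSupplier;

class ClosureDemo {
    static IntSupplier make() {
        int base = 41;              // a local of the creating function
        return () -> base + 1;      // the capture must survive make()'s return
    }
    public static void main(String[] args) {
        IntSupplier f = make();     // make()'s frame is gone...
        System.out.println(f.getAsInt()); // ...but the captured value lives on
    }
}
```

There is no point in the source where the programmer could sensibly "free" that captured value by hand; only the runtime knows when the closure itself becomes unreachable.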
Give me a break (Score:2)
Re:Sigh. It's not a "feature" of other languages.. (Score:3, Interesting)
What a thing to leave out. Memory leaks are one of the hardest-to-track-down
and most annoying kinds of bugs that we perpetually see in application after
application. Okay, crashes are more annoying and pervasive, sure. And
buffer overruns (which are not a problem in most languages that have GC,
albeit GC is not the reason they're not a problem). But memory leaks are
high on the list.
> And in functional programming, you're creating functions
Re:Sigh. It's not a "feature" of other languages.. (Score:2)
I knew some kid was going to start bitching and moaning about the memory leak comment. I'm not saying they're not important. I'm saying that one has nothing to do with the other.
C, C++, Java. None of these support closures or lambdas. C++'s Boost makes a good try, but none of them allow me to construct new functions using nothing more than standard la
Re:Sigh. It's not a "feature" of other languages.. (Score:2, Insightful)
> > create functions on the fly but is powerful enough for writing real
> > applications.
>
> C, C++, Java.
[Scratches Java off list of languages to learn.]
I know C and C++ have been traditionally used for writing applications, but I
have long been of the opinion that they're not really powerful enough for the
job. It takes several times as many programmer-hours as it ought to, to do
anything, from prototyping to new fea
Re:Sigh. It's not a "feature" of other languages.. (Score:2)
and most annoying kinds of bugs that we perpetually see in application after
application.
Well, there are plenty of applications that never need to release any memory (or shouldn't, for optimal performance), only shuffle it around, and that have well-defined points for releasing other resources (such as closing files and sockets) that can't be left to be done in the background. Such applications have no fear of memory leaks.
Re:Sigh. It's not a "feature" of other languages.. (Score:2, Interesting)
This is flat-out false. There are various compiled languages (compiled as
in compiled to native machine code, yes) that not only allow creating functions
on the fly but actively encourage it. Common Lisp is just one example. Yes,
garbage collection gets compiled in. (This is no weirder than compiling a
memory-management library into a C program, and actually being standardized
is an advantage.)
Besides that, the whole compiled-versus-
Re:those who claim it can't be done... (Score:2)
Point taken, I was inaccurate. By compiled language I meant basically non-interactive languages, ie languages that you can't make a meaningful interactive interpreter for (without changing their syntax at least a bit).
Though if you really wanted, you could do runtime compilation even in C. Just output C code, compile it into dynamic object, load it and call it. Just wrap doing all that into a nice function, and it's even simple to use,
Are you crazy? (Score:2)
You seem to have a mistaken impression of the way functional languages are implemented
Re:Sigh. It's not a "feature" of other languages.. (Score:2)
Correct memory management in the presence of heap-allocated mutable objects requires garbage collection, even in C.
There simply is no way around it: you can allocate an arbitrary graph of objects, and what can be freed depends on what is reachable from the roots.
And if garbage collection isn't built into the language, then the language has to be unsafe, like C is.
So, if you want a safe language with heap allocation
Under the Rug (Score:5, Informative)
A previous poster noted that most GC algorithms are distinctly unfriendly to virtual memory systems. They usually have similar problems with cache locality, which can result in an enormous slowdown, regardless of the time actually spent in the GC itself. A practical problem is that GC regimes are notoriously non-portable, so each new language implementation needs to have its (increasingly complex) GC redone.
A more fundamental problem is that memory is only one of many resources a typical industrial program must manage. GC takes over memory management, but leaves the other scarce resources -- file descriptors, sockets, mutexes, database connections -- to be managed manually, as in C. (Java has this problem, for instance.) "Finalization" simply cannot provide the necessary guarantees.
Given a resource management regime that can handle all these other important resources, as is commonly practiced in C++, memory becomes just another resource. Management is encapsulated the same way for all. A language that lacks the tools necessary to implement such a regime needs GC, so the presence of GC may actually (as in the case of Java) indicate a fundamental weakness in the language.
(Anybody who thinks languages like Haskell or ML are fundamentally more powerful than C++ must be unaware of the Boost Lambda library, and of FC++, a set of header files that implements Haskell language semantics for C++ programs. They get along fine without GC, as well.)
Re:Under the Rug (Score:3, Interesting)
Huh? The world's busiest e-commerce websites are largely written in Java. Just what is your definition of "industrial settings"? If you mean that Java isn't used much in a foundry, then I guess you're right.
Re:Under the Rug (Score:2)
Please explain to me exactly how you can implement a resource management system (or regime, as you call it) in C++ for, let's say, managing socket connections, that has no equivalent in Java. You are aware of this method [sun.com], right?
Then, please, further explain to me how a task performed by a software platform, in this case th
Re:Under the Rug (Score:2, Interesting)
Okie dokie (pardon the bad formatting):
I think you'll find that there's no equivalent to the above in Java.
This [att.com] may prove to be good
Re:Under the Rug (Score:2)
Ah, but you say... you have to manually call closeSocket() to close the socket! True, but that's no different than your implementation. The only difference is that instead of calling 'mySocket.closeSocket()', I have to call 'free tcpSocket'. Either way, the programmer is responsible for managing the resource
Re:Under the Rug (Score:2)
Yes, but in that case it's still the programmer and not the platform that's managing the resources. Building a nifty little lock management facade around the problem doesn't make it go away. You can, in fact, do the same thing in Java using an object pool and an abstract factory.
Re:Under the Rug (Score:2, Interesting)
There are only two real ways to do this in Java:
1) Use synchronized, and only return the accessor object if the appropriate monitor is held. Then the accessor object throws an exception for each method if you use it without holding that monitor.
Re:Under the Rug (Score:2)
If your program has an AWT/Swing GUI, your only way to stop the program is calling System.exit(). And if you call System.exit(), your objects are not finalized (that is: the finalize method is not called).
Re:Under the Rug (Score:2)
That is not true; you just have to ensure that the last (J)Frame is disposed of by calling dispose() on it.
Although there was a bug in a certain JRE version that didn't allow for GUI programs to finish correctly, even when all the frames were disposed, IIRC.
Re:Under the Rug (Score:2)
import java.io.*;

public class Test {
    public static void main(String[] args) {
        try {
            File f = new File("JavaTestFile.txt");
            if (f.exists()) {
                System.out.println("Testfile already exists");
            } else {
                ObjectOutputStream out =
                    new ObjectOutputStream(new FileOutputStream(f));
                out.writeObject("test data"); // write something so the file exists
                out.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Re:Under the Rug (Score:2)
Re:Under the Rug (Score:2)
> in C++ terms, to not doing any cleanup
> and relying on the OS to do it.
Yup. We've got an entire ruleset [sourceforge.net] in PMD dedicated to tracking down devious finalizer bugs.
Re:Under the Rug (Score:3, Informative)
I'll try to clear it up a little more with a better example. Say you're locking a file. In
Re:Under the Rug (Score:2)
Deprecated. This method is inherently unsafe. It may result in finalizers being called on live objects while other threads are concurrently manipulating those objects, resulting in erratic behavior or deadlock.
Enable or disable finalization on exit; doing so specifies that the finalizers of all objects that have finalizers that have not yet been automatically invoked are to be run before the Java runtime exits. By default, finalization on exit is disabled.
Re:Under the Rug (Score:2, Informative)
Java doesn't have destructors. It has finalizers, but they are called at unpredictable and occasionally awkward times (if at all). This presents a whole new challenge in terms of establishing invariants.
Consider, for example, the case of a mutex. In C++ I can encapsulate the acquisition and release of the mutex in an object (for an example of this, see boost::mutex::scoped_lock at www.boost.org), and know for certain that I will own the mutex object for the duration of the scope of the locking object.
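For contrast, the closest Java idiom (a sketch using the then-new java.util.concurrent; names made up): the release has to be written out by hand in a finally block, rather than riding on a destructor at scope exit.

```java
import java.util.concurrent.locks.ReentrantLock;

class LockDemo {
    static final ReentrantLock mutex = new ReentrantLock();
    static int counter = 0;

    static void increment() {
        mutex.lock();
        try {
            counter++;          // critical section
        } finally {
            mutex.unlock();     // runs even if the body throws
        }
    }
}
```

The guarantee is the same as scoped_lock's, but it depends on every caller remembering the try/finally; the C++ version centralizes it in one type.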
Re:err, what? (Score:2)
Re:err, what? (Score:2)
Re:Under the Rug (Score:5, Informative)
GC takes over memory management, but leaves the other scarce resources -- file descriptors, sockets, mutexes, database connections -- to be managed manually, as in C.
Ruby has an interesting approach using closures to handle manual resource allocation. One calls the function that allocates a resource, passing it a closure. The function allocates the resource, calls the closure, and then deallocates the resource (even if an exception occurs). Here's how you might write to a file the manual way (I apologize for the lousy formatting; I don't know how to trick /. into indenting):
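Something like this (hypothetical file name):

```ruby
# Manual resource handling: the programmer must remember to call close.
f = File.open("example.txt", "w")
f.puts("hello")
f.close
```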
That's the usual way, easy to get wrong: What if an exception occurs? What if I forget to call close? Here's the better way, calling File.open and passing it a closure:
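A sketch (same made-up file name):

```ruby
# Block form: File.open closes the file for us, even if the block raises.
File.open("example.txt", "w") do |f|
  f.puts("hello")
end
```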
File.open might use this common idiom:
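Sketched here on a hypothetical ManagedFile wrapper rather than guessing at File's internals:

```ruby
# The allocate / yield / ensure-release idiom.
class ManagedFile
  def self.open(path, mode)
    file = File.new(path, mode)   # allocate the resource
    begin
      yield file                  # hand it to the caller's closure
    ensure
      file.close                  # always release, even if the closure raises
    end
  end
end
```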
The "yield" calls the closure that was passed in, passing it the file object. The "begin...ensure" is like Java's "try...finally" construct, used here to make sure that the file gets closed whether the closure terminates normally or raises an exception.
This idiom doesn't solve all manual resource allocation/cleanup problems, but it's a pretty way to solve some of them.
I don't think Ruby invented this idiom, but I don't know where it came from. Perhaps Lisp: Everything seems to have come from Lisp.
Re:Under the Rug (Score:5, Informative)
Re:Under the Rug (Score:3, Insightful)
On the other hand, it can probably deal with transaction-style locks more easily than RAII can - although I've seen a system to handle that which uses RAII objects combined with functors (instead of closures). It works almost identically to the Ruby model.
Re:Under the Rug (Score:3, Informative)
Re:Under the Rug (Score:2)
I agree that this is an excellent Ruby idiom... very handy.
Re:Under the Rug (Score:3, Informative)
This code
using(DisposableObj x = new DisposableObj())
{
x.DoSomething();
}
is equivalent to
DisposableObj x = new DisposableObj();
try
{
x.DoSomething();
}
finally
{
x.Dispose();
}
So as long as the object's author put any "close/release" code into the dispose method, it will get handled automatically when you are done with it.
(Even if an exception occurs!)
Most C# objects that have Dispose() also have a Close().
For example, databases. You can Close() th
Re:Under the Rug (Score:2)
Any language with higher-order functions (which you are calling "closures"--an implementation strategy for higher-order functions) can implement
Re:Under the Rug (Score:2)
In Ruby, is there an analogous operation? How do you use it?
I don't remember seeing the transfer-of-ownership idiom used in Ruby. We make an object to handle a resource, and then just let everyone who needs the resource have a reference to the object. Garbage collection takes care of the rest (I suppose the garbage collector owns the resource; everyone else is just leasing it). That solves a big OO problem: Keeping track of who owns what. It gives us a different problem, which is losing control of w
Re:Under the Rug (Score:5, Insightful)
It depends on the language. Haskell, for example, has very different memory access patterns than Java. Being lazy, a value is produced only when it's time to be first consumed, at which point it often becomes garbage immediately. It follows that most of the garbage that a decent generational GC will be collecting will probably be in cache.
I'm one of those rarest of beasts, a programmer who regularly uses (and likes) both Haskell and C++. (Disclaimer: I'm not familiar with FC++, though from what I've read it doesn't really support lazy evaluation, which is one of Haskell's most important distinguishing features.)
From a reductionist point of view, of course, neither is more powerful than the other. However, even with Boost.Lambda and the like, I still find Haskell almost always allows for far more rapid development than C++ does, all other things being equal. Naturally, all other things are rarely equal, and speed of development is not always the greatest concern, and I won't be drawn into ranking one of my two favourite languages over the other.
Re:Under the Rug (Score:2)
Re:Under the Rug (Score:2)
However, database or file allocations are much less frequent (and, in a well-written program, very localized and isolated).
Just because you can't solve every problem, doesn't mean you can't solve the big one.
Re:Under the Rug (Score:2)
Re:Under the Rug (Score:2)
More like: I implement 10 linked lists/hashes/whatever, 50 arrays, and 10000 strings per program.
(regardless of the length of string/number of elements etc)
Therefore I have to get memory allocation right several thousand times.
On the other hand, I only open up a handful of database connections, and usually only two files. Therefore it is much easier to isolate those and make sure they are working right, vs. checking for thousands of pure memory leaks.
Re:Under the Rug (Score:2)
Because this is C++, I am using "new" and "delete". On the surface these seem to be about memory allocation, but they are really abstractions for ge
Re:Under the Rug (Score:2)
However, even in the WinForms world, in C# (and Java) you don't handle getting rid of brushes, etc.
You can "early" dispose them using the dispose() methods we talked about before, but the GC is also smart. It takes into account relative co
Re:Under the Rug (Score:2)
Re:Under the Rug (Score:3, Informative)
Well put!
Another important consideration is that where the programmer has the expectation that his garbage will be cleaned up for him, he will tend to assume that all of his resources will be cleaned up. This is clearly not the case. The seminal example for me was the use of database query result sets in C# - if you don't explicitly close them, they tend to hang around, and the next time you try to perform a query on the same connection, you'll as likely as not get an exception. Surprise!
Also, as some oth
Re:Under the Rug (Score:2)
Re:Under the Rug (Score:4, Insightful)
Side note: Boost Lambda and FC++ are impressive but ugly hacks with horrible syntax, lots of "gotchas" that make code not work (often related to operator precedence and order of evaluation), and compiler errors from hell. Probably not the best examples of the power of C++. (OTOH, maybe that makes them the perfect examples of the "power" of C++ ;-)
Re:Under the Rug (Score:2)
Re:Under the Rug (Score:2)
Now don't get me wrong, I like boost::lambda and use it a lot, but compari
Re:Under the Rug (Score:2)
But I think you're confusing two concepts here. Compile-time vs. runtime semantics is a whole other axis for comparing languages. We use C++ in circumstances where a "method not implemented" (in Smalltalk parlance) message thrown in the customer's face at random intervals is considered evidence of incompetence. We like compile-time verification and only wish we had more of it.
Of course it can be fun to have a compiler in your runtime system, and generate code for it on the
Re:Under the Rug (Score:2)
Actually generational collectors are fairly friendly w/r/t virtual memory.
GC takes over memory management, but leaves the other scarce resources -- file descriptors, sockets, mutexes, database connections -- to be managed manually.
Bogus argument. That's like saying steel-belted tires are more puncture resistant but they don't solve the problem of oil leakage so they're worthless.
Management is encapsu
Re:Under the Rug (Score:2)
If you like your memory management to be by reference counting, then they "get along fine without GC." But reference counting is an inefficient way to do memory management.
Also, I actually believe that copying garbage collection can be go
Re:Under the Rug (Score:2)
The goal is to enable encapsulation of resource management. In C++ one normally encapsulates management of all resources the same way. In most GC languages, as in C, you are left with no c
Re:Under the Rug (Score:4, Insightful)
For the same reason people in industry have kept programming in Cobol and Fortran, and for the same reason they keep producing software with all sorts of problems, bugs, and limitations.
A previous poster noted that most GC algorithms are distinctly unfriendly to virtual memory systems. They usually have similar problems with cache locality, which can result in an enormous slowdown, regardless of the time actually spent in the GC itself
Not true at all. Generational collectors generally achieve far better locality than malloc-style allocators.
A more fundamental problem is that memory is only one of many resources a typical industrial program must manage. GC takes over memory management, but leaves the other scarce resources -- file descriptors, sockets, mutexes, database connections -- to be managed manually, as in C.
Fundamentally, the point behind GC is not to make your life easier, it is to make it possible for the language to be safe. Without GC, a language with heap allocated mutable data structures just cannot be safe. GC generally cannot reliably manage any other resources besides memory and it is not meant to.
Given a resource management regime that can handle all these other important resources, as is commonly practiced in C++, memory becomes just another resource.
But memory isn't just any resource, memory is a resource that can contain machine pointers to other memory (as well as references to other resources).
The problem of resource management for memory is that of arbitrary directed graph reachability. And that is exactly the problem that a garbage collector solves, as efficiently as possible.
A language that lacks the tools necessary to implement such a regime needs GC, so the presence of GC may actually (as in the case of Java) indicate a fundamental weakness in the language.
C++ solves a common but limited subset of the resource management problem and then just declares victory. And even that false victory is not very satisfying because in order to achieve it, C++ has sacrificed runtime safety. (In fact, with that choice, it has also sacrificed efficiency, but you aren't going to believe that no matter what I say.)
Re:Under the Rug (Score:2)
How insightful to know precisely what I do and do not understand.
Of course, sloppiness in memory management (as practiced in all invisible GC systems) is far easier to tolerate than mismanagement of other resources. Even so, users complain legitimately about enormous memory footprints and code that spends most of its time idle, waiting on main memory. It may be that the majority of programs have no need to manage anything but memory, and for those,
Re:Under the Rug (Score:3, Informative)
Ah, yes. Any language which isn't procedural is "academia only".
There are many counter-examples, some of which I can't really talk about. Probably the most famous is Erlang, a functional language used to implement highly scalable, fault-tolerant telephone exchanges. Declarative symbolic languages like O'Caml are also widely used in bioinformatics, but somebody will probably dismiss that as "acade
Dilbert GC (Score:5, Funny)
a way to give the GC a hint? (Score:4, Interesting)
Re:a way to give the GC a hint? (Score:3, Informative)
is sufficient. If you really want to, you can call the garbage collector manually:
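A sketch (variable name made up):

```java
// Drop the reference, then nudge the collector.
class GcHint {
    public static void main(String[] args) {
        byte[] big = new byte[1 << 20]; // some large, no-longer-needed buffer
        big = null;                     // make it unreachable
        System.gc();                    // only a suggestion, not a guarantee
        System.out.println("done");
    }
}
```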
Re:a way to give the GC a hint? (Score:3, Informative)
Unfortunately, this doesn't force it to run. From the JDK 1.4.2 Javadocs for System.gc():
Calling the gc method suggests that the Java Virtual Machine expend effort toward recycling unused objects in order to make the memory they currently occupy available for quick reuse.
Note the "suggests". No guarantees...
Re:a way to give the GC a hint? (Score:2)
I've always wondered if the opposite would help. i.e. being able to say "never GC this allocation"
Many of my programs consist of objects that exist for the life of the program, and every time the generational GC hits I picture it scanning over all of these permanent objects, forcing page hits, swapping, and possibly thrashing.
Re:a way to give the GC a hint? (Score:2)
GC has always been efficient (Score:3, Insightful)
There hasn't been a "discrepancy in efficiency". Good garbage collectors have been comparable to, or better than, manual storage allocators for decades.
The perception of a "discrepancy in efficiency" has several causes:
Re:GC has always been efficient (Score:2)
Re:GC has always been efficient (Score:2)
"if a non-GC program gets sloppy about storage management, it crashes, if a non-GC program gets sloppy about storage management, it just runs slowly." What is it? Does it crash or run slowly? It can't do both!
Re:GC has always been efficient (Score:2)
And "GC program" is shorthand for "program written in a language with garbage collection".
Re:GC has always been efficient (Score:2)
How about one of the earlier comments to the effect that mark-and-sweep type algorithms page-fault all the memory used by an application? That has got to be inefficient, and since virtual memory is not under the control of the
Re:GC has always been efficient (Score:2)
GC wins. And here is why...
Most implementations of "micro-managed" memory use the allocate/free model. Programmers are very careful to allocate what they need, and free it when done.
But... allocation is usually very cheap. You have a big hunk of memory, and a "high water" mark. If the new allocation fits, just take it and advance the mark. Free is not so cheap: blocks need to be coalesced.
The GC approach is to give out the memory (same low cost as allocate), and simp
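The allocate side described above can be sketched as a bump allocator (hypothetical class; byte offsets standing in for pointers):

```java
// A "high water mark" allocator: allocation is just a pointer bump.
class BumpAllocator {
    private final byte[] heap;
    private int mark = 0;                    // the high-water mark

    BumpAllocator(int size) { heap = new byte[size]; }

    int allocate(int n) {                    // returns an offset, or -1 if full
        if (mark + n > heap.length) return -1;
        int at = mark;
        mark += n;                           // the entire cost of an allocation
        return at;
    }

    void reset() { mark = 0; }               // a copying GC "frees" in bulk
}
```

A copying or generational GC keeps this cheap-allocation property and sidesteps per-block coalescing by evacuating the live objects and reclaiming the rest of the region wholesale.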
Re:GC has always been efficient (Score:2)
The object handling the pool keeps some free space in the pool so that a burst of new small allocations won't cause a 'real' malloc.
This technique is used by KDE. It's all done behind the scenes, so as a developer, you don't need to worry about it too much.
Rik
Re:GC has always been efficient (Score:2)
Yes, and the same technique works in garbage collected systems and has been used for pretty much as long as garbage collectors have been around. Good garbage collectors actually give you extensive APIs to manage areas and pools in a way that the GC knows about and c
Re:GC has always been efficient (Score:5, Insightful)
What about them? Real-time garbage collectors give you guaranteed real-time responses.
I suspect that you have actually never used a real-time storage allocator of any form. The memory allocators that ship with major C/C++ compilers certainly make no real-time guarantees. The way people usually get real-time performance out of them is by pre-allocating large chunks of memory. Well, you can do that in garbage-collected languages as well.
GC are generally non-deterministic (they start and finish according to their own rules),
No, they don't. Just like with malloc implementations, their behavior may differ from implementation to implementation, but it is generally pretty well understood. It can usually also be controlled well.
Simple garbage collectors will only start a garbage collection when you ask for a block of memory and the request can't be satisfied; they don't just start up for no reason at all. A parallel garbage collector may run a thread alongside the main program without ever stopping it. Incremental collectors do a little bit of work each time you allocate. Real-time collectors guarantee well-defined maximum response times for allocation.
If the garbage collector in your language (Java?) doesn't do what you want, it's not a problem with garbage collection in general, it's a problem with the specific implementation your vendor has chosen to give you. Just like there are mediocre or bad malloc implementations, there are mediocre or bad garbage collectors.
How about one of the earlier comments to the effect that mark-and-sweep type algorithms page-fault all the memory used by an application? That has got to be inefficient, and since virtual memory is not under the control of the application, by definition there is nothing that can be done, except if the GC is directly under the control of the OS, which doesn't often make sense (it's not very flexible then).
Well, that comment is wrong. First of all, you don't have to use a mark-and-sweep collector. Most high-performance collectors are, in fact, generational and are very VM-friendly (more so than malloc/free in many cases). Second, operating systems have interfaces to their VM subsystems, so the GC can, in fact, control what is happening with paging--prefetching pages, etc. And they do. Even 20 years ago, Berkeley UNIX had system calls specifically designed to let Franz Lisp tell the kernel what it was doing. Third, a malloc implementation cannot move pointers around to make accesses more local or sequential--good garbage collectors do, so GC is actually superior in that regard.
The article itself says that there is no way to make a GC perform as well as or better than a finely tuned, hand-micro-managed allocator in every case.
You can "micro-manage" and "fine tune" in the presence of a GC as much as you can in its absence. But in the presence of GC, you have the freedom to be sloppy and your code will still run--so many people don't bother. In C/C++, you don't have a choice.
In languages that don't have GCs you can add one yourself (Boehm's GC works fine for C/C++, and is in fact used for GCJ, the GNU implementation of the Java language), with the benefit that you can turn it off if you don't want it for some reason, something you can't do in Java, for example.
No, that is backwards. In languages without GC, you cannot add a GC and get all the benefits from the GC. Boehm's GC, for example, may retain arbitrary amounts of garbage, and its lack of integration with the language and compiler means that it can't be anywhere near as efficient as an integrated GC. Boehm's GC is a great hack, and it works really well, but it is not something you can ultimately rely on. Furthermore, if you add Boehm's GC to a language without GC, you are still left with an unsafe programming language.
Secondly, languages with garbage collection often give you full control over the GC: you can enable it or disable i
Re:GC has always been efficient (Score:2)
How about one of the earlier comments to the effect that mark-and-sweep type algorithms page-fault all the memory used by an application?
Not all garbage collectors are mark-and-sweep. Also, copying GCs have much better fragmentation than malloc, so if you're concerned about kee
Feels like (Score:3, Interesting)
Although, the point the author made about CPUs being cheaper and faster, and how this is allowing the programmer to care less and less about mechanics so they can use this extra power to make programming a more expressive rather than mechanical practice, is interesting.
Personally, I see no problem with one day having high level application programmers who know nothing of hex, memory management or physical hardware but rather algorithms, computability and productions, etc. Of course, there will always be a place for the "computer programmer", but also a place for the "analytical abstractionist engineer".
The GC pitfall (Score:5, Insightful)
One pitfall that I've noticed basically comes along with the benefit of avoiding "micro-managed" explicit memory management -- there are a lot of Java coders who don't think at *all* about memory management, because they think it's all handled for them. Mix that in with an over-excitement about OO, and you get some impressively slow and non-scaleable code.
You DO need to understand, at least on a basic level, what's going onto the heap, and what the garbage collector has to do to keep up with your "garbage". Carefully nulling out objects that are going to be out of scope in a millisecond is just wasting space, but you should definitely keep an eye on what objects you're allocating within that loop that runs a million times. They're all going on the heap; are they all going to be on there at the same time? When are they going to be eligible for collection? Are they just Strings, or larger objects (which possibly create other objects when they are created)?
If you have to optimize a section of code, consider sticking to primitives and Strings (obviously you're balancing this against the cost of possibly less-maintainable code!), and don't forget that when you instantiate com.foo.Bar, the constructors of all of its superclasses also run, initializing any member objects they declare. And don't make a variable static for no reason -- it won't get collected with the object instance....
Two useful things to think about -- heap size (the objects you're actively using at a given moment, so they can't be collected), and churn rate (how fast you're creating and trashing objects). Object creation/destruction isn't as costly as it was with the early versions of Java (no, you probably don't need that Thread pool!). But any application that needs to scale requires some thought on memory usage and churn before you start coding.
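The loop-allocation point above can be sketched quickly (in Python rather than Java, just for brevity; the Point class and both functions are made up for the demo). Both versions compute the same thing, but one creates a short-lived object per iteration for the collector to chase, while the other reuses a single instance:

```python
# Hypothetical sketch: object churn vs. reuse in a hot loop.
class Point:
    def __init__(self, x=0, y=0):
        self.x, self.y = x, y

def churny(n):
    total = 0
    for i in range(n):
        p = Point(i, i)      # one short-lived object per iteration -> churn
        total += p.x + p.y
    return total

def reusing(n):
    total = 0
    p = Point()              # one object, mutated in place
    for i in range(n):
        p.x = p.y = i
        total += p.x + p.y
    return total
```

Same answer either way; the difference only shows up in allocation rate, which is exactly the "churn rate" the parent post is talking about.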
Re:The GC pitfall (Score:3, Interesting)
While you are entirely right, this is no different from previous generations of programming languages. You always do better if you have a bit of understanding of the wiring behind the board.
I'm sure that there were objections to high-level languages by assembler coders who objec
Re:The GC pitfall (Score:2)
Fair enough, though if you don't understand memory management in C you will know it, because you'll have massive memory leaks or serious, noticeable bugs.
I think I mistakenly gave the impression that I don't like GC. I love it, and I think it's definitely worth the few drawbacks -- I just wanted to point out that it's not a silver bullet. And I do get frustrated with inexperienced programmers who speak s
Re:The GC pitfall (Score:2)
Yeah, I think I clicked "Reply" with the idea of giving a few GC pointers, and ended up with a weird kind of elitist rant. I love GC, and it's totally worth the few tradeoffs (which are more education issues than anything else).
You might be minimizing the number o
Reference counting (Score:3, Informative)
I took a bit of time to go and read Wikipedia's page [wikipedia.org]
In the description they give, they mention that with reference counting GC the references among managed objects can be represented as a directed graph.
I know there exist algorithms to find cycles in such graphs, so I suppose these could be applied to this problem. Another proposal is to use a tracing GC to detect them. To which it was replied that this would be able to reclaim the memory but not to properly finalize the objects. I don't see why that would be true. I mean, if you have found a member of the cycle to be collected, can't you just finalize that one and let the whole cycle unravel itself? If there are cycles inside that cycle, just do it again on those, etc.
As I said, another common objection was the cost of updating the counters in multithreaded environments. Multiple solutions have been proposed, some more portable than others (using processor/platform-specific atomic increments, or deferring the update until it is really necessary and using standard mutex protection).
All this said, I try to understand a couple of things.
-I am no genius, so these ideas must not be new; what is the problem that can't be solved with them?
-Reference counting seems to integrate better into the runtime of the program. All the other techniques proposed seem to imply some monolithic operation on memory, summing up all the overhead at one time and doing the cleaning once in a while, with the possibility of becoming a bottleneck in heavily loaded systems. Reference counting, OTOH, seems to spread the cleanup out, continually adding a little overhead to the system but nothing that will bring the whole thing to a grinding halt. What have I missed?
Relevant references (Score:3, Informative)
A couple of relevant references for garbage collection: the following website (which unfortunately hasn't been updated for a while -- still, it's useful):
The Memory Management Reference [memorymanagement.org]
and of course Jones and Lins book, Garbage Collection: Algorithms for Automatic Dynamic Memory Management
Java doesn't have *a* garbage collector (Score:4, Informative)
Also, the default collector is a 3-generation one, not 2, at least as of Java 1.4.1. More details here [sun.com].
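As an aside, CPython's cycle collector is also generational, with three generations whose collection-triggering thresholds can be inspected (and tuned) at runtime:

```python
import gc

# CPython tracks container objects in three generations; these are the
# allocation-count thresholds that trigger a collection of each one.
thresholds = gc.get_threshold()
print(thresholds)   # typically (700, 10, 10) in CPython
```

Same idea as HotSpot's generations: most objects die young, so the youngest generation is collected far more often than the older ones.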
Re:Reference Counting... (Score:3)
One of my favorite little sayings applies here. I suggest you look for the original if you haven't seen it; I can't paraphrase it as well.
Circular References (Score:3, Insightful)
Re:Circular References (Score:2)
This is completely incorrect.
The mark/sweep algorithm will catch unreachable circular references--just read the article. All the popular Java VMs and MS's .NET implementation GC unreachable circular references.
Re:Reference Counting... (Score:2, Informative)
Refcounting can't collect everything if you have any circular references. It's
what Perl5 has, and we live with it, but Perl6 is getting real garbage
collection (mark-and-sweep I think, or at any rate something more advanced
than refcounting).
Re:Reference Counting... (Score:5, Insightful)
It can be.
Let's ignore circular references for a moment. To be honest, cycles don't turn up as often as people claim in programs where reference counting is done manually (or through smart pointers) because people are smart enough to know the issues and avoid them (e.g. by using weak references or other non-owning pointers to break cycles).
For a start, reference counting interacts badly with multithreading. The reference count has to be protected against concurrent updates, and that can cost a lot, especially if the count is already effectively protected in some other way (e.g. by only being used single-threadedly). This is such a problem that many C++ library vendors are doing away with reference counting in their copy-on-write std::basic_string implementations.
Secondly, every time you copy a pointer, you modify the reference count. Every single time. Sometimes (e.g. if you take a temporary local copy) that will be in cache, but not always. If there's contention between CPUs (see previous point), for example, the count will bounce between them. Sometimes it's an almost guaranteed cache miss.
Admittedly, this isn't such a big problem in C++-implemented reference counting, because the programmer is usually far more aware of what's going on with pointer copying and will go to some lengths to avoid copying, but it can cost if reference counting is automatic. Have a look at the Python source code some time and see just how much trouble it goes to to avoid manipulating reference counts.
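Python makes the per-copy cost easy to observe: every name bound to an object bumps its reference count, which sys.getrefcount reports (the reported number includes one extra reference for the call's own argument).

```python
import sys

obj = object()
base = sys.getrefcount(obj)   # includes the temporary reference held by the call
alias = obj                   # copying the "pointer" increments the count
assert sys.getrefcount(obj) == base + 1
del alias                     # dropping the copy decrements it again
assert sys.getrefcount(obj) == base
```

Every one of those increments and decrements is a memory write, which is exactly the cache-miss traffic described above.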
Re:Reference Counting... (Score:4, Interesting)
Also, in many environments you DON'T modify the reference count every time you copy a pointer; there's a concept called "borrowed references", used in Python, COM, and many other refcount schemes, that avoids some useless refcount updates.
Python (pre 2.0) used to do only refcounting, and did it much better than Java (using GC) in all respects except thread friendliness. Modern Python (2.0 and beyond) does both -- but it's extremely rare for the gc to be needed at all.
Re:Reference Counting... (Score:2)
Finalization (Score:2)
Re:Education? (Score:3, Insightful)
Re:The extreme situations are the only ones ... (Score:3, Interesting)
Wow, with this attitude I can see why you are worried about keeping your job.
Did you know that GCC uses a garbage collector? They found it too difficult to manually manage memory. Is GCC a trivial application?
In my program we write loads of decidedly non-trivial software all of the time that not only tolerates, but benefits greatly from GC.
Garbage collection is not appropriate for every task, bu