Fast Native Eclipse with GTK+ Looks
Mark Wielaard writes "The gcj team created a natively compiled build of the Eclipse IDE. The resulting binary starts up faster than with any traditional JVM, since there is no virtual machine to initialize and no slow byte code interpreter or just-in-time compiler involved. This means that gcj got a lot better since the last Slashdot story in December about gcj and Eclipse. Red Hat provides RPMs for easy installation. Footnotes has screenshots by Havoc Pennington of the Eclipse IDE with GTK+ widgets."
Re:Troll...Troll...Troll... (Score:5, Informative)
Java is still some 40 times faster than Python. As hardware speed continues to obey Moore's law, performance per clock cycle becomes less important. (For many areas of computing anyway.)
Good for QNX (Score:3, Informative)
Isn't that how .NET languages like C# work? (Score:3, Informative)
I don't know how well it works but I can see the potential.
Re:Plugins? (Score:4, Informative)
That was the case last time I looked at GCJ about a year ago. I ended up being unable to use it because of lack of windowing toolkit support. Anyone know the status on all that?
Re:load times (Score:3, Informative)
Of course, Java tends to handle this differently: Java programs invoke each other inside the same runtime, which makes mixing Java and non-Java tools annoying.
Re:load times (Score:4, Informative)
I wrote a simple JVMPI method tracer. It's mind-blowing what all happens before your code is actually run. Here's a method trace I just ran with 1.4.2 for a simple program. [visi.com]
-Kevin
Re:Startup sure, but how fast does it run? (Score:5, Informative)
Of course, I don't see myself as a "Java programmer" or a "carpenter" or a "brick layer". I wouldn't take any pride in that. I have a degree in computer science...
To further extend your knowledge in computer science, look into the Internet tool called "Google". Using it can save you from ridiculing yourself by publicly posting uneducated statements:
Embedded Java [google.ca]
Re:Microsoft's IDEs? You have GOT to be kidding (Score:3, Informative)
Re:Total GCJ performance (Score:3, Informative)
You get all the advantages of native compilation and system-specific JIT compilation combined - at the expense of more complexity and lots more disk space.
As others have mentioned, M$ is attempting to do a similar thing with .NET. Whether they can do as good a job without charging AS/400 prices remains to be seen.
As wonderful as all this is, I still like direct bytecode execution because it minimizes memory use and startup - which is more important than CPU for many applications (business logic and embedded). Native compiled code is quite a bit bigger than bytecode, uses more memory, and takes longer to load (though not as long as JITing the bytecode).
Re:Startup sure, but how fast does it run? (Score:1, Informative)
You can't expect the average Slashdot reader to understand these things, though. It's much "cooler" to complain about Java speed than it is to actually learn something.
These kinds of discussions always occur whenever Java or IPv6 is the topic. I remember back when I was young we actually embraced new stuff, especially new stuff that is cool, and useful, and... well I've talked for too long.
GCJ performance is a myth. Benchmarks inside. (Score:5, Informative)
Often the JVM will out-perform GCJ by a factor of 3. Check out the numbers on this page [shudo.net].
I fail to see why people would want to run a GCJ-compiled version of a development tool at one third of the speed of the JVM, just to save a few seconds of startup time.
Re:Memory Usage vs Eclipse Running in JVM (Score:5, Informative)
Eclipse 2.1.1 with JVM
- Second start: 13 seconds
- Memory Eclipse: 80 MB
- Memory JVM: 65 MB
Eclipse 2.0.1 without JVM
- Second start: 9 seconds
- Memory: 96 MB
The download page seems to indicate you're downloading an Eclipse 2.1.0 version, but the about dialog says 2.0.1. Which one is it?
Cheers,
Thimo (back to coding in Eclipse 2.1.1)
Re:Why JVM? (Score:5, Informative)
Static types perhaps, but very dynamic when it comes to linking. Java has a lot of support for things like dynamic class loaders that lead to a very nice plug-in architecture, and extreme flexibility when it comes to deployment of code updates to a running application. Not to mention fun with diddling bytecodes on the fly.
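For example, loading a plug-in by name at runtime is only a few lines (a made-up sketch; the Plugin interface and class name are hypothetical):

interface Plugin {
    void start();
}

class PluginHost {
    public static void main(String[] args) throws Exception {
        // Load the plug-in class by name at runtime; there is no
        // link-time dependency on it, so it can be dropped in later.
        Class c = Class.forName("com.example.FooPlugin");
        Plugin p = (Plugin) c.newInstance();
        p.start();
    }
}

A statically linked native binary has to know all its code up front; with a class loader the set of classes stays open-ended.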
Re:Why JVM? (Score:3, Informative)
I don't think 'Dynamicity' is a word. But hey, enlighten me?
>>>>>>>>
It's not a word. But I'm referring to how dynamic the language is. In Java, dynamic method invocation is very limited --- just basic single-argument (receiver) polymorphism. Other languages have much greater dynamic capabilities. For example, in Dylan (chosen because its syntax is easier for most people to follow), I can do the following:
let vec = make(<vector>, size: 100);
// ... fill vec with objects of various classes ...
for (i from 0 below 100)
  do-foo(vec[i]);
end;
Now, this will go through the vector and call the appropriate version of do-foo for each item in the vector. The cool thing is that it doesn't matter what the types in the vector are, or how they're related. If they have a do-foo() defined for their class, the runtime will do the right thing automatically. In Java, to get the same effect, you'd have to check the type of each element, cast it to the right type, then call do-foo() for each type that could be in the vector.
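Something like this (the Foo/Bar classes and doFoo methods are made up for illustration):

class Dispatch {
    static class Foo { }
    static class Bar { }

    static void doFoo(Foo f) { System.out.println("Foo version"); }
    static void doFoo(Bar b) { System.out.println("Bar version"); }

    public static void main(String[] args) {
        Object[] vec = { new Foo(), new Bar() };
        // Java picks the overload at compile time, so we have to
        // test and cast each element by hand -- one branch per type.
        for (int i = 0; i < vec.length; i++) {
            Object o = vec[i];
            if (o instanceof Foo) {
                doFoo((Foo) o);
            } else if (o instanceof Bar) {
                doFoo((Bar) o);
            }
        }
    }
}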
Nup, not even. See 'instanceof' - which, although considered hackish among OOP elites, gives volumes compared to using void pointers in C. Then there's the whole polymorphism thing, but hey - C is procedural.
>>>>>>>>>
I said "almost" as inflexible as C. Certainly, its not any more flexible than C++. But either way, its not what I mean by flexible. Now, let's continue the previous example. The last Dylan example would run rather slowly, about at the speed the "giant switch on types" Java version would run. Now, if you don't need to store multiple types in the same vector, you can simply use a limited type. Just change one line to:
let vec = make(limited(<vector>, of: <integer>));
Now, as long as there is a do-foo() method defined for integers, the compiler will automatically specialize <vector> (think templates in C++) for the type, detect it can now statically dispatch the do-foo() method (because the argument will always be an <integer>) and most likely inline the do-foo() method into the loop. As a result, the loop will benchmark within spitting distance of C++ using the vector<int> template.
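If you want a rough Java analogue of that (my sketch, not the poster's example): pin the element type down to int and the call is statically bound, so any decent compiler can inline it:

class StaticDispatch {
    static int doFoo(int n) { return n * 2; }

    public static void main(String[] args) {
        int[] vec = new int[100];
        // The element type is fixed at compile time, so the call
        // below is statically dispatched and easy to inline into
        // the loop.
        for (int i = 0; i < vec.length; i++) {
            vec[i] = doFoo(i);
        }
        System.out.println(vec[99]);
    }
}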
Someone correct me if I'm wrong, but AFAIK the JVM acts as a sandbox for Java applications/applets, stopping those which don't have the necessary permissions for privileged operations. This adds volumes to safety.
>>>>>>>>>
It only needs to do security checks because it is a platform as well as a language. In a natively compiled language, those security checks would be handled by the underlying OS. The main thing the underlying OS can't do is memory checking, which is why the JVM does bounds checks and whatnot. But compiler technology has advanced far enough that you don't need something like the JVM to look at each memory access before allowing or disallowing it. Lots of safe languages (again, Lisp, Dylan, even C in the case of the SAFEcode project) are compiled to native code, with the compiler emitting only a few bounds checks here and there.
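For example, a compiler only has to prove a loop index stays in range to drop the checks entirely (a trivial Java sketch of my own):

class Bounds {
    public static void main(String[] args) {
        int[] a = new int[1000];
        int sum = 0;
        // i is provably inside [0, a.length), so a smart compiler can
        // eliminate the per-access bounds check instead of testing
        // every a[i] at runtime.
        for (int i = 0; i < a.length; i++) {
            sum += a[i];
        }
        System.out.println(sum);
    }
}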
You also forgot the biggie: portability. C and C++ are portable to a degree, but require recompilation.
>>>>>>>
Lots of languages are natively compiled and fully portable at the same time. The requirement for portability isn't running on a JVM, but preventing platform-specific pointer manipulation, as well as specifying sizes for various objects. Again, there are lots of other languages that do this! Heck, even C++ is pretty good at this, as long as you avoid "implementation specific" operations (which are clearly marked as such). The only reason that Java seems more portable is the giant standard class library. You can get largely the same result by using Qt and some well chosen libraries like ACE.
If you want speed, get Linux and gcj it (never actually tried gcj, trolls - sue me if it sucks).
Re:Startup sure, but how fast does it run? (Score:3, Informative)
class bar(object):
    def foo(self, obj):
        obj.register(self)
bar.foo() is completely generic. It can register itself with any object, as long as that object supports the register() method. In Java, you'd have to define an interface and have all possible objects you're interested in registering with implement that interface.
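Roughly like this (interface and class names made up):

interface Registrable {
    void register(Object listener);
}

class Target implements Registrable {
    public void register(Object listener) {
        System.out.println("registered " + listener);
    }
}

class Bar {
    // In Python, any object with a register() method would do; in
    // Java the parameter must be declared with the interface type,
    // and every target class must opt in to implementing it.
    void foo(Registrable obj) {
        obj.register(this);
    }

    public static void main(String[] args) {
        new Bar().foo(new Target());
    }
}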
Object oriented languages have similar capabilities, but they do *not* all offer the same abstraction capabilities. Java is a lowest-common-denominator OO language. It's got classes, single inheritance, single-parameter polymorphism, and that's about it. It certainly doesn't have the abstraction capabilities of C++ (with templates), much less something even higher level like Python.
Re:GCJ performance is a myth. Benchmarks inside. (Score:3, Informative)
They don't have figures for memory usage, installation profile, etc., and I bet in those areas GCJ beats Hotspot for end-users. And you can't beat an install with no library dependencies.
Re:Startup sure, but how fast does it run? (Score:2, Informative)
Fast. Was:Startup sure, but how fast does it run? (Score:3, Informative)
BitTorrent url (Score:2, Informative)
Re:load times *do* matter in the real world (tm) (Score:4, Informative)
jit is not a slowdown... (Score:3, Informative)
Just so you all know: a good JIT/hotspot compiler can make things quite a bit faster than any static, build-time compiler, because at runtime certain optimizations are possible that simply cannot be done at build time. These optimizations are typically very aggressive, to the extent that later in execution it may turn out that those compiled snippets have to be thrown away and recompiled. They usually target only the hotspots, i.e. the portions of code executed inside tight loops, which is usually where the savings are to be had anyway. The speed of the rest of the code is pretty much governed by the programmer and his choice of algorithms and data structures.
Good examples of such optimizations would be inlining a virtual method, turning a virtual method recursion into a loop with some unrolling, or inlining a call through a function pointer (which is basically the same as a virtual method call).
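For example (my own made-up Java sketch, not a benchmark):

class Shape {
    double area() { return 0.0; }
}

class Circle extends Shape {
    double r = 2.0;
    double area() { return 3.14159 * r * r; }
}

class Hotspot {
    public static void main(String[] args) {
        Shape s = new Circle();
        double total = 0.0;
        // area() is a virtual call. A build-time compiler must leave
        // the dispatch in, since any subclass might override it. A JIT
        // that sees only Circle loaded at runtime can devirtualize and
        // inline the body into the loop, deoptimizing later if another
        // subclass ever shows up.
        for (int i = 0; i < 1000000; i++) {
            total += s.area();
        }
        System.out.println(total);
    }
}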
Mind you, of course a hotspot compiler could also be implemented for a C/C++ runtime environment, but I haven't heard of anyone actually taking that path.
(for the record, I'm a C programmer and rarely write much Java)
Re:Isn't that how .NET languages like C# work? (Score:2, Informative)
"If what you said is true, we wouldn't be able to copy an executeable from one computer to another. It would have to be installed."
The native image is cached, it doesn't overwrite the