Performance Benchmarks of Nine Languages 954
ikewillis writes "OSnews compares the relative performance of nine languages and variants on Windows: Java 1.3.1, Java 1.4.2, C compiled with gcc 3.3.1, Python 2.3.2, Python compiled with Psyco 1.1.1, Visual Basic, Visual C#, Visual C++, and Visual J#. His conclusion was that Visual C++ was the winner, but in most of the benchmarks Java 1.4 performed on par with native code, even surpassing gcc 3.3.1's performance. I conducted my own tests pitting Java 1.4 against gcc 3.3 and icc 8.0 using his benchmark code, and found Java to perform significantly worse than C on Linux/Athlon."
Java Performing worse than C (Score:5, Insightful)
Why is this a surprise? C has been the most commonly used language for so long because of its speed and efficiency. I think anyone who has done much work developing or running large-scale Java programs knows that speed can definitely be an issue.
Under Windows... (Score:3, Insightful)
Re:Trig functions... (Score:3, Insightful)
This is (once again) proof that Java is not slow; in fact, it's really fast. It's slow to start and, yes, consumes more memory than native code, but the gained security, stability and ease of programming (reduced development times) are worth the increased memory use.
Also, the memory use should be addressed by project Barcelona (I believe these will be available in the forthcoming J2SDK 1.5, along with generics, enums, etc).
Re:Trig functions... (Score:5, Insightful)
Don't forget that it is also perceived as slow, since just about any desktop application anyone has seen written in Java has a sluggish GUI.
Yeah, I know Java's strengths aren't in the Desktop arena, they're in development and the back-end.
Re:Java Performing worse than C (Score:5, Insightful)
All that matters to anti-Java zealots is speed. The list of benefits coming from using Java is too long to take the speed-only view seriously.
this is just so bogus (Score:5, Insightful)
Where is Fortran? (Score:3, Insightful)
It's a pity that the present-day language of choice for high-performance computing, Fortran 90/95/HPF, was not covered in this study. There has been anecdotal evidence that C++ has approached Fortran, performance-wise, in recent years, but I've yet to see a proper comparison of the two languages.
About the Python performance (Score:5, Insightful)
What about development ease... (Score:2, Insightful)
IMO a program should use whatever tools are available and appropriate for the job, and not just worry about what is faster.
Re:What about coder's performance? (Score:3, Insightful)
I like Python; it is easy to write, and easy to keep somewhat clean.
Read the OSNews thread (Score:5, Insightful)
Namely:
- They only test a highly specific case of small numeric loops that is pretty much the best-case scenario for a JIT compiler.
- They don't test anything higher level, like method calls, object allocation, etc.
Concluding "oh, Java is as fast as C++" from these benchmarks would be unwise. You could conclude that Java is as fast as C++ for short numeric loops, of course, but that would be a different bag of cats entirely.
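To make the parent's point concrete, here is a minimal sketch (class and method names are my own invention, not from the article) of the kind of higher-level microbenchmark the article omits: timing short-lived object allocation and method calls instead of a bare numeric loop.

```java
// Sketch of a higher-level microbenchmark: object allocation and
// method dispatch on every iteration, rather than a tight numeric loop.
// All names here are illustrative, not from the article's benchmark.
public class HigherLevelBench {
    static class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
        double norm() { return Math.sqrt(x * x + y * y); }
    }

    public static void main(String[] args) {
        final int N = 1000000;
        long start = System.nanoTime();
        double sum = 0.0;
        for (int i = 0; i < N; i++) {
            // Allocates a short-lived object and makes a method call
            // per iteration -- exactly what the numeric loops avoid.
            sum += new Point(i, i + 1).norm();
        }
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        System.out.println("sum=" + sum + " elapsed_ms=" + elapsedMs);
    }
}
```

Note that a modern HotSpot VM may elide some of these allocations via escape analysis, so treat any numbers it prints as illustrative only.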
Re:Wow (Score:4, Insightful)
-t
Not testing languages (Score:4, Insightful)
wrong questions (Score:4, Insightful)
So, yes, you can construct programs, even some useful compute intensive programs, that perform as well or better on Java than they do in C. But that still doesn't make Java suitable for high-performance computing or building efficient software.
Benchmarks like the one published by OSnews don't test for these limitations. Microbenchmarks like those are still useful: if a language doesn't do well on them, that tells you that it is unsuitable for certain work; for example, based on those microbenchmarks alone, Python is unlikely to be a good language for Fortran-style numerical computing. But those kinds of microbenchmarks are so limited that they give you no guarantees that an implementation is going to be suitable for any real-world programming even if the implementation performs well on all the microbenchmarks.
I suggest you go through the following exercise: write a complex number class, then write an FFT using that complex number class, "void fft(Complex array[])", and then benchmark the resulting code. C, C++, and C# all will perform reasonably well. In Java, on the other hand, you will have to perform memory allocations for every complex number you generate during the computation.
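A minimal sketch of why (the Complex class below is my own illustration, not code from the benchmark): with no user-defined value types in Java, every arithmetic result is a fresh heap object.

```java
// Minimal immutable Complex class. In Java, every arithmetic result is
// a new heap allocation, unlike a C struct or a C++ value class that
// can live on the stack or inside an array.
public class ComplexDemo {
    static final class Complex {
        final double re, im;
        Complex(double re, double im) { this.re = re; this.im = im; }
        Complex add(Complex o) { return new Complex(re + o.re, im + o.im); }
        Complex mul(Complex o) {
            return new Complex(re * o.re - im * o.im, re * o.im + im * o.re);
        }
    }

    public static void main(String[] args) {
        Complex a = new Complex(1, 2);
        Complex b = new Complex(3, 4);
        // Each of these calls allocates a new Complex on the heap; an FFT
        // butterfly does this for every twiddle-factor multiply and add.
        Complex sum = a.add(b);
        Complex prod = a.mul(b);
        System.out.println(sum.re + "+" + sum.im + "i, " + prod.re + "+" + prod.im + "i");
    }
}
```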
Re:Trig functions... (Score:5, Insightful)
It's in many ways unfortunate that with JDK 1.2 (Swing) and onwards, Sun pretty much dumped fast native support for GUI rendering. It has its benefits -- full control, easier portability -- but the fact is that simple GUI apps felt faster with 1.1 than they have ever since. This is, alas, especially noticeable on X Windows, perhaps because the whole window is often rendered as one big component, as opposed to normal X app components (in the latter case, X can optimize clipping better).
Years ago (in late 90s, 97 or 98), I wrote a full VT-52/100/102/220 terminal emulator with telnet handling (plus for fun plugged in a 3rd party then-open SSH implementation). After optimizing display buffer handling, it was pretty much on par with regular xterm, on P100 (Red hat whatever, 5.2?), as in felt about as fast, and had as extensive vt-emulation (checked with vttest). Back then I wrote the thing mostly to show it can be done, as all telnet clients written in Java back then were horribly naive, doing full screen redraw and other flicker-inducing stupidities... and contributed to the perception that Java is and will be slow. I thought it had more to do with programmers not optimizing things that need to be optimized.
It's been a while since then; last I tried it was on JDK 1.4.2... and it still doesn't FEEL as fast, even though technically speaking all the Java code parts ARE much faster (1.1 didn't have any JIT compiler; HotSpot, as tests show, is rather impressive at optimizing). It's getting closer, but then again, my machine has almost an order of magnitude more computing power now, as probably does the gfx card.
To top off the problems, the Linux implementation has in general received much less attention than the Windows version (or the Solaris one, but Solaris is at least done by the same company). :-/
Less simple benchmarks (Score:5, Insightful)
The optimisers in Sun's Java VM work on run-time profiling: they identify the most frequently run sections of code and apply the more elaborate optimisation steps to those segments alone.
Benchmarks that consist of one small loop will do very well under this scheme, as the critical loop will get all of the optimisation effort, but I suspect that in programs where the CPU time is more distributed over many code sections, this scheme will perform less well.
C doesn't have the benefit of this run-time profiling to aid in optimising critical sections, but it can more afford to apply its optimisations across the entire codebase.
I'd be interested to see the results of a benchmark of code where CPU time is more evenly distributed.
Poor benchmarks (Score:2, Insightful)
Re:Under Windows... (Score:5, Insightful)
performance depends on the application (Score:2, Insightful)
I've seen examples of gcc in a Cygwin shell kicking Visual C++'s ass at load times of huge image data on a Wintel box. I've also seen Java (JDK 1.3) annihilate native C code in console apps calculating complex mathematical formulas on a Linux box. This goes for both AMD and Intel chips.
Moral of the story? These languages are all suited to specific uses. Analyze your tasks, your platforms, and your compilers. Learn how to use optimizations properly. Evaluate your need for portability. Do a few tests for performance in different languages and compilers to see which one actually is fastest for your current application.
There is no single "fastest" language.
Re:Speed? No. Stability. Yes (Score:4, Insightful)
But what about type safety? Java has no generic typed containers, like the STL. This means you tend to find errors at runtime instead of at compile time.
I need to know that my code is as safe as possible. I don't want a user to find a bug because my hand tests didn't get 100% code coverage every time.
And how about predictable performance? I would much rather know that a function will take 200ms all of the time than 100ms most of the time and 10s occasionally due to garbage collection.
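The type-safety complaint above can be seen in a small pre-generics sketch (my own example, using the raw collections that were all Java offered before J2SDK 1.5): the wrong element type compiles without complaint and only fails at runtime.

```java
import java.util.ArrayList;
import java.util.List;

// Pre-1.5 Java collections hold Object, so a wrong element type is only
// caught at runtime, when the cast fails -- not at compile time as with
// a C++ STL container such as vector<string>.
public class RawContainerDemo {
    public static void main(String[] args) {
        List names = new ArrayList();     // raw list: anything goes in
        names.add("Alice");
        names.add(Integer.valueOf(42));   // compiles fine -- the bug is silent

        try {
            for (int i = 0; i < names.size(); i++) {
                String s = (String) names.get(i);  // blows up on element 1
                System.out.println(s);
            }
        } catch (ClassCastException e) {
            System.out.println("caught at runtime: " + e);
        }
    }
}
```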
Re:java vs C (Score:2, Insightful)
For general business processing applications and most web applications, efficiency is less of a concern and cost/time-to-market/maintainability/security are bigger.
I like these benchmarks but would like to see ones that also benchmark the other characteristics of languages (such as lines of code to do a common task, number of tests that need to be performed to validate common functions, memory space, etc. etc.)
Language performance arguments miss the point (Score:5, Insightful)
Then, with more and more languages, especially ones with VMs, you get further and further away from the hardware. The end result: you lose performance. It does more and more for you, but at the expense of real optimizations, the kind that only you can do.
Now the zealots will come out and say, "Language X is better than language Y, see!" To me this argument is boring. I tend to use the appropriate tool for the job. So:
Yes, my teams use many languages, but they also put their effort to where they get the biggest bang for the buck. And in any business approach, that's the key goal. You don't see carpenters use saws to hammer in nails or drive screws. Wise up!
Re:They should benchmark development time (Score:5, Insightful)
The advantages over Java are even increased 6 months down the road. Python code is much more readable and maintainable, hence easier to extend. Dynamically typed object model scales incredibly well.
I used to think the same about Perl vs Java, until I started looking at frameworks like Cocoon and they're all written in Java.
Comparing Perl to Java is foolish, Perl is more like Awk than a general purpose programming language, and not meant for large projects at all.
Re:They should benchmark development time (Score:2, Insightful)
If the big concern is speed, why not go to the basics and code in assembly (or machine code if you're crazy). Implement some algorithms that will give you a desired precision and that will use the memory that is within the resources. Heck, just use up all your memory resources and create a huge lookup table...
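The lookup-table idea can be sketched like this (Java used for illustration; the table size and function names are arbitrary choices of mine): precompute sine values once and trade memory for per-call speed, at the cost of bounded accuracy.

```java
// Sketch of the lookup-table idea: precompute a table of sine values
// and answer later calls by table lookup. Accuracy is bounded by the
// table resolution (about 2*pi/SIZE radians between samples).
public class SineTable {
    static final int SIZE = 65536;
    static final double[] TABLE = new double[SIZE];
    static {
        for (int i = 0; i < SIZE; i++) {
            TABLE[i] = Math.sin(2 * Math.PI * i / SIZE);
        }
    }

    // Approximate sin(x) by nearest-entry lookup.
    static double fastSin(double x) {
        int idx = (int) Math.round(x / (2 * Math.PI) * SIZE) % SIZE;
        if (idx < 0) idx += SIZE;   // wrap negative angles into the table
        return TABLE[idx];
    }

    public static void main(String[] args) {
        System.out.println(fastSin(1.0) + " vs " + Math.sin(1.0));
    }
}
```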
-I do not move. The world moves around me.-
Re:Trig functions... (Score:5, Insightful)
Totally missing parts (Score:2, Insightful)
It's pretty stupid to run benchmarks for a language in a non-native environment, as was done for the Python marks.
Yet again OS News publishes a completely meaningless story.
Re:Language performance arguments miss the point (Score:2, Insightful)
Windows a good choice for this test (Score:3, Insightful)
These kind of benchmarks are so 1970s (Score:4, Insightful)
Here's a bombshell: if you have a nice language, and that language doesn't have any hugely glaring drawbacks (such as simple benchmarks filling up hundreds of megabytes of memory), then don't worry about speed. From past experience, I've found it's usually easy to start with what someone considers to be a fast C or C++ program. Then I write a naive version in Python or another language I like. And guess what? My version will be 100x slower. Sometimes this is irrelevant. 100x slower than a couple of microseconds doesn't matter. Other times it does matter. But it usually isn't important to be anywhere near as fast as C, just to speed up the simpler, cleaner Python version by 2-20x. This can usually be done by fiddling around a bit, using a little finesse, trying different approaches. It's all very easy to do, and one of the great secrets is that high-level optimization is a lot of fun and more rewarding than assembly level optimization, because the rewards are so much greater.
This is mostly undiscovered territory, but I found one interesting link [dadgum.com].
Note that I'm not talking about diddly high-level tasks in a language like Python, but even things like image processing. It doesn't matter. Sticking to C and C++ for performance reasons, even though you know there are better languages out there, is a backward way of thinking.
Re:They should benchmark development time (Score:3, Insightful)
I don't know much about Python, and I'll give it a go when I get a chance, but it's really hard to take your comments seriously when you call Python a "Silver Bullet" in your sig.
Re:What about coder's performance? (Score:5, Insightful)
Performance realities do not go away, no matter how much we may wish they would. Now, does that mean you're going to go write major portions of your web application in assembly to speed it up? No, probably not. But your database vendor may very well use some tricks like that to speed up the key parts of their database. You sink or swim by your database, so don't say it doesn't matter because it absolutely does.
Anyway, in my day-to-day operations, I can think of quite a few things that get compiled directly to executable code even though they don't have to be. Why would you do this if performance wasn't an issue and we could just throw more hardware at it?
1. Regular expressions in the .NET Framework.
2. XSL transformations in the .NET Framework.
3. The XmlSerializer class creates a special compiled executable specifically created to serialize objects into XML (byte code!).
And the list just goes on, and all of this eventually ends up getting JITed as well. My pages are 100% XML based, go through many transformation steps to get to where they need to be, and on average render in about 70-100ms (depending upon the number of database calls I need to make and the size of the data). This all happens without spiking our CPU utilization to extreme levels. There is *NO WAY* I could've done this on our hardware if nobody cared about performance.
As always, a good design is the most important factor. But a good design that performs well will always be superior to one that doesn't.
Bryan
Re:Sitting on a Benchmark (Score:3, Insightful)
Our beloved Penguins can swim quite well under Linux^H^H^H^H^Hwater, thank you!
Re:Trig functions... (Score:2, Insightful)
On a realer note, the JVMs are written in C, a fact that some people just don't seem to understand. A Java program, when running, is a form of a C program. Thus, there is no reason to have slower math functions except that the JVM was poorly written.
The whole comparison of non-graphic Java to C or C++ is moot as C or C++ is the basis of all JVMs I know of.
G++? (Score:3, Insightful)
Re:Trig functions... (Score:2, Insightful)
What's your point? One could argue that any program, when running, is a form of machine code, and thus should run as fast as it possibly can. (Which is true, of course!)
Re:Language performance arguments miss the point (Score:2, Insightful)
I don't really agree with this; look at it from a high-level API stance for starters. I'd much rather write some DirectX or OpenGL than write the assembly code necessary to cover my bases with all the 3D hardware out there - with no guarantee my 3D code would work on future hardware. The good old days of calling a BIOS interrupt to put your display into mode 13h and writing directly to the video memory at (320*y)+x are dead and buried, unfortunately (I'll admit those were fun times).
The above is a somewhat extreme example of how low-level code can be not only inefficient (unless you're *seriously* hard-core) but utterly pointless, due to the inordinate amount of time it would take to write said code. It's an extreme example, but it translates almost directly to today's processors, complicated beasts that they are. Look at the things a good optimising compiler will just do for you completely transparently, like branch prediction, MMX, 3DNow!, and a host of others (recently, and notably, hyperthreading for instance)...
If you think you can do a better job of writing low-level code than these compilers can do of optimising your high-level, you're either still living in the early 90s, or you're one hell of a programmer.
Re:Java as fast as c++????? (Score:3, Insightful)
Re:Under Windows... (Score:4, Insightful)
The review article is /.ed now, but from the test names on the summary table it looks like the tests are indeed mostly numeric. Unfortunately, only a small minority of people make their living writing number crunching code.
For the vast majority of business and web-based apps, the bulk of operations involves string manipulation. If an app is compute intensive and not I/O or GUI bound, then the bottleneck is usually creating, modifying and destroying strings. Benchmarks on string handling would be more useful to most developers.
However, doing string manipulation benchmarks isn't so simple. There are at least four approaches to strings, and some languages let you pick any of these:
-- dangerous and very fast: using static buffers and in-place modifications like old-school C
-- somewhat safer and may be fast: using semiautomatic memory management with mutable strings, like C++/STL or C with glib's g_string
-- safer still: using totally automatic memory management with mutable strings, like Ruby or (IIRC) Perl
-- safest: using totally automatic memory management with immutable strings, like Java or Python
Of course, for each problem the algorithms would need to be structured differently to get the maximum possible speed in each of the above four methodologies.
Basically, for string-intensive code, claiming that Java is just as fast as C will always be false if you compare C code written in the first, dangerous style against Java, which is always written in the fourth, safest style. No matter what technical tricks the VM writers come up with, there is just no way they'll be able to match C's ability to perform essentially zero-overhead in-place buffer operations over and over in the same spot, on a buffer that stays loaded in the L1 cache. (Actually, you probably could write Java code that operates on raw character arrays, and it might approach the speed of C. But that would probably look even uglier than the C code.)
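That parenthetical about raw character arrays might look like the sketch below (my own illustration; method names are invented). The char[] version allocates nothing inside the loop, while the immutable-String version creates a new String object on every modification.

```java
// Contrasts in-place char[] manipulation (zero allocations per step)
// with the immutable-String style (a new String object per step).
public class InPlaceChars {
    // In-place uppercase: no allocations inside the loop.
    static void upperInPlace(char[] buf) {
        for (int i = 0; i < buf.length; i++) {
            buf[i] = Character.toUpperCase(buf[i]);
        }
    }

    // Immutable-String style: each concatenation allocates a new String.
    static String upperImmutable(String s) {
        String out = "";
        for (int i = 0; i < s.length(); i++) {
            out = out + Character.toUpperCase(s.charAt(i)); // new String each time
        }
        return out;
    }

    public static void main(String[] args) {
        char[] buf = "hello, world".toCharArray();
        upperInPlace(buf);
        System.out.println(new String(buf));
        System.out.println(upperImmutable("hello, world"));
    }
}
```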
In the few cases where I've ported a string-intensive high-level-language algorithm to raw low-level C code with few or no mallocs (not a trivial task), I've gotten at least a 10X speedup on the CPU-bound tasks and at least 10X less memory usage. (Note that I did those tests largely out of curiosity. For most applications, even a 10X speedup is rarely worth the increased development time, bug vulnerabilities or maintenance issues. My opinion is that if you have to write code like this, you should confine it to a C extension library for a high-level language like Python.)
I've found that STL can be faster or slower than Java, depending on how smart you are. It's very easy to inadvertently get C++ to thrash around with needless automatic data copying.
Languages like Perl and Python can be very competitive on string operations if you know how to use their libraries. By using the most powerful operations that work on the largest chunks of data at one time (Python's re.findall(), for example), you take advantage of the fact that the library call is mostly written in C. Bit-banging in a dynamic interpreted language is usually dog slow, as the Python numbers seem to show on the summary chart.
To sum it up, most people write apps whose performance can't be predicted by a few simple language benchmarks, because the way the app is written can affect the performance more than the language it's written in.
Re:These kind of benchmarks are so 1970s (Score:3, Insightful)
Today, everything is in script because it's not worth the bother anymore. In 1998 I had to write my own affine transformation code in C to get a GUI to work at anywhere near real-time. Today I can run a planetarium simulator (read LOTS of calculations) at an acceptable speed in just script.
Re:They should benchmark development time (Score:2, Insightful)
Ummm.. Slashdot is written in Perl, as are many other large projects. I've yet to see anything like Slashdot written in Awk.
Re:Trig functions... (Score:3, Insightful)
So in a benchmark comparing compiler performance I can't see how that is "moot".
Java benchmarks are flawed. (Score:3, Insightful)
2) Java's I/O functions work on UTF-8 or another system-dependent character set, so in essence Java is doing twice the amount of work during the I/O benchmark.
I'm sure other people will comment as well, but overall these numbers are not that surprising for code that was just copied and pasted from C. Why do people expect that ANY language will perform well using another language's code?
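The extra decoding work mentioned in point 2 is easy to observe directly (a small illustration of mine, assuming UTF-8 as the external encoding): Java does not hand you raw bytes as text, it decodes them into UTF-16 chars first.

```java
import java.nio.charset.StandardCharsets;

// Shows the decode step Java text I/O performs: bytes from the outside
// world (here UTF-8) are converted into UTF-16 chars, work that a C
// program reading raw bytes with fread() simply skips.
public class DecodeDemo {
    public static void main(String[] args) {
        String text = "na\u00efve";                       // "naive" with i-umlaut: 5 chars
        byte[] utf8 = text.getBytes(StandardCharsets.UTF_8);      // encode: 6 bytes
        String decoded = new String(utf8, StandardCharsets.UTF_8); // decode step
        System.out.println(utf8.length + " bytes -> " + decoded.length() + " chars");
    }
}
```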
Cost of Hardware vs. Cost of wetware (Score:5, Insightful)
You raise excellent points. For many enterprise and server applications, performance is an issue. But I never said one should care nothing about performance, only that in many applications the cost of the coder also impacts financial results.
For the price of one software engineer for a year (call it 50k to 100k burdened labor rate), I can buy between 20 and 100 new PCs (at $1000 to $3000 each). If the programmer is more expensive or the machines are less expensive, the balance tips even further in favor of worrying about coder performance.
The trade-off between the hardware cost of the code and the wetware cost of the coder is not obvious in every case. A small firm that can double its server capacity for less than the price of a coder, or the creator of an infrequently used application, may not need high performance. On the other hand, a large software vendor that sells performance-critical apps might worry more about speed. My only point is that ignoring the cost of the coder is wrong.
These different languages create a choice of whether to throw more hardware at a problem or throw more coders at the problem.
Re:They should benchmark development time (Score:3, Insightful)
Do you base this assertion on actual experience, or just a hunch that "it surely must be so"? If both languages are used to solve the same problem, the Python program is much more concise. It's not physically possible to create the Java program as quickly, given the same typing speed. Not to mention the difference in semantic complexity, which determines how fast you can churn out that code (assuming nonzero brain latency).
I guess people who have never tried dynamic typing can't comprehend how much faster development can be using it.
Not a fair test - Frame Pointers (Score:4, Insightful)
Re:trig calls in gcc (Score:4, Insightful)
Consider the logic... (Score:4, Insightful)
Guido van Rossum noted in an interview [artima.com] the following statistic, and I think it bears considerably on appropriateness:
So then, unless you quantify the types of apps you build, the team you use, and the results that are expected, my experience has shown me that most of the time, for business apps, it's overkill. Now, if you're in a dev team at a software company, well then, I could consider the other side.
Coming in 1.5, but you can do this now (Score:5, Insightful)
Reminds me of... (Score:2, Insightful)
Reminds me of my 6th grade 'science fair' project.
I took a couple different compilers, languages, did some loops and math and such, timed them all.
"Which computer language is the fastest"
About half way through the project I realized how big of a waste of time it was.
What kinds of things should you be testing?
Speeds of function calls???
Implement various sorting algorithms?
Audio/Video compression/decompression?
When it comes down to it, it's all the same math, and any good compiler is going to come close to making the same darn code.
By now, we all know that you use one language for one thing, and another language for another. For various reasons.
If the only tool you have is a hammer, every problem looks like a nail; isn't that the cliche?
Re:What about coder's performance? (Score:2, Insightful)
I did my own productivity benchmarks between C++ and Smalltalk in 1996. I consider myself very adept at both languages. At the time I was coding C++ CORBA internals that had to function across 10 platforms (gawd, what a pain). I was also involved in a Smalltalk ORB project.
My productivity benchmark was completing foundation frameworks for a financial trading package. The time required to complete equal functionality in C++ was 10x what it took in Smalltalk. This agreed with similar published claims about language productivity.
It's important to consider the context in which the program will operate. This drives the requirements a solution will need to fulfill, and in turn, may influence the choice of environment, frameworks, libraries, and language.
There are cases where the speed of delivering accurate, new functionality is paramount. In these cases, I wouldn't want to be using C++.
Re:Trig functions... (Score:4, Insightful)
Elsewhere it sucks: MacOS, GTK, Photon, Motif. Even poorly written Swing programs outperform on those platforms.
But back to your FUD. Yes, bad programmers make ugly and poor-performing GUI code. Swing is no different in that regard. But have you looked at recent Swing programs on the 1.4.2 version of the JDK? Tried stuff like CleverCactus (a mail client)? Synced your MP3s on a new Rio? Used Yahoo's Site Builder to make a web site? There are excellent Swing programs [sun.com] out there. Many you probably don't realize are Java Swing apps!
But since SWT is only in early adopter land we haven't seen the real dogs of GUIs it can make yet, especially since you have to do such arcane and ancient tasks in SWT as managing your own event queue!
.NET benchmark flawed. It is faster. (Score:3, Insightful)
using System;

class Program
{
    static double Benchmark(int numberOfIterations)
    {
        double s = 0.0;
        double t = 1.0;
        for (int i = 1; i < numberOfIterations; ++i)
        {
            s += 1.0 / t;
            t += 1.0;
        }
        return s;
    }

    static void Main()
    {
        // Warm-up call: lets the JIT compile Benchmark before any timing.
        Benchmark(1);

        for (int i = 1; i < 11; ++i)
        {
            DateTime start = DateTime.Now;
            double outcome = Benchmark(i * 1000000);
            TimeSpan elapsed = DateTime.Now - start;
            Console.WriteLine("{0} {1} {2}", i, outcome, elapsed.TotalSeconds);
        }
    }
}
As you can see above, I run the benchmark function once with a count of 1, ignoring its result, before starting to measure time. The key is to allow the JIT compiler to compile the benchmark function before the actual benchmark runs. Once that is done, I run the benchmark 10 times with successively larger counts, from 1 million to 10 million iterations, and print the number of iterations (in millions), the result, and the time each run takes. The idea is that, under the assumption that the benchmark time is a linear function of the number of iterations, I can easily find the linear best fit between iteration count and run time in the form of
time = a * number_of_cycles + b
and then use the value of a as the benchmark measurement. The value of b is a good sanity check on how the benchmark behaves: if it is large, something went wrong. In my case it was always close to zero. I'm away from my home computer right now and don't have all the compilers that were tested in this article, so I can't repeat those benchmarks with this method at the moment, but you guys might try it yourselves.
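The best-fit step described above is ordinary least squares; a sketch (variable names are mine) that recovers a (per-iteration cost) and b (fixed overhead) from a set of (iterations, time) measurements:

```java
// Ordinary least-squares fit of time = a * n + b, as described above:
// a is the per-iteration cost, and b should come out near zero if the
// benchmark behaves linearly.
public class LinearFit {
    // Returns {a, b} minimizing the sum of squared residuals.
    static double[] fit(double[] n, double[] t) {
        int k = n.length;
        double sn = 0, st = 0, snn = 0, snt = 0;
        for (int i = 0; i < k; i++) {
            sn += n[i]; st += t[i]; snn += n[i] * n[i]; snt += n[i] * t[i];
        }
        double a = (k * snt - sn * st) / (k * snn - sn * sn);
        double b = (st - a * sn) / k;
        return new double[] { a, b };
    }

    public static void main(String[] args) {
        // Synthetic timings: exactly 2 ns per iteration plus 5 ms overhead.
        double[] n = { 1e6, 2e6, 3e6, 4e6, 5e6 };
        double[] t = new double[n.length];
        for (int i = 0; i < n.length; i++) t[i] = 2e-9 * n[i] + 0.005;
        double[] ab = fit(n, t);
        System.out.println("a=" + ab[0] + " b=" + ab[1]);
    }
}
```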
Some people might challenge this by stating that the compile time for the benchmark function should be counted as part of the measurement, but for any long-running program that one-time cost is negligible.
Best regards.
Delphi & Kylix (Score:3, Insightful)
Reader/Writer classes in java benchmark affect results (Score:2, Insightful)
Curiously, without the -server option, this resulted in a 78% performance HIT.
- Marty
Re:Trig functions... (Score:5, Insightful)
Faster processors should enable us to achieve more, not achieve the same old stuff much less efficiently.
wow, talk about a lame benchmark (Score:3, Insightful)
I mean, seriously, they do math on all the ints from one to one billion. Why even bother? Adding large 32-bit ints takes exactly the same amount of time as adding small ones (though I guess you save one variable, or one line of code, by doing the math with the counter).
I'm sorry, but this is the most pointless compiler benchmark ever.
A good language comparison would be to have a bunch of groups of people try to code up the best implementation they could in whatever language, of some complex problem, and use that as the baseline.
Re:Trig functions... (Score:3, Insightful)
Maybe, but most *users* will choose an SWT app over a Swing one anytime. Actually, most users will refuse to use a Swing app; they feel strange and look ugly.
The main reason for Eclipse's success (and the demise of the other free IDEs) is that only Eclipse offers a pleasant GUI, which no Swing-based IDE can.
Re:Trig functions... (Score:2, Insightful)
The reason for the slowdown between 1.3 and 1.4 is that 1.3 introduced a new class called StrictMath that provided better cross-platform consistency than the standard Math class. It was slower, though. In 1.4 the standard Math class was rewritten so that internally it uses the StrictMath class.
Other implementations do not have to use this approach.
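The Math/StrictMath relationship is easy to probe (a small sketch of mine; StrictMath follows fdlibm exactly for reproducibility, while Math is merely required to stay within 1 ulp of the correctly rounded result, so on VMs with hardware intrinsics the two may differ in the last ulp):

```java
// Compares Math.sin with StrictMath.sin over a few sample points.
// StrictMath reproduces fdlibm bit-for-bit across platforms; Math may
// use faster platform code, within the accuracy bounds in its spec.
public class MathVsStrict {
    public static void main(String[] args) {
        for (double x = 0.0; x < 3.0; x += 0.5) {
            double diff = Math.abs(Math.sin(x) - StrictMath.sin(x));
            System.out.println("x=" + x + " diff=" + diff);
        }
    }
}
```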
matfud
Re:Trig functions... (Score:3, Insightful)
Re:If speed isn't an issue (Score:3, Insightful)
Unofficially standard
Does it have pointers or not? (Yes it does; and yes, they are restricted; and the issues that causes are not overly severe.)
It has references, which aren't the same thing. C++ has references (e.g. char&) and pointers (e.g. char*). And yes, when I say it makes "pointers safer" I mean referencing. Whatever
Java is a FAR, FAR, FAR better ___LANGUAGE___ than C++ - ignore the VM, the libraries, etc... and look at the language - Java is way better.
I have. I've used Java and C++ in two rather large projects. And I really can't see why Java is that much better. Or, indeed, that much different.
I don't have to worry about pointers going astray (but then again, I don't have pointers, full stop), or garbage collection, but apart from that, what's the difference? Ignoring the libraries, that's the only thing I can think of that's different. Well, except that C++ has multiple inheritance, and so forth.
Perhaps I've missed something. Could you explain why it is "way better"? Remember to ignore the VM and the libraries and just focus on the language.
C++ would have been stillborn if it were not C-like; Java would have been stillborn if it were not C++-like.
Agreed. That doesn't make Java a good language, though; that's just a reason why it's bad.
That said, I've never really understood that argument. I mean, how long does it take to learn a new language? A few hours for an experienced programmer, really. Making Java C++-like may have saved a day or so of programming time on one project, but that's nothing compared to how long software takes to develop. On the other hand, companies rarely act in a logical fashion when it comes to software.
Re:They should benchmark development time (Score:2, Insightful)
Sorry, no it doesn't. You can't overload functions. You can't use true polymorphism. You can't enforce inheritance. You can't code to an interface or abstract class. You can't just look at a function definition and know what types of parameters it takes. You can't detect type-based errors until runtime. These are all the things that will slow you down in six months and beyond. I've been there, and that's why I will never go back to a dynamically typed language for anything over 2,000 lines.