The D Programming Language, Version 1.0
penguinblotter writes in a journal article: "Soon, Walter Bright is scheduled to release version 1.0 of the D Programming Language. D is a systems programming language. Its focus is on combining the power and high performance of C and C++ with the programmer productivity of modern languages like Ruby and Python. Special attention is given to the needs of quality assurance, documentation, management, portability and reliability. D has appeared on Slashdot a few times before, and Walter has continued to add more and more features. Most Slashdot community comments in those articles have been offered on feature X or spec Y without reading through the extensive D newsgroup archives. It is there that, over the past seven years, extremely gifted and experienced programmers have hashed out discussions and arrived at excellent implementations of all the ideas discussed." Read on for the rest of penguinblotter's writeup.
For those with a C/C++ background, D offers:
- native code speed
- extremely fast compilation times
- garbage collection (although you can manage your own memory if you want)
- OOP - by reference only, easy initialization, always virtual
- cleaner template metaprogramming syntax and more powerful templates
- built-in dynamic and associative arrays, array slicing
- versioning (no preprocessor madness)
- link-compatibility with C
- nested functions
- class delegates / function pointers
- module system
- similar syntax
- No virtual machine or interpreter
- built-in unit testing and design-by-contract
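Several of the bullets above can be seen in a few lines of code. This is a minimal sketch in present-day D syntax (D 1.0 spells strings as char[]); the names are invented for illustration. It touches built-in associative arrays, array slicing, nested functions, and unittest blocks:

```d
import std.stdio;

// Count distinct words using a built-in associative array
// and a nested function.
int distinctWords(string[] words)
{
    int[string] freq;                    // built-in associative array
    void bump(string w) { freq[w]++; }   // nested function sees freq
    foreach (w; words) bump(w);
    return cast(int) freq.length;
}

unittest                                 // built-in unit testing
{
    assert(distinctWords(["a", "b", "a"]) == 2);
}

void main()
{
    int[] a = [1, 2, 3, 4];
    int[] b = a[1 .. 3];                 // slicing: b is [2, 3]
    writeln(distinctWords(["to", "be", "or", "not", "to", "be"]), " ", b);
}
```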
From D's creator:
For me, it's hard to pinpoint any particular feature or two. It's the combination of features that makes the cake, not the sugar, flour or baking powder. So,
1. My programs come together faster and have fewer bugs.
2. Once written, the programs are easier to modify.
3. I can do (1) and (2) without giving up performance.
D is surprisingly good. (Score:3, Informative)
It's not a toy language. If you're a C++ programmer, you'll be almost immediately functional in the language. And you can call C and C++ libraries seamlessly. It's pretty sweet.
java native code compilation (Score:3, Informative)
Re:D is surprisingly good. (Score:5, Informative)
D does not provide an interface to C++. However, since D interfaces directly to C, it can call C++ code directly if that code is declared as having C linkage.
D class objects are incompatible with C++ class objects.
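Concretely, the C-linkage route looks like the sketch below. To keep it self-contained, the C-linkage function is defined in the same D file; in practice the definition would live in a C++ translation unit as `extern "C" int add_ints(int a, int b) { return a + b; }`, with only the `extern (C)` declaration on the D side. The function name is invented for illustration.

```d
// extern (C) gives a symbol the platform C ABI, so D and a
// C++ shim compiled with extern "C" can link against each other.
extern (C) int add_ints(int a, int b) { return a + b; }

void main()
{
    assert(add_ints(2, 3) == 5);
}
```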
Re:Weird writeup: (Score:3, Informative)
* native code speed
I think this is a response to criticisms from C programmers about most modern languages, rather than a benefit over C.
Not exactly a recommendation: it suggests the core language is apparently so weak that these can't be put into libraries.
Some of this is useful enough to be built in. The STL and the like are pretty handy, but sometimes they feel like a bit of a kludge. Plus, building these in allows better optimisations for specific cases.
Obviously both C and C++ have function pointers.
Yes, and the syntax is horrible. D makes this a lot nicer.
Re:D is surprisingly good. (Score:3, Informative)
Really? Not according to their FAQ. C yes. C++ no. Otherwise I would be in the process of switching over as we speak.
Re:Erm how is this better.. (Score:3, Informative)
Currently learning D (Score:5, Informative)
I took it upon myself to learn D not more than a few weeks ago. A classmate introduced me to the language last spring.
While I'm still learning D, it has some notable features:
Of course one may argue that none of this is necessary and could be made independent of the language itself. My belief is that would increase the complexity of coding in D.
If you're interested in D you should visit http://www.dsource.org/ [dsource.org]. There are some interesting projects such as Derelict [dsource.org] (collection of C game bindings) and Bud [dsource.org] (make and SCons replacement).
Re:java native code compilation (Score:5, Informative)
This is a bit of an old myth. Almost all Java is run as native code these days, even on VMs, and is mostly pretty close to C++ speed. Benchmarks that show Java as significantly slower than C++ usually result from not allowing the VM enough time to perform native code translation of time-critical code. Java has moved away from up-front JIT compilation (in contrast to HotSpot's later background optimisation) because it led to long start-up times: you had to wait for code to be compiled to native before it ran. Now Java usually starts up interpreted, with the translation to native code happening later on, in the background.
Where C, C++ and D win out over Java in terms of performance is when you need programs that have to start up fast, run fast, but only for short periods (a few seconds).
Lazy Questions (Score:3, Informative)
1. Must be able to disable garbage collection and manage allocation explicitly
2. Must be able to allocate classes on the stack
3. Must minimize use of exceptions in the standard library (in other words, exceptions must only be used for exceptional cases)
Java fails all of them, if I recall correctly (I don't know that much about Java, actually). C# fails 2 and 3. It looks like you can disable garbage collection in D, but in the comparison list I didn't see mention of 2 or 3. Does anybody know, off the top of their head?
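For what it's worth, D does appear to address (1) and (2). A sketch, assuming the D 2 spellings (`core.memory.GC`, `std.typecons.scoped`); D 1.0 wrote `std.gc.disable()` and `scope p = new Point(3, 4)` for the same two ideas:

```d
import core.memory : GC;
import std.typecons : scoped;

class Point
{
    int x, y;
    this(int x, int y) { this.x = x; this.y = y; }
}

void main()
{
    GC.disable();                    // (1): collector off, manage explicitly
    {
        auto p = scoped!Point(3, 4); // (2): object storage lives on the stack
        assert(p.x + p.y == 7);
    }                                // p is destroyed deterministically here
    GC.enable();
}
```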
Re:GC, No Vm or performance hit (Score:4, Informative)
Re:GC, No Vm or performance hit (Score:5, Informative)
The same way as countless other programming languages have in the past, I imagine. Why do you think garbage collection requires running your code under a VM?
Of course, you're overlooking all the overhead of monitoring the code long enough to determine which on-the-fly optimisations are worth performing, and of compiling the code itself, neither of which is trivial.
True, though of course it's not without overheads. Almost all of the Big Claims(TM) made by GC advocates in these discussions come with a catch: state-of-the-art GC method number 17 has a lower amortised cost of memory recovery than explicitly freeing it C-style!*
* But only if your system contains 10x as much memory as the program will ever need anyway.
This is traditionally followed by a wisecrack about how memory is cheap, followed by three enlightened posters pointing out the stupidity of that argument for multiple reasons. :-)
That depends a lot on context. If you really have a system where the overheads of GC are trivial but all the advantages are present, it seems a fair claim. It's just not likely to be universally true, and representing it as such would indeed be disingenuous.
Re:GC, No Vm or performance hit (Score:2, Informative)
Re:Erm how is this better.. (Score:3, Informative)
The big win in
Re:This won't work... (Score:5, Informative)
Either it'll be called 10, or H. G has already been taken, not only once [wikipedia.org], but twice [wikipedia.org].
For your reference (kudos to Wikipedia [wikipedia.org]), the following single-letter names (sometimes with additional non-alphabetic characters) have also been implemented:
A+ [wikipedia.org] A++ [wikipedia.org] B [wikipedia.org] C [wikipedia.org] C-- [wikipedia.org] C++ [wikipedia.org] C# [wikipedia.org] D [wikipedia.org] E [wikipedia.org] F [wikipedia.org] F# [wikipedia.org] G (now known as Deesel) [wikipedia.org] G [wikipedia.org] J [wikipedia.org] J# [wikipedia.org] J++ [wikipedia.org] K [wikipedia.org] L [wikipedia.org] M4 [wikipedia.org] Q [wikipedia.org] R [wikipedia.org] S [wikipedia.org] S2 [wikipedia.org] T [wikipedia.org] X10 [wikipedia.org]
So that only leaves you the letters H, I, N, O, P (sic!), U, V, W, Y and Z if you don't want a name clash with another programming language. Technically, M and X are followed by numbers in the previous examples, so you could argue for them as well, and even A (as it has a plus behind the letter).
I'm mostly surprised that no one has thought of a (P)rogramming language.
Re:This won't work... (Score:2, Informative)
Re:Because the ones we have suck? (Score:4, Informative)
The Squeak runtime for Smalltalk is written in Smalltalk. There is a smallish subset of Smalltalk used to write the basic functionality, which is compiled to native code. This then supports the whole language. The same model is, I believe, used for JNode, an operating system written in Java...
Re:This won't work... (Score:3, Informative)
Shouldn't it be called 'P'? (Score:2, Informative)
Re:Currently learning D (Score:2, Informative)
As for mixins, you can get the full scoop and some simple examples in the language spec, specifically the portion on Mixins [digitalmars.com].
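A small sketch of the template-mixin idea the spec describes (the names here are invented): a template's declarations get pasted into whatever scope mixes it in.

```d
// A template whose declarations can be pasted ("mixed in")
// into another scope.
template Counter()
{
    int count;
    void bump() { count++; }
}

struct Widget
{
    mixin Counter;   // Widget now has its own count and bump()
}

void main()
{
    Widget w;
    w.bump();
    w.bump();
    assert(w.count == 2);
}
```

Each mixing-in site gets its own copy of the declarations, which is what distinguishes this from ordinary inheritance.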
should be possible to write os in python (Score:2, Informative)
It might be possible to write an OS in those languages. MS is trying to do it in C#. The project name is Singularity [microsoft.com]. But I agree that an OS is not the target domain :)
The primary reason for stack class allocation... (Score:2, Informative)
Consider the case where you have a memory shortage (but interrupt stacks are preallocated, per interrupt), and therefore you cannot do an allocation, but want to run the current operation to completion.
Consider also the case where you might be using a zone allocator and cannot expand the zone of zones, because in order to do so you'd need to handle a trap at ring 0 in a trap handler (i.e. a user page fault followed by a kernel page fault).
The reason these things would not be allocable at interrupt/trap time is that the allocations may block, and if they do, you could effectively end up blocked with interrupts disabled and no way to get back from it.
The alternative is that you would have to fail the request and back your state all the way out (fail gracefully). The problem with that approach is that you now have to put error checking around every part of your call graph that could eventually result in a potentially failing allocation, and deal with the performance degradation that might follow (speculative execution on a PPC would make this effectively free, but on an x86 you would pay a fairly serious penalty).
There are three approaches commonly used to handle memory starvation:
(1) Block until the memory is available
(2) Fail the allocation request, and be prepared to deal with the failure, and then hope that by backing off, you don't lose unrecoverable state (e.g. if I read a hardware register, that might signal something to the hardware that would preclude me restarting the operation - for example, the AMD Lance Ethernet hardware), and that it's possible to redrive the operation
(3) Dijkstra's Banker's Algorithm: allocate all resources you might need up front, before attempting the operation (this is typically what's done with, e.g., the ring buffers associated with Ethernet devices, rather than allocating mbufs at interrupt level).
The instancing of classes on the stack falls under a variant of #3: because you already have the stack preallocated, you are guaranteed that, so long as you do not exceed your stack depth, instancing the objects you need to instance for the lifetime of the transaction you are about to perform will always be successful.
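Approach (3) is often realised as a fixed pool grabbed up front, so the "interrupt time" path never touches the allocator at all. A minimal sketch (all names invented for illustration):

```d
// A fixed-size pool of preallocated slots. acquire() never blocks
// and never allocates; exhaustion is an explicit, predictable failure.
struct Pool(T, size_t N)
{
    T[N] slots;          // storage preallocated with the pool itself
    bool[N] used;

    T* acquire()
    {
        foreach (i; 0 .. N)
            if (!used[i]) { used[i] = true; return &slots[i]; }
        return null;     // pool exhausted: caller must fail gracefully
    }

    void release(T* p)
    {
        used[p - slots.ptr] = false;
    }
}

void main()
{
    Pool!(int, 2) pool;
    auto a = pool.acquire();
    auto b = pool.acquire();
    assert(a !is null && b !is null);
    assert(pool.acquire() is null);   // bounded: fails instead of blocking
    pool.release(a);
    assert(pool.acquire() is a);      // slot is reusable after release
}
```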
I'm not sure if that's the argument the poster you were replying to was referring to, but I hope that clarifies things for you.
-- Terry
Re:D is surprisingly good. (Score:4, Informative)
Re:It's right there in your post (Score:2, Informative)
http://blogs.msdn.com/ricom/archive/2005/05/19/42
Re:I have a problem with GC in a systems language. (Score:2, Informative)
malloc (and friends) don't run in bounded time, either.
For a lot of uses, particularly in user space, this is not a problem, but if you were to kick off GC in an interrupt handler or trap handler, or in a number of other places, it would be impossible to implement code that is guaranteed to take at most a maximum number of CPU cycles.
You cannot use malloc or new in those circumstances either. The correct way to do it is to preallocate all data needed for the interrupt service routine or real time critical section.
The upshot of this is that so long as it's possible for someone to write a driver that ends up running in your kernel, and which depends on GC functionality to not leak memory, it will be impossible for an OS written in that language to support hard real time.
Hard real time programming uses preallocated or static allocated data, not malloc or new (or GC).
I have to say that GC is marginally useful for systems work only if you can run it on a system that doesn't need GC -- so that you can get a readout of where and how you are leaking memory, fix the problem, and then disable GC before you ship. In other words, it's a great diagnostic, but only if you can run both GC and non-GC at the same time, and only if you explicitly scope your allocations (i.e. act like you are not running in a GC'd language in the first place).
I used to think that, too, until I was forced into working with a GC. I've changed my mind.
In other words, the intent of GC is to make programmers not have to know where their scope boundaries are, and you _must_ know this for systems programming tasks. So it doesn't deliver on its promise in a systems context, though it could be a helpful diagnostic for developers.
All I can suggest is try using a GC for a project. My jawboning won't change your mind, but experience might.
Re:GC, No Vm or performance hit (Score:2, Informative)
While Hans Boehm has written an excellent GC, it has no relationship with D's GC. The complete source to D's GC (which is written 100% in D) comes with D, and you can check it out for yourself.
Re:This won't work... D Strings (Score:3, Informative)
- D arrays are bounds checked. No accidental buffer overflows here.
- D arrays are dynamic, you can resize them and concatenate them together.
- D strings are D arrays, so they get the above bonuses.
- D has distinct 'char', 'byte', and 'ubyte' types. char[] != ubyte[]. When you use foreach to iterate over a char[]/string, it will expand each code point to a dchar (a 32-bit character) for you. ubyte and byte are used for plain old data, instead of the unfortunate C char.
- Garbage collection frees you from worrying about where the strings go. No accidental memory leaks here.
There is also a nice alternative to the plain old strings called dstring, which gives you even more of the benefits of D's arrays, like indexing and slicing (you can safely leave foreach alone with it). http://www.dprogramming.com/dstring.php [dprogramming.com]
I've used both D strings and C strings, and D's strings just felt so much better.
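The points above fit in a few lines. A sketch in present-day D syntax, where string is an alias for immutable(char)[] (D 1.0 wrote char[]):

```d
void main()
{
    string s = "héllo";            // UTF-8: 6 code units, 5 code points
    string t = s ~ ", world";      // concatenation allocates a new array
    assert(t.length == s.length + ", world".length);
    assert(t[0 .. s.length] == s); // slicing, bounds checked at runtime

    int points;
    foreach (dchar c; s)           // decodes each code point to a 32-bit dchar
        points++;
    assert(points == 5 && s.length == 6);
}
```

Because strings are just arrays, everything the language knows about arrays (length, slicing, bounds checks, GC ownership) applies to them for free.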
Re:Because the ones we have suck? (Score:3, Informative)
Those techniques are definitely good if they work for what you are doing, and believe me, I have wanted them to work for me. But the reality is that D enables things those approaches don't, while letting you get on with the work and not worry about the language getting in your way.
Re:Erm how is this better.. (Score:2, Informative)
Apparently the move to ASP.NET went quite well with CPU usage dropping from 85% to 27% according to that post.
GDC source is not 64-bit clean (Score:3, Informative)
I've been messing around for a couple of hours now trying to compile gdc against gcc-4.0.3 on Gentoo amd64, and it's just not happening. I ran into an int/size_t mismatch and an undefined CPU symbols macro, and after hacking around these the build died, complaining that it thought I was cross-compiling gcc.
I've given up for now. Maybe if D hits the 1.0 magic number somebody will fix it for 64-bit systems and add it to portage. Oh well, I would have liked to start playing with D but I guess I'll just have to wait.
Stack allocation (Score:3, Informative)
Re:This won't work... (Score:3, Informative)
Objective-C [wikipedia.org] and the best programming language in existence, C-Intercal [wikipedia.org] (yeah, yeah - you whitespace [dur.ac.uk] lovers can bite me).
I can't believe you don't know this - it's common knowledge that the letter 'P' was skipped because back in the early 80s WordStar would use control-P to purge your document with no confirmation screen, as opposed to WordPerfect's print, so there was an extreme hatred for the letter among people who used WordStar at work or school and WordPerfect at home (practically everyone not using Cut-N-Paste on an Apple ][, which was pretty much everyone). It was such a powerful effect that it practically destroyed the Pascal programming language and its