
The D Programming Language, Version 1.0

penguinblotter writes in a journal article: "Soon, Walter Bright is scheduled to release version 1.0 of the D Programming Language. D is a systems programming language. Its focus is on combining the power and high performance of C and C++ with the programmer productivity of modern languages like Ruby and Python. Special attention is given to the needs of quality assurance, documentation, management, portability and reliability. D has appeared on Slashdot a few times before, and Walter has continued to add more and more features. Most Slashdot community comments in those articles were offered on feature X or spec Y without reading through the extensive D newsgroup archives. It is there, over the past seven years, that extremely gifted and experienced programmers have hashed out discussions and arrived at excellent implementations of all the ideas discussed." Read on for the rest of penguinblotter's writeup.

For those with a C/C++ background, D offers:
  • native code speed
  • extremely fast compilation times
  • garbage collection (although you can manage your own memory if you want)
  • OOP - by reference only, easy initialization, always virtual
  • cleaner template metaprogramming syntax and more powerful templates
  • built-in dynamic and associative arrays, array slicing
  • versioning (no preprocessor madness)
  • link-compatibility with C
  • nested functions
  • class delegates / function pointers
  • module system
For those with a C#/Java background (a shorter list, but one with big wins):
  • similar syntax
  • No virtual machine or interpreter
  • built-in unit testing and design-by-contract
These two comparison sheets go into more depth on how D stacks up against other languages; a sketch of several of the features above follows.
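
A minimal D sketch of several of these features (contracts, built-in unit tests, array slicing, associative arrays); this is an illustrative fragment only, not code from the D distribution:

    import std.stdio;

    // Design-by-contract: "in" runs before the body, "out" checks the result.
    int sumSlice(int[] data)
    in { assert(data.length > 0); }
    out (result) { assert(result >= data[0]); }  // illustrative postcondition
    body {
        int total = 0;
        foreach (x; data)          // built-in array iteration
            total += x;
        return total;
    }

    // Built-in unit testing: this block runs when compiled with -unittest.
    unittest {
        int[3] fixture = [1, 2, 3];
        assert(sumSlice(fixture) == 6);
    }

    void main() {
        int[5] a = [1, 2, 3, 4, 5];
        int[] middle = a[1 .. 4];      // array slicing: elements 2, 3, 4
        int[char[]] age;               // built-in associative array
        age["alice"] = 30;
        writefln("sum=%d age=%d", sumSlice(middle), age["alice"]);
    }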

From D's creator:
For me, it's hard to pinpoint any particular feature or two. It's the combination of features that makes the cake, not the sugar, flour or baking powder. So,
  1. My programs come together faster and have fewer bugs.
  2. Once written, the programs are easier to modify.
  3. I can do (1) and (2) without giving up performance.
Get your compilers and start hacking D!
  • DMD (Digital Mars reference compiler, Windows & Linux, x86)
  • GDC (GCC front-end)
  • by FishWithAHammer ( 957772 ) on Monday January 01, 2007 @05:06PM (#17424800)
    I'm looking at using it via GDC for my next project. For people who use C/C++ regularly, this is something you ought to look into.

    It's not a toy language. If you're a C++ programmer, you'll be almost immediately functional in the language. And you can call C and C++ libraries seamlessly. It's pretty sweet.
  • by Nutty_Irishman ( 729030 ) on Monday January 01, 2007 @05:24PM (#17424956)
    All other considerations aside, runtime speed should really be treated as a bonus over Java rather than the justification. Java isn't that much slower if you actually take the time to compile it to native code first. Using something like a JIT compiler http://en.wikipedia.org/wiki/Just-in-time_compilation [wikipedia.org] can greatly increase the speed of your code and put it close in line with C++. I would certainly consider D if both 1 and 2 were better than Java.
  • by matrixise ( 643691 ) on Monday January 01, 2007 @05:30PM (#17425016)
    from http://digitalmars.com/d/interfaceToC.html [digitalmars.com]

    D does not provide an interface to C++. Since D, however, interfaces directly to C, it can interface directly to C++ code if it is declared as having C linkage.

    D class objects are incompatible with C++ class objects.
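
    A sketch of what that C-linkage route looks like in practice (cpp_hypot is a hypothetical name used for illustration; linking requires the compiled C++ object file):

        // C++ side: wrap the C++ code in a function with C linkage.
        //   extern "C" double cpp_hypot(double x, double y) { return std::sqrt(x*x + y*y); }

        // D side: declare it with extern (C) and call it like any D function.
        extern (C) double cpp_hypot(double x, double y);

        extern (C) int printf(char* fmt, ...);  // C runtime functions work the same way

        void main() {
            printf("%g\n", cpp_hypot(3.0, 4.0));  // prints 5
        }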
  • Re:Weird writeup: (Score:3, Informative)

    by 91degrees ( 207121 ) on Monday January 01, 2007 @05:33PM (#17425034) Journal
    Couple of points:

    "native code speed"

    I think this is a response to criticisms from C programmers about most modern languages, rather than a benefit over C.

    "Not exactly a recommendation that the core language apparently is so weak that these can't be put into libraries."

    Some of this is useful enough to be built in. The STL and the like are pretty handy, but they sometimes feel like a bit of a kludge. Plus, building these in allows better optimisations for specific cases.

    "Obviously both C and C++ have function pointers."

    Yes, and the syntax is horrible. D makes this a lot nicer.
  • by Brandybuck ( 704397 ) on Monday January 01, 2007 @05:36PM (#17425072) Homepage Journal
    "And you can call C and C++ libraries seamlessly."

    Really? Not according to their FAQ. C yes. C++ no. Otherwise I would be in the process of switching over as we speak.
  • by TheGavster ( 774657 ) on Monday January 01, 2007 @05:44PM (#17425154) Homepage
    What, exactly, is the benefit of the .NET VM? There is only one full implementation of .NET (the MS one), and it runs on a single platform (Windows on x86). You might as well build native x86 code linked against Windows libraries for all the portability you have. And even if you're going to bother implementing the VM across a bunch of platforms, why not implement a standard library across a bunch of platforms and link native executables against that?
  • Currently learning D (Score:5, Informative)

    by kihjin ( 866070 ) on Monday January 01, 2007 @05:49PM (#17425188)
    Note: I've been programming in C/C++ for four years.

    I took it upon myself to learn D not more than a few weeks ago. A classmate introduced me to the language last spring.

    While I'm still learning D, it has some notable features:

    • auto keyword for inferred type declarations
    • lazy keyword for deferred evaluation
    • delegates are like function pointers, but cooler; literal statements can be passed as variables and aren't evaluated until the delegate is called
    • scope(exit|failure|success) to specify a block of cleanup code
    • in/out/inout parameter keywords offer readable code for documenting what a parameter is for
    • get/set methods automatically become a property (accessed like a public variable)
    • foreach and foreach_reverse for container iteration
    • with statement, like C++'s using at the object level

    Of course, one may argue that none of this is necessary and that it could all live outside the language itself. My belief is that that would increase the complexity of coding in D. A small sketch of a few of these features follows.
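
    An illustrative fragment exercising auto, scope(exit), delegates, lazy, and foreach_reverse (a sketch, not from any particular tutorial):

        import std.stdio;

        // "lazy" defers evaluation: msg isn't computed unless cond is true.
        void logIf(bool cond, lazy char[] msg) {
            if (cond) writefln(msg);
        }

        void main() {
            auto x = 3.14;                       // type inferred as double
            scope(exit) writefln("runs on any exit from main");

            // Delegate literal: the body runs only when the delegate is called.
            int delegate(int) twice = delegate int(int n) { return n * 2; };
            writefln("%d", twice(21));

            int[] values;
            values ~= 1; values ~= 2; values ~= 3;
            foreach_reverse (v; values)          // built-in reverse iteration
                writefln("%d", v);

            logIf(x > 3, "only built if needed");
        }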

    If you're interested in D you should visit http://www.dsource.org/ [dsource.org]. There are some interesting projects such as Derelict [dsource.org] (collection of C game bindings) and Bud [dsource.org] (make and SCons replacement).
  • by Decaff ( 42676 ) on Monday January 01, 2007 @05:54PM (#17425230)
    "Java isn't that much slower if you actually take the time to compile it to native code first. Using something like a JIT compiler http://en.wikipedia.org/wiki/Just-in-time_compilation [wikipedia.org] can greatly increase the speed of your code and put it close in line with C++."

    This is a bit of an old myth. Almost all Java is run as native code these days, even on VMs, and is mostly pretty close to C++ speed. Benchmarks that show Java as significantly slower than C++ usually result from not allowing the VM enough time to perform native code translation of time-critical code. Java has moved away from JIT compilation (as against the later optimisation of HotSpot) because it led to long start-up times - you had to wait for code to be compiled to native before it ran. Now Java usually starts up as interpreted, with the translation to native code happening later on, in the background.

    Where C, C++ and D win out over Java in terms of performance is when you need programs that have to start up fast, run fast, but only for short periods (a few seconds).
  • Lazy Questions (Score:3, Informative)

    by Quantam ( 870027 ) on Monday January 01, 2007 @06:15PM (#17425468) Homepage
    Looking at the comparison lists, D looks pretty nice. It has a lot of features that I'd consider switching languages for (from C++), but any such language would have to have a few particular properties (due to the kinds of things I program):
    1. Must be able to disable garbage collection and manage allocation explicitly
    2. Must be able to allocate classes on the stack
    3. Must minimize use of exceptions in the standard library (in other words, exceptions must only be used for exceptional cases)

    Java fails all of them, if I recall correctly (I don't know that much about Java, actually). C# fails 2 and 3. It looks like you can disable garbage collection in D, but in the comparison list I didn't see mention of 2 or 3. Does anybody know, off the top of their head?
  • by Nasarius ( 593729 ) on Monday January 01, 2007 @06:26PM (#17425564)
    "garbage collection ... No virtual machine ... How do they square that particular circle?"
    It's really not that difficult. Hans Boehm wrote a garbage collector [hp.com] for C/C++ years ago, which happens to be the same one that the Digital Mars implementation of D uses.
  • by Anonymous Brave Guy ( 457657 ) on Monday January 01, 2007 @06:30PM (#17425606)

    "garbage collection ... No virtual machine ... How do they square that particular circle?"

    The same way as countless other programming languages have in the past, I imagine. Why do you think garbage collection requires running your code under a VM?

    "Just In Time Compilation in C# or Java has 'Native code speed', in fact it goes one better - since the compilation happens at a later time, more processor or other specific optimisations can be made."

    Of course, you're overlooking all the overhead of monitoring the code long enough to determine which on-the-fly optimisations are worth performing, and of compiling the code itself, neither of which is trivial.

    "GC has a lot to do with the perceived slowness."

    True, though of course it's not without overheads. Almost all of the Big Claims(TM) made by GC advocates in these discussions come with a catch: state-of-the-art GC method number 17 has a lower amortised cost of memory recovery than explicitly freeing it C-style!*

    * But only if your system contains 10x as much memory as the program will ever need anyway.

    This is traditionally followed by a wisecrack about how memory is cheap, followed by three enlightened posters pointing out the stupidity of that argument for multiple reasons. :-)

    "Isn't it disingenuous to tout both 'native code speed' and 'garbage collection'?"

    That depends a lot on context. If you really have a system where the overheads of GC are trivial but all the advantages are present, it seems a fair claim. It's just not likely to be universally true, and representing it as such would indeed be disingenuous.

  • by scoonbutt ( 1022589 ) on Monday January 01, 2007 @06:37PM (#17425668)
    Garbage collection has no requirement for using a virtual machine. They usually show up together, but there's no technical requirement.
  • by Heembo ( 916647 ) on Monday January 01, 2007 @06:41PM (#17425716) Journal
    In .NET it's called the Common Language Runtime (CLR), running MSIL code. (That sentence is analogous to "the Java Virtual Machine runs bytecode.")

    The big win in .NET is that there is built-in, administrator-controllable security (even public/private key security) between the CLR (or VM, to you) and the internal .NET framework. In fact, there are several administrator-controllable hooks built into the .NET framework that we just do not see in Java, Ruby and the others. This is the feature that separates .NET from the rest, and the other frameworks are working to catch up. All modern languages are, or should be, moving in this direction. I predict that in 5 years the ONLY way you will be able to code to Windows is via the secure API that is .NET. (Assuming your programmers and admin teams understand .NET very well!)

    .NET is horrible at scaling (unless you have a big hardware budget), so I see .NET all over the DoD and internal sites, but not so much for full-on internet sites, where Java is winning in the top 10 (for example, MySpace is a Java app).
  • by TERdON ( 862570 ) on Monday January 01, 2007 @06:46PM (#17425762) Homepage
    "Now, after F do we get G or 10?"

    Either it'll be called 10, or H. G has already been taken, not just once [wikipedia.org] but twice [wikipedia.org].

    For your reference (kudos to Wikipedia [wikipedia.org]), the following single-letter names (sometimes with additional non-alphabetic characters) have also been implemented:

    A+ [wikipedia.org] A++ [wikipedia.org] B [wikipedia.org] C [wikipedia.org] C-- [wikipedia.org] C++ [wikipedia.org] C# [wikipedia.org] D [wikipedia.org] E [wikipedia.org] F [wikipedia.org] F# [wikipedia.org] G (now known as Deesel) [wikipedia.org] G [wikipedia.org] J [wikipedia.org] J# [wikipedia.org] J++ [wikipedia.org] K [wikipedia.org] L [wikipedia.org] M4 [wikipedia.org] Q [wikipedia.org] R [wikipedia.org] S [wikipedia.org] S2 [wikipedia.org] T [wikipedia.org] X10 [wikipedia.org]

    So - that only leaves the letters H, I, N, O, P (sic!), U, V, W, Y and Z if you don't want a name clash with another programming language. Technically, M and X are followed by numbers in the examples above, so you could argue for them as well, and even A (as it only appears with a plus behind the letter).

    I'm mostly surprised that no one has thought of a (P)rogramming language. :)
  • by Zerathdune ( 912589 ) on Monday January 01, 2007 @07:14PM (#17426050) Journal
    Actually, B was a modification of BCPL, not A (to my knowledge, there was never a language called A). BCPL was an enhancement of CPL.
  • by TheRaven64 ( 641858 ) on Monday January 01, 2007 @07:22PM (#17426144) Journal
    There is no such thing as an interpreted language. There are languages, and there are interpreters and compilers. Lisp, for example, can be both interpreted and compiled (Scheme is usually interpreted and Common Lisp is usually compiled, but there are exceptions). Tcc can both compile and interpret C code. Java can be compiled by something like gcj or interpreted by a JVM. If you compile Java you lose some of the features; the JVM bytecode format is designed so that it is easy for automated tools to reason about. At load-time, the JVM will parse the class files and check that they do not violate the Java security model; this is theoretically possible with compiled code, but much harder.

    The Squeak runtime for Smalltalk is written in Smalltalk. There is a smallish subset of Smalltalk used to write the basic functionality, which is compiled to native code. This then supports the whole language. The same model is, I believe, used for JNode, an operating system written in Java...

  • by LunarCrisis ( 966179 ) on Monday January 01, 2007 @07:41PM (#17426336)
    Though I agree with you for the most part (finding the nth character in a UTF-8 string takes unnecessarily long), this is wrong:

    "searching for a character just requires you to compare each 32-bit value to the target, without having to check it isn't a special character that is the first in an escape sequence"

    UTF-8 was designed so that no character's complete byte sequence occurs as a substring of any other's. This turns the search problem into a simple search for one string inside another. The searching routine doesn't even need to know whether it's a UTF-8 string at all, as long as it doesn't mangle the last bit.
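
    A short D sketch of that property in action, assuming D1's std.string.find (which searches byte-wise):

        import std.stdio;
        import std.string;

        void main() {
            char[] s = "déjà vu";   // UTF-8: 'é' and 'à' encode as two bytes each
            // Byte-wise search is safe: no character's encoding can appear
            // inside another character's encoding.
            int i = find(s, "à");
            writefln("byte offset: %d", i);   // an index into the byte array
        }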
  • by aegl ( 1041528 ) on Monday January 01, 2007 @07:54PM (#17426486)
    BCPL was one of the sources of inspiration for the programming language 'B', and its successor 'C'. Next in the series ought to be 'P'.
  • by Mr.Radar ( 764753 ) on Monday January 01, 2007 @08:02PM (#17426572)
    The compiler generates header files from source files with the -H option. It can decide what code is necessary and what code isn't.

    As for mixins, you can get the full scoop and some simple examples in the language spec, specifically the portion on Mixins [digitalmars.com].
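
    For the curious, a minimal template-mixin sketch (illustrative only):

        // The mixin's declarations are pasted into the scope that mixes it in.
        template Counter() {
            int count;
            void increment() { count++; }
        }

        class Widget {
            mixin Counter!();   // Widget gains its own count and increment()
        }

        void main() {
            auto w = new Widget;
            w.increment();
            assert(w.count == 1);
        }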
  • by vyvepe ( 809573 ) on Monday January 01, 2007 @08:16PM (#17426696)
    How many OSes are written in Python? Neither Java, Ruby, Perl nor Python attempts to be an appropriate language for writing OSes. This doesn't make them good or bad. Other factors might.

    It might be possible to write an OS in those languages. MS is trying to do it in C#. The project name is Singularity [microsoft.com]. But I agree that OSes are not the target domain :)

  • by tlambert ( 566799 ) on Monday January 01, 2007 @09:09PM (#17427274)
    The primary reason for stack class allocation is that you may need to do your instancing at interrupt time or in a trap handler.

    Consider the case where you have a memory shortage (but interrupt stacks are preallocated, per interrupt), and therefore you cannot do an allocation, but want to run the current operation to completion.

    Consider also the case where you might be using a zone allocator, and you cannot expand the zone of zones, because in order to do so you'd need to handle a trap at ring 0 in a trap handler (i.e. a user page fault followed by a kernel page fault).

    The reason these things would not be allocable at interrupt/trap time is that the allocations may block, and, if they do so, you could effectively end up blocked with interrupts disabled and no way to get back from it.

    The alternative is to fail the request and back out your state all the way (fail out gracefully). The problem with that approach is that you now have to put error checking around every part of your function call graph that could eventually result in a potentially failing allocation, and deal with the performance degradation that might result (speculative execution on a PPC would make this effectively free, but on an x86 you would pay a fairly serious penalty).

    There are three approaches commonly used in handling memory starvation situations:

    (1) Block until the memory is available

    (2) Fail the allocation request, and be prepared to deal with the failure, and then hope that by backing off, you don't lose unrecoverable state (e.g. if I read a hardware register, that might signal something to the hardware that would preclude me restarting the operation - for example, the AMD Lance Ethernet hardware), and that it's possible to redrive the operation

    (3) Dijkstra's Banker's Algorithm: allocate all resources that you might need up front, before attempting the operation (this is typically what's done with, e.g., the ring buffers associated with Ethernet devices, rather than allocating mbufs at interrupt level).

    The instancing of classes on the stack falls under a variant of #3: because you already have the stack preallocated, you are guaranteed that, so long as you do not exceed your stack depth, instancing the objects you need to instance for the lifetime of the transaction you are about to perform will always be successful.

    I'm not sure if that's the argument the poster to whom you were replying had in mind, but I hope that clarifies things for you.

    -- Terry
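
    In D terms, approach (3) lines up with scope classes, which are destroyed deterministically when their scope ends and which the compiler can allocate on the stack; a minimal sketch, assuming D1's scope storage class:

        import std.stdio;

        class Buffer {
            ubyte[256] data;
            ~this() { writefln("destroyed at scope exit"); }
        }

        void handler() {
            // A scope class is destroyed when the scope ends; the reference
            // must not escape, which lets the compiler put it on the stack.
            scope Buffer b = new Buffer;
            b.data[0] = 42;
        }   // b's destructor runs here, deterministically

        void main() { handler(); }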
  • by eluusive ( 642298 ) on Monday January 01, 2007 @09:33PM (#17427510)
    Please see: http://dsource.org/projects/bcd [dsource.org]
  • by hao2lian ( 726435 ) on Monday January 01, 2007 @09:37PM (#17427532) Homepage
    Shoehorning his adjectives doesn't change the facts: .NET is damn fast. Perhaps not "I need to raytrace downtown Manhattan" fast, but certainly fast for web services, desktop applications, mobile apps, and Windows PowerShell. Heck, it even beat out a C++ app where low-level usually succeeds--lifting big data structures--until Raymond Chen wrote his own allocator.

    http://blogs.msdn.com/ricom/archive/2005/05/19/420158.aspx [msdn.com]
  • by WalterBright ( 772667 ) on Monday January 01, 2007 @10:03PM (#17427750) Homepage
    "I have a problem with GC in a systems language... specifically, using GC means that your functions will not necessarily run in bounded time."

    malloc (and friends) don't run in bounded time, either.

    "For a lot of uses, particularly in user space, this is not a problem, but if you were to kick off GC in an interrupt handler or trap handler, or a number of other places, this would make it impossible for you to implement code that was guaranteed to take at most a maximum number of CPU cycles."

    You cannot use malloc or new in those circumstances either. The correct way to do it is to preallocate all data needed for the interrupt service routine or real-time critical section.

    "The upshot of this is that so long as it's possible for someone to write a driver that ends up running in your kernel, and which depends on GC functionality to not leak memory, it will be impossible for an OS written in that language to support hard real time."

    Hard real-time programming uses preallocated or statically allocated data, not malloc or new (or GC).

    "I have to say that GC is marginally useful for systems work only if you can run it on a system that doesn't need GC -- so that you can get a read-out of where and how you are leaking memory, fix the problem, and then disable GC before you ship. In other words, it's a great diagnostic, but only if you can run both GC and non-GC at the same time, and only if you explicitly scope your allocations (i.e. act like you are not running in a GC'ed language in the first place)."

    I used to think that, too, until I was forced into working with a GC. I've changed my mind.

    "In other words, the intent of GC is to make programmers not have to know where their scope boundaries are, and you _must_ know this for systems programming tasks. So it doesn't deliver on its promise in a systems context, though it could be a helpful diagnostic for developers."

    All I can suggest is try using a GC for a project. My jawboning won't change your mind, but experience might.
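
    A sketch of the preallocate-then-disable pattern, assuming D1's std.gc module (which exposes disable, enable, and fullCollect):

        import std.gc;

        void main() {
            // Preallocate everything the critical section will touch.
            ubyte[] pool = new ubyte[64 * 1024];

            std.gc.disable();       // no collection can pause us in here
            // ... time-critical work using only preallocated memory ...
            pool[0] = 1;
            std.gc.enable();

            std.gc.fullCollect();   // optionally collect once we're out
        }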

  • by WalterBright ( 772667 ) on Monday January 01, 2007 @11:27PM (#17428354) Homepage
    "Hans Boehm wrote a garbage collector for C/C++ years ago, which happens to be the same one that the Digital Mars implementation of D uses."

    While Hans Boehm has written an excellent GC, it has no relationship with D's GC. The complete source to D's GC (which is written 100% in D) comes with D, and you can check it out for yourself.

  • by Anonymous Coward on Monday January 01, 2007 @11:52PM (#17428534)
    While D strings are mostly implemented as character arrays, they work quite differently from C strings. Here are some notable differences:

    - D arrays are bounds checked. No accidental buffer overflows here.
    - D arrays are dynamic: you can resize them and concatenate them together.
    - D strings are D arrays, so they get the above bonuses.
    - D has distinct 'char', 'byte', and 'ubyte' types: char[] != ubyte[]. When you use foreach to iterate over a char[] string, it will expand each code point (or whatever they're called) to a dchar (a 32-bit character) for you. ubyte and byte are used for plain old data, instead of the unfortunate C char.
    - Garbage collection frees you from worrying about where the strings go. No accidental memory leaks here.

    There is also a nice alternative to plain old strings called dstring, which gives you even more of the benefits of D's arrays, like indexing and slicing (you can safely leave foreach alone with it). http://www.dprogramming.com/dstring.php [dprogramming.com]

    I've used both D strings and C strings, and D's strings just felt so much better.
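
    A few of those differences in a short illustrative sketch (D1-style char[] strings):

        import std.stdio;

        void main() {
            char[] a = "hello".dup;        // mutable copy of the literal
            char[] b = a ~ ", world";      // concatenation allocates a new array
            a.length = 3;                  // dynamic resize: a is now "hel"

            // foreach can decode UTF-8 on the fly: type the element as dchar.
            foreach (dchar c; b)
                writef("%s", c);
            writefln("");

            // Out-of-range indexing throws ArrayBoundsError in debug builds:
            // auto oops = b[999];
        }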
  • by donglekey ( 124433 ) on Tuesday January 02, 2007 @01:16AM (#17429028) Homepage
    D is garbage collected, has no VM, allows inline assembly, allows direct memory management, and has trivial access to C libraries.

    Those techniques are definitely good if they work for what you are doing, and believe me, I have wanted them to work for me, but the reality is that D enables things those approaches don't, while still letting you work without worrying about the language getting in your way.
  • by JamesNK ( 967097 ) on Tuesday January 02, 2007 @03:42AM (#17429644) Homepage
    Actually MySpace runs on .NET - Handling 1.5 Billion Page Views Per Day Using ASP.NET 2.0 [asp.net]

    Apparently the move to ASP.NET went quite well with CPU usage dropping from 85% to 27% according to that post.
  • by The_Dougster ( 308194 ) on Tuesday January 02, 2007 @04:44AM (#17429854) Homepage

    I've been messing around for a couple of hours now trying to compile gdc against gcc-4.0.3 on Gentoo amd64, and it's just not happening. I ran into an issue where it had an int and size_t mismatch and an undefined CPU symbols macro, and after hacking around these the build died, complaining it thought I was cross-compiling gcc.

    I've given up for now. Maybe if D hits the 1.0 magic number somebody will fix it for 64-bit systems and add it to portage. Oh well, I would have liked to start playing with D but I guess I'll just have to wait.

  • Stack allocation (Score:3, Informative)

    by igomaniac ( 409731 ) on Tuesday January 02, 2007 @05:44AM (#17430032)
    Deciding how to allocate an object you create is better left to the compiler. It's not a hard analysis to make sure a reference to an object never escapes a particular (possibly recursive) function call, and in that case the object can be allocated on the stack. Leaving this decision to the compiler means that if someone changes the code you're calling so it suddenly starts to keep references to the objects you've allocated on the stack and are passing in, nothing breaks. In big C/C++ programs this kind of error is quite common, since it takes a lot of time to track down all the callers of a function you're modifying and understand their allocation patterns. Tracking down errors like this can be extremely time-consuming, and programming languages that allow them are, in my opinion, wasting valuable programmer time that could be spent optimising code instead.
  • by Creepy ( 93888 ) on Tuesday January 02, 2007 @11:48AM (#17431888) Journal
    Wow - you missed a couple of C dialects, most certainly because they extend the letter name with a word:

    Objective-C [wikipedia.org] and the best programming language in existence, C-Intercal [wikipedia.org] (yeah, yeah - you whitespace [dur.ac.uk] lovers can bite me).

    I can't believe you don't know this - it's common knowledge that the letter 'P' was skipped because back in the early 80s WordStar used Control-P to purge your document with no confirmation screen, as opposed to WordPerfect's print, so there was an extreme hatred for the letter among people who used WordStar at work or school and WordPerfect at home (practically everyone not using Cut-N-Paste on an Apple ][, which was pretty much everyone). It was such a powerful effect that it practically destroyed the Pascal programming language and its .p extensions, and nearly killed the Macintosh, which had standardized on Pascal for its operating system. The stigma of the letter has faded, but many old-timers would never use a programming language called P.

"And remember: Evil will always prevail, because Good is dumb." -- Spaceballs

Working...