The D Programming Language, Version 1.0 570

penguinblotter writes in a journal article: "Soon, Walter Bright is scheduled to release version 1.0 of the D Programming Language. D is a systems programming language. Its focus is on combining the power and high performance of C and C++ with the programmer productivity of modern languages like Ruby and Python. Special attention is given to the needs of quality assurance, documentation, management, portability and reliability. D has appeared on Slashdot a few times before, and Walter has continued to add more and more features. Most Slashdot community comments in these articles have been offered on feature X or spec Y without reading through the extensive D newsgroup archives. It is there, over the past seven years, that extremely gifted and experienced programmers have hashed out discussions and arrived at excellent implementations of all the ideas discussed." Read on for the rest of penguinblotter's writeup.

For those with a C/C++ background, D offers:
  • native code speed
  • extremely fast compilation times
  • garbage collection (although you can manage your own memory if you want)
  • OOP - by reference only, easy initialization, always virtual
  • cleaner template metaprogramming syntax and more powerful templates
  • built-in dynamic and associative arrays, array slicing
  • versioning (no preprocessor madness)
  • link-compatibility with C
  • nested functions
  • class delegates / function pointers
  • module system
For those with a C#/Java background (a shorter list, but one with big wins):
  • similar syntax
  • no virtual machine or interpreter
  • built-in unit testing and design-by-contract
These two comparison sheets can go into more depth on how D stacks up against other languages.

From D's creator:
For me, it's hard to pinpoint any particular feature or two. It's the combination of features that makes the cake, not the sugar, flour or baking powder. So,
  1. My programs come together faster and have fewer bugs.
  2. Once written, the programs are easier to modify.
  3. I can do (1) and (2) without giving up performance.
Get your compilers and start hacking D!
  • DMD (Digital Mars reference compiler, Windows & Linux, x86)
  • GDC (GCC front-end)
Comments Filter:
  • Re:Weird writeup: (Score:1, Interesting)

    by Anonymous Coward on Monday January 01, 2007 @06:11PM (#17425420)

    * nested functions

    Point.
    Why do so many people care about nested functions [wikipedia.org]? I really don't see the use except for people who want to write Pascal-like girly-man code. It isn't as if your stack doesn't treat non-nested functions as nested functions in some implementations (as in you'll return to the instruction one after where you started). Is this about people who want to share data between functions but are afraid to use classes, references, or pointers; people who don't want to share information between parts of a function; or people who are afraid of the normal forms of polymorphism (all options depending upon the implementation, of course)? The answer: use references, write your functions shorter, overload your functions, and stop crying like a baby.
  • Python and D (Score:4, Interesting)

    by MightyMooquack ( 241719 ) <kirklin.mcdonald ... Pcom minus berry> on Monday January 01, 2007 @06:40PM (#17425706)

    One area I see D being useful in is integration with Python. Writing to the raw Python/C API is cumbersome. (Managing reference counts is tedious.) Boost.Python is difficult to build and slow to compile. I've written a library for D called Pyd [dsource.org], whose purpose is not entirely unlike Boost.Python's.

    Pyd is easy to use. It provides its own extension to Python's distutils. Usually, you just need to make sure the D compiler is on your PATH, write a setup.py file, and run python setup.py build.

    "Hello world" in Pyd looks something like this:

    import pyd.pyd;
    import std.stdio;

    void hello_func() {
        writefln("Hello, world!");
    }

    extern (C) void PydMain() {
        def!(hello_func);
        module_init();
    }
  • by StrawberryFrog ( 67065 ) on Monday January 01, 2007 @07:01PM (#17425924) Homepage Journal
    If I read that FAQ right, it is possible that "integer or other random data be misinterpreted as a pointer by the collector," since, given the nature of C (no VM, and the difference between a pointer and an int is at best a gentleman's agreement), anything in memory *could* be a pointer. Well, I suppose it works if he says so. But it certainly isn't pretty.
  • by Anonymous Brave Guy ( 457657 ) on Monday January 01, 2007 @07:06PM (#17425988)

    Ironically, I consider most of the C++ to D examples there to be flaws in D.

    Supplying predefined comparison operators is all very well, but what if a class doesn't support the concept of equality? Alternatively, suppose it supports only equality and not ordering, or vice versa? How do I do that in a natural way, with a single comparison function to define?

    The whole concept of relying on scoped variables completely misses a major advantage of RAII, which is that in the common usage, you can't forget anything (a delete, finally or in this case scope) and inadvertently skip the destructor. Requiring some special keywords to get this behaviour is just horrible.

    The construction/initialisation semantics just seem a mess. You've either introduced some hideous inefficiency and semantic problems (everything is default-initialised and then reassigned afterwards in the constructor if necessary) or you've introduced a horrible loophole (constructors can start messing around with uninitialised data, for example by calling another member function, before the class invariants are properly set up). The latter is even worse than the analogous loophole in C++.

    This sort of thing is exactly my big beef with D, and the reason I doubt I will ever seriously consider it for a non-trivial project in its current form. It's going for style over substance, PR hype over effectiveness. It does away with a few controversial things in C++, but some of its underlying models are simply broken, and as I illustrated elsewhere, its improvements in other areas are far from state-of-the-art.

  • by TheRaven64 ( 641858 ) on Monday January 01, 2007 @07:12PM (#17426034) Journal
    UTF-32 does, indeed, do that. It is quite a good way of working internally. You can do things with UTF-32 much more efficiently than with UTF-16 or UTF-8 (e.g. searching for a character just requires you to compare each 32-bit value to the target, without having to check it isn't a special character that is the first in an escape sequence). Most modern processors come with a vector unit that can handle vectors of 32-bit integers, so if you have to handle large quantities of text you can speed certain things up even more by running streaming calculation on the vector unit.
  • by Anonymous Brave Guy ( 457657 ) on Monday January 01, 2007 @07:22PM (#17426146)

    Um, considering that the vast majority of the world's floating-point hardware is x86 and supports extended precision, saying that it lacks "true support from almost every mainstream architecture" is comical.

    Not really. Take a look at the performance, addressing modes, and so on. The support for 80-bit on Intel boxes is not the same as the support for 64-bit, at least in practical terms.

    Also, while the vast majority of the world's FP hardware may indeed run on x86 (I don't know), I would suggest that at least a very significant minority of the software that actually requires high-precision floating point work still runs on workstations which may well be based on other architectures.

    No-one wishes this were not so more than me, I promise you: I write high-performance, high-precision mathematical libraries for a living, and minor differences in behaviour across platforms or where precision has been lost are the bane of my working life.

  • by jyoull ( 512280 ) <jim@@@media...mit...edu> on Monday January 01, 2007 @07:33PM (#17426246)
    There is no such thing as an interpreted language. There are languages, and there are interpreters and compilers.


    Are you really well acquainted with gcj? I'm sorry, but I don't get how the end result or even the stuff going into it (and the required inputs, like making some explicit calls that would never be required in Java) can be called Java anymore.

    the point I didn't make well was that when a language has been designed to execute inside a containing environment (the JRE or whatever facsimile thereof) you can't just up and erase that... without emulating all the stuff that was supposed to be alive in that environment. Taking a look at the gcj's to-do list and all the stuff that isn't yet supported should be enough to show you not only that this is not a trivial task, but to suggest that perhaps it's not a useful task, and I say that knowing that people have worked very hard on the project.

    Maybe it makes sense for some resource-constrained settings like embedded systems, but there I've used Java straight up, satisfactorily. Granted, these are not life-critical systems I've built, but rather than compiling Java - or trying to - the better answer is to use a more appropriate language in those circumstances.
  • Re:Weird writeup: (Score:3, Interesting)

    by donglekey ( 124433 ) on Monday January 01, 2007 @09:15PM (#17427332) Homepage
    This strikes me as the thoughts of someone who hasn't given D a good look. I have been using it recently and it is phenomenal. It is a breath of fresh air. It is not that it has many of these features over C/C++, it is that it cleans out the enormous amounts of headache-inducing things about these languages while retaining what Java/C# lose. Native code speeds. No VM. Trivial integration with C.
  • by WalterBright ( 772667 ) on Monday January 01, 2007 @09:23PM (#17427404) Homepage
    In any case, D's claim to this feature is a bit odd, since every x86 C/C++ compiler worth its salt already compiles long double to extended precision.

    VC++ doesn't. Java doesn't. C# doesn't. Python doesn't. Ruby doesn't. 80 bit floating point is highly useful, and it's about time it was required for languages on FPUs that support it.

  • by Anonymous Coward on Monday January 01, 2007 @09:39PM (#17427554)
    IBM did, in a sense. "Programming Language One" was to be the end-all of the languages of the day (50-60s era). It's called PL/I; Multics was written in it (which inspired Unix, which inspired Minix, which inspired Linux, et al).
  • by rbarreira ( 836272 ) on Monday January 01, 2007 @09:51PM (#17427646) Homepage
    Here is something [hp.com] that I found about this. Not very good news for conservative garbage collection, I say...
  • by Jimithing DMB ( 29796 ) <dfe@tgwb[ ]rg ['d.o' in gap]> on Monday January 01, 2007 @10:00PM (#17427738) Homepage

    I noticed that a comparison to Objective-C is quite conspicuously absent from the list of languages compared to D. Why is it missing? Granted D seems to be a much greater change to C than Objective-C is but I can't help but thinking that one of the main attractions to D seems to be its heap-based garbage-collected object system. You can already get the object runtime with Objective-C. If you use GNU you can even have Boehm GC (which is apparently the GC that D uses). If you use Apple you will have to wait for Leopard to get GC. Another new Objective-C feature is the ability to use full C++ objects as instance variables in your Objective-C classes and do the right thing with initializing (calling the default no-argument constructor upon alloc).

    On top of that, Objective-C actually includes tons of reflection information. Although Objective-C has protocols which are roughly equivalent to Java/C# interfaces they are almost completely unnecessary. In Objective-C one can query at runtime whether a method is implemented or not and if so call it. So whereas in Java you'd do this:

    if (anObject instanceof MyInterface) ((MyInterface) anObject).doSomething();

    in Objective-C you can do this:

    if([anObject respondsToSelector:@selector(doSomething)]) [anObject doSomething];

    The difference being that in the Java case you have to declare MyInterface as containing the one doSomething() method and inform java that your object extends MyInterface whereas in Objective-C you merely need to provide a doSomething method on your object.

    Basically that means that in Objective-C every single method effectively becomes an interface. You would not believe how useful this is once you realize it. Note that at runtime there is ZERO difference. In both the Java and Objective-C cases the object is being checked to see if it implements something. Same with C++ if you use dynamic_cast<>().

    Granted every language has its niche and I'm sure D will find its. Objective-C's niche is definitely GUI programming. The ample reflection information allows for easy implementations of archiving (serialization) and most importantly key-value coding and the related action methods pattern. It's a pretty damn cool thing when your RAD tool simply outputs archived objects that refer to methods to be called upon certain actions simply by name.

  • by bluefoxlucid ( 723572 ) on Monday January 01, 2007 @10:21PM (#17427910) Homepage Journal

    CPUs put a lot of stock in branch prediction; due to the nature of OOP languages like C++, Objective-C (I like this one), and D, this doesn't work. The way virtuals and class inheritance work, functions are necessarily dealt with as pointers; the function is pointed to by pointing to a master class object, basically. Here's a C reconstruction:

    struct myClass;                          // forward declaration

    struct myClass_members {
        struct myClass *(*alloc)();          // constructor
        void (*destroy)(struct myClass *);   // destructor
        int (*my_member)();                  // member function
    };

    struct myClass {                         // Data
        struct myClass_members *call;        // Pointer to a list of members as above
        int my_value;                        // an integer value
    };

    What you do is initialize a constant myClass_members (called myClass_Object here) with a bunch of pointers to static functions in one source file; then call myClass_Object->alloc to create a new one (we'll call it my_inst). Then do my_inst->call->my_member() to call the member, and similarly my_inst->call->destroy(my_inst) to deallocate the class.

    Basically, OOP languages like C++ and Java use this methodology, but it's obscured through friendly syntax. What we can expose from the above is:

    1. When class members change, structs change. This causes binary incompatibility between different versions of libraries. The exception is Objective-C, which looks up members based on their name in a hash table generated at run-time.
    2. The addresses of branches (specifically CALL to call a function) are indirect (yes, in Obj-C too); this means that you can replace the class with another class that has the same structure but different functions being pointed at. It also means that the CPU can't do branch prediction, which hurts pipelining and intelligent CPU caching, causing pretty big slow-downs.

    The whole "native execution speed" thing is bunk. Scripting languages are executed by a native bytecode interpreter, or JIT'd to native code. The amount of work that goes into the execution is what you care about, as well as the utilization of the CPU's most powerful facilities. You can only justify OOP languages by saying that either A) the majority of the work doesn't involve making calls to other class members, and thus won't be hurt by this; or B) CPU speed doesn't matter. I hate argument (B); (A) I can accept, barely enough to tip my hat to you for having good software engineering sense.

  • by Animats ( 122034 ) on Monday January 01, 2007 @10:34PM (#17428000) Homepage

    I wasn't happy about that either. Garbage collection in a language with destructors leads to weird semantics, which is why Microsoft's "Managed C++" is a nightmare. I corresponded a bit with Walter Bright in the early days of D, but didn't press the issue.

    What seems to work in practice is reference counting. GC gets most of the academic attention, but Perl and Python are both basically reference counted, and the result seems to be that programmers in those languages can ignore memory allocation. Java programmers have to pay a bit more attention, worrying about when GC will run and when finalizers will be called. Reference counting is deterministic; the same thing will happen every time, so timing is repeatable. That's not true of GC.

    There are two basic problems with reference counts - overhead and cycles. Overhead can be dealt with by hoisting reference count updates out of loops at compile time, so that you're not frantically updating reference counts within an inner loop. Hoisting (along with common subexpression elimination), by the way, is also the answer to subscript checking overhead.

    Cycles are a more serious problem. Conceptually, the answer is strong and weak pointers (in the Perl sense, not the Java sense), which allows the programmer to express things like trees. (Links towards the leaves should be strong pointers; back pointers towards the head should be weak pointers.)

    In practice, cycles aren't a serious problem, because they're generated by design errors and tend to happen in normal program operation, so they show up early in testing as memory leaks. Dangling pointers, on the other hand, tend to show up in error cases, which is why they survive testing to become delivered bugs.

    Ideally, you'd like to detect cycles at the moment they're created, at least for debug purposes. This is quite possible, although there's substantial overhead.

    Attempts to retrofit reference counting to C++ via templates have been made, but they are never airtight. To get anything done, raw pointers have to leak out, which makes the reference counting scheme very brittle.

  • by Anonymous Brave Guy ( 457657 ) on Monday January 01, 2007 @10:47PM (#17428106)

    Perhaps accidentally, you've just hit on one feature of programming language designs that I think does justify a new compiler front-end: ease of parsing for use with tools. Parsing the current monsters like C++ and Perl is so awkward and error-prone that few tools even get simple things like syntax highlighting 100% right (and the performance of those that do is... less than stellar). I imagine most of us are more interested in the underlying semantics of programming languages than in the specific syntax anyway, so can't we use a grammar that is easy to parse effectively, and then have tools from syntax highlighters to source code navigation to refactoring working quickly and reliably for a change?

  • by fish waffle ( 179067 ) on Monday January 01, 2007 @10:53PM (#17428138)
    The real problem might be false negatives: memory containing garbage not getting freed due to something appearing to point at it, without actually being a pointer?

    In theory yes, but in practice not very much; conservative gc works very well in most cases. The main drawbacks are:
    • No guarantee that any given bit of memory is collected. As mentioned, in practice it's not typically much of an issue, but if you're developing a critical application riding close to the memory limit, this may be a concern.
    • Memory fragmentation can be a problem for some long running programs.

  • by metamatic ( 202216 ) on Tuesday January 02, 2007 @12:54AM (#17428902) Homepage Journal
    So the only way to represent the concept of a class with an ordering but no equality is to have a run-time failure every time someone tries to compare them for equality?

    What do you think a programming language should do when I try to compare two things that can't be compared for equality?

  • Re:Keeping It Simple (Score:2, Interesting)

    by DropArk ( 957471 ) on Tuesday January 02, 2007 @02:23AM (#17429326)

    So where is the word simplicity in all of this? Anybody that has learned to use C++ really well has, to my mind, earned the equivalent of a Master's degree. This makes C++ a brilliant failure. So to learn D will we require the equivalent of a PhD?
    No. Definitely not. Compared to C++, Java, and C#, the D programming language syntax is clean and simple. Sure there are exceptions (the 'static' keyword is way too overloaded) but there aren't many of them. I'm guessing you haven't actually tried to use D on any non-simple problem yet. But if you have and still claim it lacks simplicity, I'd be very interested in knowing your exact and specific findings.

    Good designers make things both simpler and more powerful. They improve the product as much through subtraction as addition. Instead we get this...

    and Walter has continued to add more and more features.
    Bloody hell.
    I'm sure you know that many things that are simple to use are also complex under-the-hood. In order to make D simple to use, some very complex concepts have been implemented. Also, reducing syntax is not the same as increasing the simple-to-use factor. If you take a look at the Forth programming language, its syntax and keyword usage is extremely simple. And yet it is often called a write-only language.
  • by Randolpho ( 628485 ) on Tuesday January 02, 2007 @11:08AM (#17431584) Homepage Journal
    Fortran does actually have some very useful features not found in C or most other languages, especially when doing vector processing; it would be in your best interest to (*gasp*) learn the language rather than run f2c. Also, I would remind you that compiled C and Fortran are link-compatible, so you could create a function-interface specification that would allow you and your curmudgeon to work together rather than at cross-purposes.
  • by The_Dougster ( 308194 ) on Tuesday January 02, 2007 @11:24AM (#17431700) Homepage

    See http://dsource.org/projects/gentoo/wiki/LaymanSetup [dsource.org] for a portage overlay that includes DMD-bin. You have to edit the layman configuration and disable warnings about missing fields, but it works fine after that.

    I tried it, and it added the package, but it is masked by missing keyword. Besides, I don't want to install dmd, which is the 32bit compiler from digital mars, I'd rather have the gcc addon gdc built into my existing 64bit compiler system. I currently don't have the 32bit emulation libs installed, and I don't plan on installing them either. I have a pretty cool system going that is 64bit clean and runs like my own personal supercomputer. I'm into engineering, science, and mathematics programming, so I really like the free extra precision with 64bit and sse3 not to mention the extra cpu registers.

    I might try and compile D (gdc) into /usr/local/gcc-4.0 again just to see if I can get it to work, but it is obvious that some patches need to get sent upstream for the amd64 platform. I'm not sure how best to implement the patches, but I would think that it should be done using autotools, and that config.h mechanism.
