Haskell 2010 Announced

paltemalte writes "Simon Marlow has posted an announcement of Haskell 2010, a new revision of the Haskell purely functional programming language. Good news for everyone interested in SMP and concurrency programming."
  • Re:Is it just me ? (Score:5, Interesting)

    by DragonWriter ( 970822 ) on Tuesday November 24, 2009 @07:16PM (#30220390)

    Well, all functional programming languages use recursion

    Most procedural programming languages use (or at least support) recursion, too. The difference is that (1) pure functional languages cannot express certain algorithms with iteration instead of recursion, because of the lack of mutable state, and (2) languages that don't support tail call optimization (at least in the trivial case of self-recursive tail calls) generally also don't support recursion efficiently, since each recursive call accumulates a stack frame. A consequence of #1 and #2 is that pure functional languages almost without exception, as well as many less purely functional languages designed to support programming in a functional style (e.g., Scheme), feature tail call optimization and make tail-recursive constructs the most idiomatic way of expressing certain algorithms, whereas languages that aren't functional or designed with functional-style programming in mind often have an iterative approach as the most idiomatic -- and most efficient -- expression of the same algorithms.
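
    For instance, here is a minimal sketch (my own illustrative code) of the two styles in Haskell. The naive version piles up stack frames; the accumulator version makes the recursive call a tail call, which a tail-call-optimizing compiler turns into a loop:

    -- Naive recursion: the addition happens after the recursive call
    -- returns, so stack frames accumulate on large inputs.
    sumTo :: Integer -> Integer
    sumTo 0 = 0
    sumTo n = n + sumTo (n - 1)

    -- Tail-recursive version: the recursive call is the entire result,
    -- so it can run in constant stack space. (The `seq` forces the
    -- accumulator, so lazy thunks don't pile up in its place.)
    sumTo' :: Integer -> Integer
    sumTo' = go 0
      where
        go acc 0 = acc
        go acc n = let acc' = acc + n
                   in acc' `seq` go acc' (n - 1)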

  • Re:Is it just me ? (Score:4, Interesting)

    by Hurricane78 ( 562437 ) <deleted @ s l a s h dot.org> on Tuesday November 24, 2009 @07:40PM (#30220672)

    The idea is that you can split the program into parallel tasks in a fully automated way. If you as a programmer even have to think about parallelizing, I’m sorry, but then your compiler is “doin’ it wrong” and your language is from the stone age. ^^
    As a bonus, when you can completely rely on a function producing the same output for the same input, you can also throw caching in there automatically (where required, on demand, whatever you wish).
    But blah... that is all just the stuff on the surface. Like explaining how “metaprogramming” becomes pointless once there stops being a difference between data and code.

    I find the most amazing thing about Haskell as it is today is that features which, in other languages, would need new constructs and a big change in the compiler are just an ordinary library in Haskell. It can look like a whole new language. But it’s just a library. And that is amazing!
    Then, when you get to the GHC extensions, you find things that are as much at the forefront of computer science as the LHC is at the forefront of physics, with everybody else copying them years later... Sorry, but if you like elegance in programming, ...I just have no words for it...
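
    A small illustration of the “just a library” point (my example): do-notation works for any Monad instance, so the standard list instance alone makes Haskell read like a language with built-in nondeterminism:

    -- Looks like a dedicated nondeterminism construct, but it is just
    -- do-notation desugared to (>>=) on the standard list Monad instance.
    pairs :: [(Int, Int)]
    pairs = do
      x <- [1 .. 3]
      y <- [x .. 3]
      return (x, y)
    -- pairs == [(1,1),(1,2),(1,3),(2,2),(2,3),(3,3)]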

    The thing is that it’s crazy hard to learn. Which is not a fault in the language design, because it’s very elegant. It’s simply the fact that it is so far ahead of anything in your everyday language. You wouldn’t expect to sit in a spaceship and fly it like your car, would you? Or program the LHC like a VCR.

    Yes, I am a fan. Even if I sometimes hate it for being so damn smart compared to me, a normal programmer. But it made me so much better a programmer in all other languages, it’s crazy.

    It’s simply a completely different class of skill. And that is why one should learn Haskell. Fuck the “Oh, we’re now only coding in Haskell” attitude. When you really understand the ideas behind it, every language becomes Haskell. And you can write practically bug-free programs of...

    Aaah, what am I saying. <John Cleese>Oh it’s driving me mad... MAD!</John Cleese> *slams cleaver into the table*
    *head developer comes in*
    Head developer: Easy, Mungo, easy... Mungo... *clutches his head in agony* the war wound!... the wound... the wouuund...
    Manager: This is the end! The end! Aaargh!! *stabs himself with a steel lambda sculpture*

  • Re:Concurrency? (Score:4, Interesting)

    by radtea ( 464814 ) on Tuesday November 24, 2009 @09:38PM (#30221682)

    Well, pure functional languages are (potentially) good for concurrency in general. Because they have no mutable variables in the usual sense, it doesn't actually matter what order functions are evaluated in (other than the fact that callers cannot continue until their callees return).

    Maybe you can help me get past one of my mental stumbling blocks with Haskell, which seems like a really cool language, but which I clearly have no clue about, because I don't get a very fundamental thing. As I understand it, there are two fundamental claims about Haskell:

    1) it is a "pure functional" language, which is therefore entirely and completely and "purely" side-effect-free. I appreciate the immense potential value of this for things like program verification, and I'd love to learn more about it.

    2) there is a construct that is part of the Haskell language, called a "monad", that can have side effects.

    I'm a deeply pedantic guy, and I'm unable to reconcile these two claims, and it puts me off looking more deeply into the language every time I read about it because there's clearly something I don't get. It seems to me that either:

    a) Haskell is not actually purely functional: it is a purely functional core sub-language with extremely well controlled additional side-effect-producing parts

    b) Monads are not actually considered "part" of the Haskell language, in the same way that pre-standardization STL was not "part" of the C++ language.

    c) I'm completely missing something.

    Enlightenment would be greatly appreciated.

  • by Pseudonym ( 62607 ) on Tuesday November 24, 2009 @09:42PM (#30221712)

    EmptyDataDeclarations allows for data types without a constructor... but to be perfectly honest I haven't quite figured out what practical benefit they have :). I'm sure there is a reason, but I don't think it's as trivial or obvious as you make out.

    Consider the tag struct idiom in C++:


    #include <string>

    struct open_read_t {};        // Tag: carries no data, only a distinct type.
    struct open_read_write_t {};  // Tag: carries no data, only a distinct type.

    class File {
    public:
            File(const std::string& name, open_read_t);       // Open for reading.
            File(const std::string& name, open_read_write_t); // Open for reading and writing.
    };

    The tag structs are not used to carry data; their only purpose for existing is to disambiguate the two constructors.

    Similarly, in the STL, tag structs are used to mark iterator categories, so that when more than one algorithm implementation could work for a container, the most appropriate one is selected.

    This is essentially what empty data declarations are for in Haskell, except that by having no constructors, you can guarantee that they will never be instantiated. The most common use is in conjunction with phantom types [haskell.org].
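
    To make that concrete, here is a minimal sketch of the pattern (the names are mine, mirroring the C++ tags above):

    {-# LANGUAGE EmptyDataDecls #-}  -- standard in Haskell 2010; older GHCs need the pragma

    -- Empty data declarations: no constructors, so no values of these
    -- types can ever exist. They live purely at the type level.
    data ReadOnly
    data ReadWrite

    -- 'mode' is a phantom type parameter: it appears on the left-hand
    -- side but in no field, so it costs nothing at run time.
    newtype File mode = File FilePath

    openRead :: FilePath -> File ReadOnly
    openRead = File

    openReadWrite :: FilePath -> File ReadWrite
    openReadWrite = File

    -- Writing demands a ReadWrite handle; passing a 'File ReadOnly'
    -- here is rejected at compile time.
    writeTo :: File ReadWrite -> String -> IO ()
    writeTo (File path) = appendFile path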

  • by Cassini2 ( 956052 ) on Wednesday November 25, 2009 @12:52AM (#30222910)

    And once you have closures, you need garbage collection.

    You may have explained why Haskell will never work for certain HPC applications. Essentially, for really high CPU performance, static variables are essential. A static variable has the nice property of living at a fixed memory location, which enables all sorts of optimizations. Additionally, once it is shown that a memory location is fixed and not accessed by another function, further important optimizations become possible, including optimizing out instructions involving constants.

    Modern compilers can only optimize x = a + b * y when a and b are simple constants (often 1 or 0) and x and y are simple scalar variables (e.g., double-precision values). If x and y are arrays or classes, modern compilers won't unroll the loops, because doing so could take a very long time at compile time. However, if the loops were unrolled, the resulting program would execute much faster.

    This subtlety is very important in real-time control systems, where speed is paramount. The mathematicians generate an "optimized" control equation by writing y = A B C x D E, where A, B, C, D, and E are matrices and x and y are vectors. The PC-based simulators are very slow in computing y because of all the large matrix multiplications. However, if you expand the equation out into the individual multiplies, it turns out the compiler can make huge simplifications, which is why it pays to write out all of those matrix multiplies. These simplifications only happen when the actual multiplications and additions are expressed to the C compiler as simple statements, like double-precision multiplies of static, constant, and function-local variables. The C compiler doesn't know how to optimize loop-based matrix multiplies.

    These optimizations only work if key loops in the program are unrolled at compile time. If the language is so complex that the compiler cannot even eliminate all dynamic memory allocations, then it is impossible to unroll key loops. A limited language like FORTRAN with MPC/HPC extensions can be very fast relative to Haskell, because the compiler has much more freedom to optimize.

    Simply put: a compiler designed for a limited problem set (FORTRAN) can produce much faster code than a compiler that must handle any possible program in a fully general language (Haskell).

  • by j1m+5n0w ( 749199 ) on Wednesday November 25, 2009 @01:54AM (#30223174) Homepage Journal

    Let's see if I can explain this simply.

    The Haskell language, like any other, needed constructs like "read" and "write", but implementing them as ordinary functions would have broken the underlying assumptions of purity and lazy evaluation.

    Haskell happened to have monads. A monad is essentially a typeclass for containers that lets you combine containers of the same type in certain ways, without having to worry about what kind of container it is. Most (all?) of the containers in the standard library are instances of Monad.
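
    (A tiny illustration of combining values in the same monad, using the standard Maybe instance; the function names are mine:)

    -- (>>=) chains computations in one container type, short-circuiting
    -- on the first Nothing, without caring how Maybe works inside.
    halve :: Int -> Maybe Int
    halve n
      | even n    = Just (n `div` 2)
      | otherwise = Nothing

    quarter :: Int -> Maybe Int
    quarter n = halve n >>= halve
    -- quarter 20 == Just 5;  quarter 6 == Nothing (3 is odd)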

    The Haskell language designers came up with (or perhaps borrowed) an idea. They would create a new container type, called IO, and make it an instance of Monad. However, unlike other containers, it would not have any accessor functions. You can pass an object of type IO around in pure code all you want, but you can't ever examine the contents of the IO container from within "pure" code. The only thing you can do with it is combine it with other IO objects. Combining two IO objects is equivalent to evaluating the file operation (or what have you) inside the first IO object and passing its result to, and executing, whatever is inside the second IO object. The actions within an IO object, however, are free to invoke pure code if they like.

    Every Haskell program has a main function, which is an IO action. This lets you do any file IO your program needs, and main can also call out to pure functions. Pure functions cannot invoke IO actions. Most Haskell programmers try to keep the IO actions as simple as possible and rely on pure code for the bulk of the program.
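
    A minimal sketch of that split (the file name and helper function are hypothetical):

    -- All effects live in main, an IO action; the real work is a pure
    -- function that main merely calls.
    main :: IO ()
    main = do
      contents <- readFile "input.txt"   -- effectful: only possible in IO
      putStrLn (summarize contents)      -- pure result, printed from IO

    -- Pure: same input, same output, and it cannot perform IO itself.
    summarize :: String -> String
    summarize s = "line count: " ++ show (length (lines s))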

    As a concrete example, I wrote a ray tracer, which parsed a text file and generated an image. As I was writing it, I got to the part where I needed to write the file parser. I thought "oh no, this whole thing has to execute within the IO monad and it'll be a big mess". However, it was not so. After scratching my head a bit, I ended up writing a pure function that takes a simple text string and converts it to my internal representation of a scene, ready to be ray traced. Within main (within the IO monad), I would read the text file in with the lazy function "hGetContents", which returns a string containing the contents of the file. I would pass that string to the parser and then trace a grid of rays (one per pixel) against the parsed scene. The list of pixels with their calculated color values was returned to the IO monad, where I used OpenGL to plot them to the screen.

    The interesting bit about this is that hGetContents is lazy. In a strict (i.e. non-lazy) implementation, the whole string would be read at once. This is inefficient, and may cause problems for text files that won't fit in memory. Due to laziness, however, the string is passed into the parser without being fully evaluated. As the parser needs more data, the run-time system will cause hGetContents to read another block. So, here we have an example of a pure function that's indirectly triggering IO, and it's doing it all without violating the constraints of the type system.
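
    The shape of that pipeline, sketched (the placeholder "render" stands in for my actual parse-and-trace code):

    import System.IO

    main :: IO ()
    main = do
      h <- openFile "scene.txt" ReadMode
      s <- hGetContents h   -- lazy: returns at once, nothing is read yet
      putStr (render s)     -- as the pure code demands characters,
                            -- the runtime reads more of the file
      hClose h

    -- Placeholder pure transformation standing in for parse + trace.
    render :: String -> String
    render = unlines . map reverse . lines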
