Haskell 2010 Announced 173
paltemalte writes "Simon Marlow has posted an announcement of Haskell 2010, a new revision of the Haskell purely functional programming language. Good news for everyone interested in SMP and concurrency programming."
Is it just me ? (Score:4, Insightful)
I admit that a function with no side effects can ease making things thread safe, but are recursive-style functional programming languages really used that much in "SMP and concurrency programming" nowadays? I have written some code in that category, but I never envisioned that a functional programming language would suit the job. Am I the only one? ;-))
Thanks for your replies,
Re:Is it just me ? (Score:5, Insightful)
Functional languages are enjoying an enormous renaissance in the field of multithread, multicore and/or multiprocessor environments.
There are a few really major obstacles to doing multi-* well. The major theoretical one is Amdahl's Law, which puts some extreme limits on how much multi-* can help you. The major practical one is our current obsession with side-effect languages. We need major theoretical advancements to get past Amdahl's Law (if, in fact, it can be gotten past at all). Functional programming is a great way to get past side-effect-based programming, though.
In a proper functional language, there is no such thing as iteration. There can't be. The instant you say, "each time through the loop, increment the counter, and as soon as it hits this certain value stop," you have stopped programming in a functional style.
As an example of some really cool concurrent languages that emphasize recursion and functional programming, look at Clojure, Scala, Erlang and F#. All of these languages provide iteration and other side effect techniques if you really need them -- but all of these languages emphasize recursion.
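To make "recursion instead of iteration" concrete, here's a tiny hedged sketch in Haskell (countTo is my own toy example, not from TFA): the loop counter becomes a function argument, and "increment until done" becomes a recursive call.

-- "count up to a limit" with no mutable counter: the state is an argument
countTo :: Int -> [Int]
countTo limit = go 1
  where
    go n
      | n > limit = []
      | otherwise = n : go (n + 1)

main :: IO ()
main = print (countTo 5)  -- [1,2,3,4,5]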
Re:Is it just me ? (Score:5, Informative)
Re: (Score:2)
This is also known as the No Free Lunch Theorem and traces right back to Gödel, Turing, and the foundations of all mathematics. You are right, it certainly isn't going anywhere.
Re: (Score:3, Informative)
You misunderstand what he means by "getting past" Amdahl's Law. That wouldn't involve somehow changing the fact that speeding up only part of a program can only do so much to speed up the whole program -- it'd involve radically changing the proportion of algorithms which can be parallelized. For instance, if somebody were to come up with a constructive proof that P [wikipedia.org] = NC [wikipedia.org], that'd certainly do the trick (though most people's guess is that it's false).
Re: (Score:2)
Re: (Score:2)
You're correct when you say that if you parallelize 50% of your algorithm, you still have to deal with the other 50%. However, new theoretical discoveries are made all the time in algorithm design; new theoretical advances may result in an equivalent algorithm that's 90% parallelizable.
Amdahl's Law is a strong mathematical constraint on the maximum improvement in performance for a particular algorithm. In order to get past Amdahl's Law, use a different algorithm.
I freely confess to having used "get past A
Re: (Score:3, Insightful)
Amdahl's law is not like Moore's "law". Amdahl's law is an observation of mathematics. You can't ever get around the fact that if you increase the performance of 90% of the instructions in a program, you still have to deal with the other 10%. Even if you increase the performance of that 90% by 100x or something large, if the other 10% take a long time (like disk access), it's going to kill your performance.
It makes the implicit assumption that there is a '10%'. The whole point of some pure functional languages is that essentially all of it can be parallelized, 99% or more in some cases. There are programs where the sequential part is 0.001%, if that.
However, another comeback I've heard is that the "end goal" of parallelization is not necessarily to do the same thing in less time, but to do more in the same time. That is, you increase the "parallelizable" bit while keeping everything else constant. For example
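For concreteness, here's Amdahl's formula as a two-line Haskell function (a sketch; the sample figures are mine, not the parent's): the overall speedup when a fraction p of the work is accelerated by a factor s.

-- speedup = 1 / ((1 - p) + p / s)
amdahl :: Double -> Double -> Double
amdahl p s = 1 / ((1 - p) + p / s)

main :: IO ()
main = do
  print (amdahl 0.9 100)    -- ~9.2: a 100x boost to 90% of the work caps below 10x
  print (amdahl 0.999 100)  -- ~91: a 99.9% parallelizable program fares far better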
Re: (Score:2)
Re: (Score:2)
Obviously you're wrong because of the law of thermodynamics.
Yes, because this has something to do with thermal equilibrium, conservation of energy, perpetual motion machines, or cooling stuff to absolute zero.
In case you are confused, Wikipedia has a link [wikipedia.org] for that.
Re: (Score:2)
Re: (Score:2)
Actually, it's because functional programming is hard and not typically taught well in school. People learn imperative languages first, and tend to end up thinking that way. Lots of functional languages, such as OCaml, compile to very fast runtimes.
Why isn't FP more popular? (Score:2)
There are several reasons. As abigor stated, functional programming is not well understood by the majority of programmers. If you suppose that there is one Haskell programmer for every 1000 Java programmers, then that one Haskell programmer has to be one thousand times more productive in order to get noticed by the larger programming community. (I don't know if the distribution is that skewed, but I think you get the idea.) There are small compani
Re: (Score:2)
No, this isn't a reason. Open source software on Unix-clones is about the only place where C is still a general-purpose language. OCaml and Haskell are eminently competitive with Java and C# (and with C too, in these domains).
Re: (Score:2)
I guess not, since that's what these folks put together.
Seriously though, it's my experience that there aren't too many people out there writing much with concurrency and SMP in mind to start with. I'm not sure how they did it, but if they can bring parallelism to another method of programming, then fine by me. The more people get used to utilizing the multiple cores available to them, the better IMHO.
Re:Is it just me ? (Score:4, Informative)
I'm not sure what you mean by "recursive style", but the biggest commercial users of functional programming languages tend to be companies behind high-traffic websites that need to handle a lot of concurrent requests. Facebook developed their real-time chat application in Erlang, for instance, and Twitter uses Scala to handle its message queue.
Re:Is it just me ? (Score:4, Informative)
> I'm not sure what you mean by "recursive style",
Look at Quicksort in Haskell:
qsort [] = []
qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)
This is what I mean: no loops, just recursion. I used Prolog and ML to solve logic problems and it is pretty handy. Prolog is especially suited to solving logic problems ("logic programming").
Re: (Score:2)
Hehe, /. screwed the Haskell code; look here for the source:
http://www.haskell.org/haskellwiki/Introduction#Quicksort_in_Haskell [haskell.org] ;-)))
Re: (Score:2)
> I'm not sure what you mean by "recursive style",
Look at Quicksort in Haskell:
qsort [] = []
qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)
This is what I mean: no loops, just recursion.
Well, all functional programming languages use recursion, so "recursive style functional programming languages" is a bit redundant :)
Re:Is it just me ? (Score:5, Interesting)
Most procedural programming languages use (or at least support) recursion, too. The difference is that (1) a pure functional language cannot express certain algorithms with iteration instead of recursion, because of the lack of mutable state, and (2) languages that don't support tail call optimization (at least in the trivial case of self-recursive tail calls) generally don't support recursion efficiently, since recursion results in the accumulation of stack frames. A consequence of #1 and #2 is that pure functional languages almost without exception, as well as many less purely functional languages designed to support programming in a functional style (e.g., Scheme), feature tail call optimization and make tail recursion the most idiomatic way of expressing certain algorithms, whereas languages that aren't functional or designed with functional-style programming in mind often have an iterative approach as the most idiomatic -- and most efficient -- expression of the same algorithms.
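A hedged illustration of the difference (sumNaive and sumAcc are my own names): the first version does work after the recursive call returns, so it piles up stack frames; the second is a self-recursive tail call, which a TCO-capable compiler turns into a constant-space loop.

{-# LANGUAGE BangPatterns #-}

-- not a tail call: the (+) happens after the recursive call returns
sumNaive :: [Int] -> Int
sumNaive []     = 0
sumNaive (x:xs) = x + sumNaive xs

-- tail recursive with a strict accumulator: the recursive call is the
-- last thing the function does, so it runs in constant stack space
sumAcc :: [Int] -> Int
sumAcc = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs

main :: IO ()
main = print (sumAcc [1 .. 1000000])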
Re:Is it just me ? (Score:5, Informative)
I would add a couple other "must have" features in functional languages:
* The ability to pass a function as an argument to another function (i.e. higher order functions, like qsort in the C standard library).
* Support for a points-free programming style in which things can be passed from one function to another without naming them.
Some other features that perhaps aren't technically required but make functional programming a lot easier:
* Closures, local function definitions, garbage collection, partial application of functions.
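A minimal sketch of the first two "must haves" in Haskell (the function names are mine): sortBy takes a comparison function as an argument, and map (+ 1) partially applies (+).

import Data.List (sortBy)
import Data.Ord (comparing)

-- higher-order: the ordering is passed in as a function, like C's qsort
byLength :: [String] -> [String]
byLength = sortBy (comparing length)

-- partial application: (+ 1) is (+) with one argument already supplied
bumpAll :: [Int] -> [Int]
bumpAll = map (+ 1)

main :: IO ()
main = do
  print (byLength ["ccc", "a", "bb"])  -- ["a","bb","ccc"]
  print (bumpAll [1, 2, 3])            -- [2,3,4]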
Re: (Score:2)
Re: (Score:2)
First of all, for a language to be called "functional", you need first-class functions (duh). What this means is that a function is "just another value" - it can be passed around in function arguments and returned, assigned/bound to a variable, and so on. But this also generally means first-class "function literals" - anonymous functions, lambdas, and so on - and for that you need closures. Furthermore, once you get closures and function-typed return values, you have to have some form of automatic memory management.
But to what end? (Score:2)
things can be passed from one function to another without naming them.
Isn't that a rather point-less feature? ;-)
Re: (Score:2, Interesting)
You may have explained why Haskell will never work for certain HPC applications. Essentially, for really high-speed CPU performance, static variables are essential. A static variable has the nice property of being at an immutable memory location, which enables all sorts of optimizations. Additionally, once it is shown that a memory location is fixed, and not accessed by another function, then important additional optimizations are allowed, including
Lua (Score:2)
I would consider Lua a functional language (and I think its designers went out of their way to make it so), though it isn't exclusively functional; it is possible (and typical) to write imperative, stateful code in Lua as well. I haven't used it much, though, so it may be that there is some drawback to using a functional style that I haven't considered.
Re: (Score:3, Informative)
I guess I was thinking more of the fairly universal concept that you can use the result of one function as an argument to another without giving it a name. For instance:
result = f(g(x))
instead of
foo = g(x); result = f(foo)
Haskell applies a points-free style in more cases than we're used to seeing in other languages, which is sometimes useful but I agree it is just as often an annoyance to those who are trying to understand someone else's code. I was thinking of a rather looser definition of points
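In Haskell, the same pipeline can drop the argument entirely via function composition -- a minimal sketch, with f and g as made-up stand-ins:

f, g :: Int -> Int
f = (* 2)  -- placeholder definitions for illustration
g = (+ 3)

pointful :: Int -> Int
pointful x = f (g x)

pointsFree :: Int -> Int
pointsFree = f . g  -- (.) composes the two; the argument is never named

main :: IO ()
main = print (pointful 1 == pointsFree 1)  -- True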
Re: (Score:2)
Yes, I realise all that. I was just pointing out that all functional languages use "recursive style", as ls671 puts it.
Re: (Score:2)
Almost all programming languages support recursion, so recursion is recursively redundant! ;-))
Re: (Score:3, Insightful)
Except that
1. Slashdot thought you were writing HTML and ate half of your code.
2. Put it into a module, load it in ghci, then try something like "qsort [1..10000]". Watch the slowness.
3. The efficient implementation in Haskell uses iteration and is 100 times less readable than the equivalent code in C.
I really like Haskell, but this is not the best example of its strengths.
on haskell quicksort (Score:2)
I agree it's not always the best example, but there are four reasons why "qsort [1..10000]" is slow, and you can address three of them without writing something that's "100 times less readable than C".
The first problem is that naive quicksort on an already-sorted list/array is O(N^2). The fix is to grab an element out of the middle and use that as your pivot. That alone will make qsort far faster. (I tried it.*) The resulting code is, of course, a bit messier.
Secondly, the qsort function has no type
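For reference, a sketch of the middle-pivot fix described above (my reconstruction, not the parent's exact code); the three-way partition also keeps repeated elements from degrading performance:

qsort' :: Ord a => [a] -> [a]
qsort' [] = []
qsort' xs = qsort' smaller ++ equal ++ qsort' larger
  where
    pivot   = xs !! (length xs `div` 2)
    smaller = filter (< pivot) xs
    equal   = filter (== pivot) xs
    larger  = filter (> pivot) xs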
Re: (Score:2)
Right, quicksort is almost always the wrong answer if you're sorting linked lists, as you are here.
But consider a more complex algorithm, such as bottom-up polyphase merge sort. (WARNING: Untested code follows.)
sort = mergePhase . generateRuns
generateRuns [] = []
generateRuns (x:xs) = split xs x x (x:)
where
split [] _ _ _ k = k []
split (x:xs) minx maxx k
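Since the code above got cut off, here is one runnable reconstruction of the idea (mine, simplified from the parent's sketch): split the input into maximal ascending runs, then merge the runs pairwise until one remains.

sort' :: Ord a => [a] -> [a]
sort' = mergePhase . generateRuns

-- greedily collect maximal ascending runs
generateRuns :: Ord a => [a] -> [[a]]
generateRuns []     = []
generateRuns (x:xs) = go x [x] xs
  where
    go _    run []     = [reverse run]
    go prev run (y:ys)
      | y >= prev = go y (y : run) ys
      | otherwise = reverse run : go y [y] ys

-- merge runs pairwise until a single run remains
mergePhase :: Ord a => [[a]] -> [a]
mergePhase []    = []
mergePhase [run] = run
mergePhase runs  = mergePhase (mergePairs runs)
  where
    mergePairs (a:b:rest) = merge a b : mergePairs rest
    mergePairs rest       = rest

merge :: Ord a => [a] -> [a] -> [a]
merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys)
  | x <= y    = x : merge xs (y:ys)
  | otherwise = y : merge (x:xs) ys

main :: IO ()
main = print (sort' [3,1,4,1,5,9,2,6])  -- [1,1,2,3,4,5,6,9]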
Re: (Score:2)
Re: (Score:2)
Thank you for your explanations. I wasn't aware that the example I used was the worst case. I think I'd better go educate myself a little more on algorithms and data structures :D
Re: (Score:2)
The fourth reason is that C is doing in-place array swaps.
And the ability to do that is the whole reason for quicksort. If you do quicksort without doing it in-place, you're just being silly.
Re: (Score:2)
Quicksort with in-place arrays is O(n log n). Quicksort with lists is O(n log n). The difference between the two is a constant factor. Not everyone cares enough about constant-factor speed increases that they would abandon a programming language because of it, but some do, which was the point I was trying to make.
Using lists just means that quicksort isn't any better than any of the other n log n sorts, but it's still one of the easiest efficient sorting methods to explain. Mergesort is usually preferred in
Re: (Score:2)
Not everyone cares enough about constant factor speed increases that they would abandon a programming language because of it
First, it's a constant factor in speed, but not in size. Second, nobody said anybody was abandoning any languages because of it, only that you shouldn't go around showing it off as a "quicksort".
Re: (Score:2)
That was not the point. The point was that this implementation of quicksort is bad. It could be argued it is not even a true quicksort.
Re:Is it just me ? (Score:5, Funny)
Re: (Score:3, Funny)
See: Recursive
Re: (Score:2)
Facebook developed their real-time chat application in Erlang
Do you normally like to make your opponent's point for them? Facebook chat is horrible and buggy in my experience. Messages go missing. It freezes Firefox. It's just plain awful!
Re:Is it just me ? (Score:4, Informative)
Re: (Score:2)
And Scala would be the same. The Odersky book describes it as a hybrid, but the force of the language is functional in nature. I'm not intimate with Erlang, but I think most knowledgeable commentators would use almost identical words for Scala as you use for Erlang. It does allow mutables, but de-emphasizes them. It treats functions as first class objects. Recursion is the preferred approach to repetition. There may be some higher canonical definition that it violates, but it looks pretty functional to me.
Re: (Score:2)
Not really. In fact, I think it's much more useful to discuss the value of functional languages in terms of the benefits of specific features and differences of degree, rather than just a binary categorization.
Sure, in the same sense that a pit of mud can be considered a non-pure body of water. Likewise, one can meani
Re:Is it just me ? (Score:5, Informative)
Erlang and Scala are both languages designed to support programming in a functional style, and both are "recursive style" languages in that they optimize tail calls generally (Erlang) or at least in the most important case of self-recursive calls (Scala) and thus make tail-recursive (or, in Erlang's case, more general tail calling with unlimited depth) programming styles efficient.
Neither is a pure functional programming language, true.
Re: (Score:2)
Exactly.
Erlang actually owes more to Linda and Prolog than to languages that we usually think of as "functional" (like the Lisp/Scheme family and the ISWIM family). But still, Erlang is functional in the same way that Java is object-oriented and C++ is generic.
If you've ever programmed in a "true" object-oriented language, like Smalltalk or Eiffel, Java seems clunky. If you've ever used a programming language with "true" parametric polymorphism, like ML or Haskell, C++ seems clunky. (Actually, they seem
Re: (Score:2)
If by "recursive style functional programming languages" you mean (more or less) functional programming languages that support tail call optimization so that tail calls, including self-recursive calls, are efficient, then the answer is "yes".
Besides Haskell, Erlang, for instance, is such a language, and
Re: (Score:2)
Interestingly enough, there is talk of implementing “soft tail call” and “hard tail call” optimization in the JVM:
http://blogs.sun.com/jrose/entry/tail_calls_in_the_vm [sun.com]
Re: (Score:2)
Whether it works or not depends on what your task is and how you try to solve it. There are several ways to do concurrency in Haskell, and they're each appropriate in different instances. If you're just calling a lot of pure functions, then "par" and "seq" might be what you need. If you need shared, mutable state, then STM might be a good approach. If you want to do mes
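As a hedged taste of the "par"/"seq" route (assuming GHC with the parallel package; compile with -threaded and run with +RTS -N):

import Control.Parallel (par, pseq)

-- spark the left half in parallel, force the right half, then combine
parSum :: [Int] -> Int
parSum xs = left `par` (right `pseq` (left + right))
  where
    (as, bs) = splitAt (length xs `div` 2) xs
    left     = sum as
    right    = sum bs

main :: IO ()
main = print (parSum [1 .. 1000000])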
Re:Is it just me ? (Score:4, Interesting)
The idea is that you can split up the program into parallel tasks in a fully automated way. If you as a programmer even have to think about parallelizing, I’m sorry, but then your compiler is “doin’ it wrong” and your language is from the stone age. ^^
As a bonus, when you can completely rely on a function with the same input producing the same output, you can also throw caching in there algorithmically (where required, on-demand, whatever you wish).
But bla... that is all just the stuff on the surface. Like explaining the pointlessness of “metaprogramming” when there stops being a difference between data and code.
The most amazing thing about Haskell as it is today is that things which would require new language constructs and a big change in the compiler elsewhere are just a normal library in Haskell. It can look like a whole new language. But it’s just a library. And that is amazing! ...I just have no words for it...
Then, when you get to the GHC extensions, things that are as much at the forefront of computer science as the LHC is at the forefront of physics, with everybody else copying them years later... Sorry, but if you like elegance in programming,
The thing is that it’s crazy hard to learn. Which is not a fault in the language design, because it’s very elegant. It’s simply the fact that it is so far ahead of anything in your everyday language. You wouldn’t expect to sit in a spaceship and drive it like your car, would you? Or program the LHC like a VCR.
Yes, I am a fan. Even if I sometimes hate it for being so damn smart compared to me, the normal programmer. But I became so much better a programmer in all other languages, it’s crazy.
It’s simply a completely different class of skill. And that is why one should learn Haskell. Fuck the “Oh, we’re now only coding in Haskell” attitude. When you really understand the ideas behind it, every language becomes Haskell. And you can write practically bug-free programs of...
Aaah, what am I saying. <John Cleese>Oh it’s driving me mad... MAD!</John Cleese> *slams cleaver into the table*
*head developer comes in*
Head developer: Easy, Mungo, easy... Mungo... *clutches his head in agony* the war wound!... the wound... the wouuund...
Manager: This is the end! The end! Aaargh!! *stabs himself with a steel lambda sculpture*
Re: (Score:2)
This is not true in high-performance computing. If you don't want your CPU spinning its wheels on locks and cache misses, then you need to go quite low level -- and really think about the data structures you're using, and how to parallelize tasks with next to no interthread communication.
Re: (Score:2)
The idea is that you can split up the program into parallel tasks in a fully automated way. If you as a programmer even have to think about parallelizing, I’m sorry, but then your compiler is “doin’ it wrong” and your language is from the stone age.
This is not true in high-performance computing. If you don't want your CPU spinning its wheels on locks and cache misses, then you need to go quite low level -- and really think about the data structures you're using, and how to parallelize tasks with next to no interthread communication.
The point of functional programming is that you don't have to think about locks or how to manage contention on your data structures, because there is no such thing as mutable data. And if you're just letting the compiler do the parallelization, you don't have to think about interthread communication, either, because the compiler will automatically find the places where function calls can be parallelized with zero interthread communication. Of course, if you structure your code so that it's inher
Re: (Score:2)
If you are thinking of the functional effects of the code then it makes sense to consider it as instructions in an impe
Re: (Score:2)
Re: (Score:2)
Great, now you get a whole bunch of cache misses, and your CPU is sitting at 10%.
Re: (Score:2)
The point of functional programming is that you don't have to think about locks or how to manage contention on your data structures because there is no such thing as mutable data. Great, now you get a whole bunch of cache misses, and your CPU is sitting at 10%.
Actually, cache utilization by functional programs tends to be excellent. The compiler optimizes most of the duplication of data away in tight loops, so where it matters for performance, data actually is modified in place.
Really, you should learn something about functional programming before dismissing it.
Re: (Score:2)
My argument is that you can put these thoughts into general transformational formulas, and put them in the compiler. :)
I might be wrong, but at least I will try and find out / prove it. :)
Amazing day... (Score:5, Funny)
Re: (Score:2)
Yeah, and you may thank them when attached to a heart-lung machine that was NOT written in C/C++ instead, because B. Curry’s gonna kick yo ass with that attitude! ;P
Re: (Score:2)
I do know of a vehicle braking system that was coded in C, but the C was generated by a Haskell code generator. I wouldn't be at all surprised if that sort of thing was quite common.
Reminds me of Life of Brian... (Score:3, Funny)
"The current committee is still discussing how to go about finding a new committee"
A fitting name for this Haskell programming language might have been "Python", had it not already been taken.
Re:Reminds me of Life of Brian... (Score:4, Funny)
A fitting name for this Haskell programming language might have been "Python", had it not already been taken.
Actually, naming it "Python" would be doubly fitting... in fact, I think we should rename *all* programming languages "Python", just so it doesn't get confusing :)
Re:Reminds me of Life of Brian... (Score:5, Funny)
Re: (Score:3, Funny)
Well that's one way to ensure that Python retains lambdas, map and filter.
But I'm lazy... (Score:5, Funny)
So I don't think I'll look at the article until I actually need to program in Haskell....
Re: (Score:2, Redundant)
Re:But I'm lazy... (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
But learning Haskell is only in a small part about writing Haskell. It’s mainly about getting to the next level in programming skill. And becoming much better in every language.
Besides: Want to earn some big money for doing actual “mission-critical” software that lives depend on, and that really means something? There’s no way to do serious business like that without Haskell and not go crazy. ^^
So there you go. If you want to continue doing basically a front-end to a database on co
Oh, be still my heart.... (Score:3, Funny)
Haskell! Haskell! Every geek's favorite mental masturbation toy!
meh (Score:2)
screw all that syntactic sugar, lambda expressions ftw!
NoNPlusKPatterns (Score:2)
NoNPlusKPatterns... Seems that n+k patterns are finally banned.
http://www.mail-archive.com/haskell@haskell.org/msg01261.html [mail-archive.com]
Ward, don't be too hard on the Beaver (Score:2)
The revision to which I look forward is the hack that causes it to make very impolite comments to Mrs. Cleaver.
Haskell cannot do in-place qsort (Score:2)
Can it? I think not. It's incredibly cool to write qsort in just two lines of code, but it's incredibly slow.
Re: (Score:2)
Rather than type what I wrote in a similar thread again, I'll just link to it here [slashdot.org].
Summary: qsort [1..10000] is slow not so much because it isn't an in-place sort, but because naive quicksort on an already-sorted list is O(N^2) rather than O(N log N). If you use an element from the middle of the list as the pivot instead of the first element, it performs much better.
As the other reply already stated, you can do in-place quicksort in Haskell by using the ST monad, if that's really what you want.
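For the curious, a minimal sketch of what an in-place quicksort in the ST monad can look like (my illustrative code -- an unboxed array and a Lomuto partition; the mutation is invisible from outside the pure function):

import Control.Monad (when)
import Data.Array.ST (getBounds, readArray, runSTUArray, thaw, writeArray)
import Data.Array.Unboxed (UArray, elems, listArray)

qsortST :: UArray Int Int -> UArray Int Int
qsortST input = runSTUArray $ do
    arr <- thaw input            -- mutable copy, private to this computation
    (lo, hi) <- getBounds arr
    go arr lo hi
    return arr
  where
    go arr lo hi = when (lo < hi) $ do
        p <- partition arr lo hi
        go arr lo (p - 1)
        go arr (p + 1) hi
    partition arr lo hi = do     -- Lomuto partition, pivot at hi
        pivot <- readArray arr hi
        let loop i j
              | j >= hi   = return i
              | otherwise = do
                  x <- readArray arr j
                  if x <= pivot
                    then swap arr i j >> loop (i + 1) (j + 1)
                    else loop i (j + 1)
        i <- loop lo lo
        swap arr i hi
        return i
    swap arr i j = do
        a <- readArray arr i
        b <- readArray arr j
        writeArray arr i b
        writeArray arr j a

main :: IO ()
main = print (elems (qsortST (listArray (0, 9) [9, 8 .. 0])))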
Re: (Score:2)
Re: (Score:2)
Moreover, the DoAndIfThenElse extension merely extends the "do" syntax with if/then/else clauses; it is syntactic sugar. Empty data declarations are essentially union types unioned over no labels and therefore can have no values except for the bottom value.
References:
http://hackage.haskell.org/trac/haskell-prime/wiki/DoAndIfThenElse [haskell.org]
http://hackage.haskell.org/trac/haskell-prime/wiki/EmptyDataDecls [haskell.org]
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:3, Interesting)
EmptyDataDeclarations allows for data types without a constructor... but to be perfectly honest I haven't quite figured out what practical benefit they have :). I'm sure there is a reason, but I don't think it's as trivial or obvious as you make out.
Consider the tag struct idiom in C++:
struct open_read_t {};
struct open_read_write_t {};
class File {
public:
    // Open for reading.
    File(const string& name, open_read_t);
    // Open for writing.
    File(const string& name, open_read_write_t);
};
The tag structs are not used to carry data; their only purpose for existing is to disambiguate the two constructors.
Similarly, in the STL, tag structs are also used to mark iterator categories, to make sure that when you
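The Haskell counterpart of the tag-struct trick is a phantom type, and that's one place EmptyDataDecls earns its keep -- a hedged sketch with made-up names:

{-# LANGUAGE EmptyDataDecls #-}

-- no constructors: ReadOnly and ReadWrite exist only at the type level
data ReadOnly
data ReadWrite

newtype TaggedHandle mode = TaggedHandle FilePath

openRead :: FilePath -> TaggedHandle ReadOnly
openRead = TaggedHandle

openReadWrite :: FilePath -> TaggedHandle ReadWrite
openReadWrite = TaggedHandle

write :: TaggedHandle ReadWrite -> String -> IO ()
write (TaggedHandle path) s = appendFile path s

main :: IO ()
main = write (openReadWrite "notes.txt") "hello\n"
-- write (openRead "notes.txt") "oops"  -- rejected by the type checker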
Re:Concurrency? (Score:5, Funny)
Re: (Score:2)
Re:Concurrency? (Score:5, Insightful)
Well, pure functional languages are (potentially) good for concurrency in general. Because they have no mutable variables in the usual sense, it doesn't actually matter what order functions are evaluated in (other than the fact that callers cannot continue until their callees return). You can't do this in C or Java because it might be necessary for one function to see a variable modified by another. In a functional language, any dependencies are explicit call-return relationships (well, ish, they typically do have some non-functional constructs otherwise it's hard to do IO!), so in principle it's quite easy to split up a program's work across multiple CPUs (or machines) and not worry about whether they need to talk to each other.
Haskell, along with the ML family of languages, also has an amazing type checker that is waaay more sophisticated than most other languages. I think most people who've played around with these languages do start to feel that often "If it compiles, it's bug-free". Obviously that's not something you can rely on, since the compiler can't know what you meant to do. But it is true that the type system is *way* more useful at detecting bugs at compile-time than for any conventional language I've used.
Re:Concurrency? (Score:4, Interesting)
Well, pure functional languages are (potentially) good for concurrency in general. Because they have no mutable variables in the usual sense, it doesn't actually matter what order functions are evaluated in (other than the fact that callers cannot continue until their callees return).
Maybe you can help me get past one of my mental stumbling-blocks with Haskell, which seems like a really cool language, but which I clearly have no clue about because I don't get a very fundamental thing. As I understand it there are two fundamental claims about Haskell:
1) it is a "pure functional" language, which is therefore entirely and completely and "purely" side-effect-free. I appreciate the immense potential value of this for things like program verification, and I'd love to learn more about it.
2) there is a Haskell construct that is part of the Haskell language called a "monad" that can have side-effects.
I'm a deeply pedantic guy, and I'm unable to reconcile these two claims, and it puts me off looking more deeply into the language every time I read about it because there's clearly something I don't get. It seems to me that either:
a) Haskell is not actually purely functional: it is a purely functional core sub-language with extremely well controlled additional side-effect-producing parts
b) Monads are not actually considered "part" of the Haskell language, in the same way that pre-standardization STL was not "part" of the C++ language.
c) I'm completely missing something.
Enlightenment would be greatly appreciated.
Re: (Score:2)
Heh, to be honest I'm mostly in the same boat as you - how can Haskell do side-effect stuff whilst still being purely functional? And what on earth is a Monad!?
A programmer friend of mine used to say that using monads to emulate state would be a bit like the following: imagine you want to emulate state but you're programming functionally. You want variables but you can't have them. Instead you have a big tuple of values representing the contents of all the "variables" and you pass this tuple to every fun
purity, side-effects, and monads explained (Score:5, Interesting)
Let's see if I can explain this simply.
The Haskell language, like any other language, needed constructs like "read" and "write", but to implement them as simple functions would have broken the underlying assumptions of purity and lazy evaluation.
Haskell happened to have monads. A monad is essentially a typeclass for containers that allows you to combine containers of the same type in certain ways, without having to worry about what kind of container it is. Most (all?) of the containers in the standard library are instances of Monad.
The Haskell language designers came up with (or perhaps borrowed) an idea. They would create a new container type, called IO, and make it an instance of Monad. However, unlike other containers, it would not have any accessor functions. You can pass an object of type IO around in pure code all you want, but you can't ever examine the contents of the IO container from within "pure" code. The only thing you can do with it is combine it with other IO objects. Combining two IO objects is equivalent to evaluating the file operation (or what have you) inside one IO object and passing its result to, and executing, whatever is inside the second IO object. The actions within an IO object, however, are free to invoke pure code if they like.
Every Haskell program has a main function, which is an IO action. This allows you to do any necessary file IO your program needs, and it can also call out to pure functions. Pure functions cannot invoke IO actions. Most Haskell programmers try to keep the IO actions as simple as possible and rely on pure code for the bulk of the program.
As a concrete example, I wrote a ray tracer, which parsed a text file and generated an image. As I was writing it, I got to the part where I needed to write the file parser. I thought "oh, no, this whole thing has to execute within the IO monad and it'll be a big mess". However, it was not so. After scratching my head a bit, I ended up writing a pure function that takes a simple text string and converts it to my internal representation of a scene, ready to be ray traced. Within main (within the IO monad), I would read the text file in with a lazy function "hGetContents", which returns a string which is the contents of the file. I would pass that string to the parser, and then trace a grid of rays (one per pixel) against the parsed scene. The list of pixels with their calculated color values was returned to the IO monad, where I used OpenGL to plot them to the screen.
The interesting bit about this is that hGetContents is lazy. In a strict (i.e. non-lazy) implementation, the whole string would be read at once. This is inefficient, and may cause problems for text files that won't fit in memory. Due to laziness, however, the string is passed into the parser without being fully evaluated. As the parser needs more data, the run-time system will cause hGetContents to read another block. So, here we have an example of a pure function that's indirectly triggering IO, and it's doing it all without violating the constraints of the type system.
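A condensed sketch of that shape (the file name and the pure function are illustrative stand-ins, not GP's ray tracer): all the logic lives in a pure function, and the lazy read in main feeds it on demand.

-- the pure function does the real work; main is a thin IO shell
countWords :: String -> Int
countWords = length . words

main :: IO ()
main = do
  contents <- readFile "scene.txt"  -- lazy: blocks are read as demanded
  print (countWords contents)       -- forcing the count drives the reads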
hGetContents is only *technically* typesafe (Score:3, Informative)
Actually, hGetContents cheats by using unsafeInterleaveIO. It's a cool hack, but lazy IO is an inherently risky business. For instance, if your program later wrote back changes to your scene file, the result is undefined. There's no guarantee of when the progra
Re: (Score:2)
Re:Monads (was Concurrency?) (Score:2)
I think that experienced Haskellers often forget to explain that there is a portion of the program that is not strictly functional. The thing is that the programmer is not given access to it. Instead, the programmer is asked to pass around descriptions of the I/O actions to be taken. A monad is a data type that (amongst a vast number of other things) can be used to structure these descriptions so that we can get the order of execution right. (Notice that I didn't say 'evaluation'.)
The next part is a bit s
Re: (Score:2)
A monad is not a specific data structure (like an array) or a language feature (like a loop or conditional). It is not something that is put together by the compiler (like a compiled function) and it is not magic, either.
"Being a monad" is a mathematical property, like "being commutative" or "being symmetric". It has to do with values following a particular pattern, having operations defined on them that interact in a particular way. (These are called "monad laws", and they're the same sort of thing as t
Re: (Score:2)
I come from an imperative background myself. Let's see if my explanation of I/O and state handling in pure functional languages, based on imperative metaphors, makes more sense for you. Please let me know if you find the following useful; I might expand it into my own personal "monad tutorial" (this seems to be a tradition when learning Haskell).
- Monads are program structures that represent processes (and one particular kind of process for each kind of monad), but in a different way than functions represent p
Re: (Score:2)
Here is a philosophical view of the whole thing. You can think of effectful computations as functions from a World to a World where a World contains everything that we consider mutable (including RAM, the disk, the internet and so on). For example, putStrLn takes a World and a string and produces a World which is exactly like the old one except that the string has been written to the standard output.
Of course, the two Worlds - the one in which the string hasn't been output yet and the one in which it has
Re: (Score:2)
Monads in haskell are pure. The IO monad lets you paste together IO values in a way that they remain opaque.
After staring at this explanation for fifteen minutes I feel deep sympathy for the people on /. who try to understand my comments on physics, because I imagine they make as little sense to them as this does to me :-D
I think I kind of understand what you're saying: a monad is created on-the-fly by the (non-Haskell) runtime as an immutable object.
I'm still tempted to quibble at the difference between "
Re: (Score:2)
I think an easier way to think of this is to think about how another functional language, Concurrent Clean, does side effects. It has an object "world" (and, with typing restrictions, ensures that you can only ever have one copy of "world"). If you want to do a side effect like write to a file, you need to give it (the only copy of) "world" and it will hand you back a changed "world" to reflect that it did something magical. I.e., all functions with side-effects require this passing of "world" around.
Well
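A toy model of that world-passing idea (the mental model only -- not how GHC actually implements IO; here the "world" is just a list of output lines):

newtype World = World [String]

-- an "effect" is a pure function from World to World
putLine :: String -> World -> World
putLine s (World out) = World (out ++ [s])

-- sequencing effects = threading the world through each one
program :: World -> World
program = putLine "second" . putLine "first"

main :: IO ()
main = let World out = program (World []) in mapM_ putStrLn out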
Re: (Score:2)
Haskell, along with the ML family of languages, also has an amazing type checker that is waaay more sophisticated than most other languages. I think most people who've played around with these languages do start to feel that often "If it compiles, it's bug-free". Obviously that's not something you can rely on, since the compiler can't know what you meant to do. But it is true that the type system is *way* more useful at detecting bugs at compile-time than for any conventional language I've used.
Actually, I dare say that Haskell's type system is in fact a much bigger deal than its purity or laziness. Type classes are an awesome generalization of OO interfaces that let one define virtually arbitrary operators in a generic way, and then define generic algorithms using those operators, in an entirely type-safe way (and not using the C++ template model of "macros with a quick sanity check at declaration point and horrible error messages if anything goes wrong later"). They can be used to fully emulate c
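For anyone who hasn't seen them, a minimal sketch of the interface-like flavor of type classes (the names are mine):

class Shape a where
  area :: a -> Double

newtype Circle = Circle Double            -- radius
data Rectangle = Rectangle Double Double  -- width, height

instance Shape Circle where
  area (Circle r) = pi * r * r

instance Shape Rectangle where
  area (Rectangle w h) = w * h

-- a generic algorithm written purely against the class interface
-- (the list is homogeneous; mixing shape types would need more machinery)
total :: Shape a => [a] -> Double
total = sum . map area

main :: IO ()
main = print (total [Rectangle 2 3, Rectangle 1 1])  -- 7.0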
Re: (Score:2)
they have no mutable variables in the usual sense ... You can't do this in C or Java because it might be necessary for one function to see a variable modified by another.
I think the key word here is "might".
Nothing prevents a C compiler from figuring out "int foo(int a) { return a + 2; }" is pure. In fact gcc can do this to some extent; the relevant compiler flag (enabled by default w/optimization) is "-fipa-pure-const".
gcc also lets you specify the attribute 'const' to declare that a function is pure (in the sense that we're using it here).
Sure, it's coarse, and an afterthought, but it's also flexible.
Re: (Score:2)
That's cool, I wasn't aware of that. Seems like it might be useful for scientific code. I know Fortran includes the ability to declare pure functions and (I think) verify their purity, which sounds useful.
Though I imagine the problem for larger computations could be that you're limited to writing functional-style code, at which point you might be better off using a language designed for it? What optimizations does gcc actually do on pure functions, having identified them? I think it would be very intere
Re: (Score:2)
Yeah the revisions are really tiny (well except for FFI, which also doesn't relate to concurrency). I think the Slashdot poster was trying to say that Haskell in general is nice for concurrency, not these revisions specifically.
With the exception of FFI, these revisions are dreadfully boring. It would be like if a new C standard came out that allowed you to write "floating" instead of just "float". That's about on par with the magnitude of the changes Haskell 2010 brings in :P
Re: (Score:2)