This discussion has been archived. No new comments can be posted.

Haskell 2010 Announced

  • Is it just me ? (Score:4, Insightful)

    by ls671 (1122017) * on Tuesday November 24, 2009 @06:05PM (#30219458) Homepage

    I admit that a function with no side effects can ease making things thread safe, but are recursive style functional programming languages really used that much in "SMP and concurrency programming" nowadays? I wrote some code in that category, but I never envisioned a functional programming language would suit the job. Am I the only one? ;-))

    Thanks for your replies,

    • Re:Is it just me ? (Score:5, Insightful)

      by rjh (40933) <rjh@sixdemonbag.org> on Tuesday November 24, 2009 @06:25PM (#30219722)

      Functional languages are enjoying an enormous renaissance in the field of multithread, multicore and/or multiprocessor environments.

      There are a few really major obstacles to doing multi-* well. The major theoretical one is Amdahl's Law, which puts some extreme limits on how much multi-* can help you. The major practical one is our current obsession with side-effect languages. We need major theoretical advancements to get past Amdahl's Law (if, in fact, it can be gotten past at all). Functional programming is a great way to get past side-effect-based programming, though.

      In a proper functional language, there is no such thing as iteration. There can't be. The instant you say, "each time through the loop, increment the counter, and as soon as it hits this certain value stop," you have stopped programming in a functional style.
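      A minimal Haskell sketch of the contrast (the function names are mine): the first definition is the direct recursive reading of "sum the numbers from 1 to n"; the second shows how the imperative loop's mutable counter reappears as an accumulator argument instead.

      ```haskell
      -- Direct recursion: no loop, no mutable counter (assumes n >= 0).
      sumTo :: Int -> Int
      sumTo 0 = 0
      sumTo n = n + sumTo (n - 1)

      -- The "loop variable" becomes an accumulator argument: each
      -- recursive call plays the role of one trip through the loop.
      sumTo' :: Int -> Int
      sumTo' n = go 0 n
        where
          go acc 0 = acc
          go acc k = go (acc + k) (k - 1)
      ```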

      As an example of some really cool concurrent languages that emphasize recursion and functional programming, look at Clojure, Scala, Erlang and F#. All of these languages provide iteration and other side effect techniques if you really need them -- but all of these languages emphasize recursion.

      • Re:Is it just me ? (Score:5, Informative)

        by EMG at MU (1194965) on Tuesday November 24, 2009 @07:19PM (#30220422)
        Amdahl's law is not like Moore's "law". Amdahl's law is an observation of mathematics. You can't ever get around the fact that if you increase the performance of 90% of the instructions in a program, you still have to deal with the other 10%. Even if you increase the performance of 90% of the instructions by 100x or something large, if the other 10% take a long time (like disk access) it's going to kill your performance.
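        For concreteness, Amdahl's Law says that if a fraction p of the work is sped up by a factor s, the overall speedup is 1 / ((1 - p) + p / s). A tiny Haskell sketch (the function name is mine):

        ```haskell
        -- Overall speedup when a fraction p of the work runs s times faster.
        amdahl :: Double -> Double -> Double
        amdahl p s = 1 / ((1 - p) + p / s)
        ```

        With p = 0.9 and s = 100, the overall speedup is only about 9.2x; even with s unbounded it can never exceed 1 / (1 - p) = 10x.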
        • This is also known as the No Free Lunch Theorem and traces right back to Gödel, Turing, and the foundations of all mathematics. You are right, it certainly isn't going anywhere.

        • Re: (Score:3, Informative)

          by jensend (71114)

          You misunderstand what he means by "getting past" Amdahl's Law. That wouldn't involve somehow changing the fact that speeding up only part of a program can only do so much to speed up the whole program; it'd involve radically changing the proportion of various algorithms which can be parallelized. For instance, if somebody were to come up with a constructive proof that P [wikipedia.org]=NC [wikipedia.org], that'd certainly do the trick (though most people's guess is that that's false).

          • by rjh (40933)
            Right -- I'm also including such things as discovering new equivalent algorithms that offer better parallelization fractions, too. Theory is great: it offers the potential for both revolutionary (P=NC) and evolutionary ("hey, I wonder if...") improvements.
        • by rjh (40933)

          You're correct when you say that if you parallelize 50% of your algorithm, you still have to deal with the other 50%. However, new theoretical discoveries are made all the time in algorithm design; new theoretical advances may result in an equivalent algorithm that's 90% parallelizable.

          Amdahl's Law is a strong mathematical constraint on the maximum improvement in performance for a particular algorithm. In order to get past Amdahl's Law, use a different algorithm.

          I freely confess to having used "get past A

        • Re: (Score:3, Insightful)

          by bertok (226922)

          Amdahl's law is not like Moore's "law". Amdahl's law is an observation of mathematics. You can't ever get around the fact that if you increase the performance of 90% of the instructions in a program, you still have to deal with the other 10%. Even if you increase the performance of 90% of the instructions by 100x or something large, if the other 10% take a long time (like disk access) it's going to kill your performance.

          It makes the implicit assumption that there is a '10%'. The whole point of some pure functional languages is that essentially all of it can be parallelized, 99% or more in some cases. There are programs where the sequential part is 0.001%, if that.

          However, another comeback I've heard is that the "end goal" of parallelization is not necessarily to do the same thing in less time, but to do more in the same time. That is, you increase the "parallelizable" bit while keeping everything else constant. For example

          • by Belial6 (794905)
            Obviously you're wrong because of the law of thermodynamics.
            • by bondsbw (888959)

              Obviously you're wrong because of the law of thermodynamics.

              Yes, because this has something to do with thermal equilibrium, conservation of energy, perpetual motion machines, or cooling stuff to absolute zero.

              In case you are confused, Wikipedia has a link [wikipedia.org] for that.

              • by Belial6 (794905)
                His wrongness was obviously powered by the massive wind generated by my sarcasm flying over your head. That's what the "Whoosh" sound you heard was.
    • by lorenlal (164133)

      I guess not, since that's what these folks put together.

      Seriously though, it's my experience that there aren't too many people out there writing much with concurrency and SMP in mind to start with. I'm not sure how they did it, but if they can bring parallelism to another method of programming, then fine by me. The more people get used to utilizing the multiple cores available to them, the better IMHO.

    • Re:Is it just me ? (Score:4, Informative)

      by arevos (659374) on Tuesday November 24, 2009 @06:34PM (#30219832) Homepage

      I'm not sure what you mean by "recursive style", but the biggest commercial users of functional programming languages tend to be companies behind high-traffic websites that need to handle a lot of concurrent requests. Facebook developed their real-time chat application in Erlang, for instance, and Twitter uses Scala to handle its message queue.

      • Re:Is it just me ? (Score:4, Informative)

        by ls671 (1122017) * on Tuesday November 24, 2009 @06:54PM (#30220086) Homepage

        > I'm not sure what you mean by "recursive style",

        Look at Quicksort in Haskell:

        qsort [] = []
        qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)

        This is what I mean, no loops, recursion. I used Prolog and ML to solve logic problems and it is pretty handy. Prolog is especially suited for solving logic problems ("logic programming").

        • by ls671 (1122017) *

          Hehe /. screwed the Haskell code, look here for the source:

          http://www.haskell.org/haskellwiki/Introduction#Quicksort_in_Haskell [haskell.org] ;-)))

        • by arevos (659374)

          > I'm not sure what you mean by "recursive style",

          Look at Quicksort in Haskell:

          qsort [] = []
          qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)

          This is what I mean, no loops, recursion.

          Well, all functional programming languages use recursion, so "recursive style functional programming languages" is a bit redundant :)

          • Re:Is it just me ? (Score:5, Interesting)

            by DragonWriter (970822) on Tuesday November 24, 2009 @07:16PM (#30220390)

            Well, all functional programming languages use recursion

            Most procedural programming languages use (or at least support) recursion, too. The difference is that (1) a pure functional language cannot express certain algorithms with iteration instead of recursion, because of the lack of mutable state, and (2) languages that don't support tail call optimization (at least in the trivial case of self-recursive tail calls) generally don't support recursion efficiently, since recursion results in the accumulation of stack frames. A consequence of #1 and #2 is that pure functional languages almost without exception, as well as many less purely functional languages designed to support programming in a functional style (e.g., Scheme), feature tail call optimization and have tail-recursive constructs as the most idiomatic way of expressing certain algorithms. Languages that aren't functional, or weren't designed with functional-style programming in mind, often have an iterative approach as the most idiomatic (and most efficient) expression of the same algorithms.
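            A small Haskell illustration of the difference (the names are mine): the first length function must keep a stack frame alive for every pending (1 +); the second is tail-recursive, so a compiler with tail call optimization can run it in constant stack space.

            ```haskell
            -- Not tail-recursive: the (1 +) happens *after* the recursive call
            -- returns, so every list element costs a pending stack frame.
            len :: [a] -> Int
            len []     = 0
            len (_:xs) = 1 + len xs

            -- Tail-recursive: the call to 'go' is the last thing done, so the
            -- frame can be reused; the "mutable counter" is the 'acc' argument.
            lenTC :: [a] -> Int
            lenTC = go 0
              where
                go acc []     = acc
                go acc (_:xs) = go (acc + 1) xs
            ```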

            • Re:Is it just me ? (Score:5, Informative)

              by j1m+5n0w (749199) on Tuesday November 24, 2009 @07:42PM (#30220698) Homepage Journal

              I would add a couple other "must have" features in functional languages:

              * The ability to pass a function as an argument to another function (i.e. higher order functions, like qsort in the C standard library).

              * Support for a points-free programming style in which things can be passed from one function to another without naming them.

              Some other features that perhaps aren't technically required but make functional programming a lot easier:

              * Closures, local function definitions, garbage collection, partial application of functions.
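              A few of those features in one small Haskell sketch (the definitions are mine, for illustration):

              ```haskell
              -- 'map' is a higher-order function; (* factor) is a closure
              -- capturing 'factor'; leaving off the list argument entirely is
              -- partial application, which gives the points-free style.
              scaleAll :: Int -> [Int] -> [Int]
              scaleAll factor = map (* factor)

              incrementAll :: [Int] -> [Int]
              incrementAll = map (+ 1)   -- the list argument is never named
              ```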

              • I would add a couple other "must have" features in functional languages:

                * The ability to pass a function as an argument to another function (i.e. higher order functions, like qsort in the C standard library).

                Yeah, I'd agree that first-class functions are a requirement.

                * Support for a points-free programming style in which things can be passed from one function to another without naming them.

                I don't think this is a requirement for a language to be a functional programming language, though its certainly nice t

              • First of all, for a language to be called "functional", you need first-class functions (duh). What this means is that a function is "just another value" - it can be passed around in function arguments and returned, assigned/bound to a variable, and so on. But this also generally means first-class "function literals" - anonymous functions, lambdas, and so on - and for that you need closures. Furthermore, once you get closures and function-typed return values, you have to have some form of automatic memory ma

              • things can be passed from one function to another without naming them.

                Isn't that a rather point-less feature? ;-)

            • by arevos (659374)

              Yes, I realise all that. I was just pointing out that all functional languages use "recursive style", as ls671 puts it.

          • by ls671 (1122017) *

            Almost all programming languages support recursion so recursion is recursively redundant ! ;-))

        • Re: (Score:3, Insightful)

          by harry666t (1062422)

          Except that
          1. Slashdot thought you were writing HTML and ate half of your code
          2. Put it into a module and load in ghci, then try something like "qsort [1..10000]". Watch the slowness.
          3. The efficient implementation in Haskell uses iteration and is 100 times less readable than equivalent code in C.

          I really like Haskell, but this is not the best example of its strengths.

          • I agree it's not always the best example, but there are four reasons why "qsort [1..10000]" is slow, and you can address three of them without writing something that's "100 times less readable than C".

            The first problem is that naive quicksort on an already-sorted list/array is O(N^2). The fix is to grab an element out of the middle and use that as your pivot. That alone will make qsort far faster. (I tried it.*) The resulting code is, of course, a bit messier.
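            A sketch of that middle-pivot variant (still a list sort, not in-place; note that `length` and `!!` add an extra traversal per call):

            ```haskell
            qsortMid :: Ord a => [a] -> [a]
            qsortMid [] = []
            qsortMid xs = qsortMid smaller ++ equal ++ qsortMid larger
              where
                pivot   = xs !! (length xs `div` 2)   -- middle element as pivot
                smaller = filter (< pivot) xs
                equal   = filter (== pivot) xs        -- non-empty, so we terminate
                larger  = filter (> pivot) xs
            ```

            On already-sorted input like [1..10000] this behaves like the O(N log N) average case instead of the O(N^2) worst case of the head-pivot version.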

            Secondly, the qsort function has no type

            • by Pseudonym (62607)

              Right, quicksort is almost always the wrong answer if you're sorting linked lists, as you are here.

              But consider a more complex algorithm, such as bottom-up polyphase merge sort. (WARNING: Untested code follows.)


              sort = mergePhase . generateRuns

              generateRuns [] = []
              generateRuns (x:xs) = split xs x x (x:)
                where
                  split [] _ _ _ k = k []
                  split (x:xs) minx maxx k

            Thank you for your explanations. I wasn't aware that the example I used was the worst case. I think I'd better go educate myself a little more on algorithms and data structures :D

            • by Goaway (82658)

              The fourth reason is that C is doing in-place array swaps.

              And the ability to do that is the whole reason for quicksort. If you do quicksort without doing it in-place, you're just being silly.

              • by j1m+5n0w (749199)

                Quicksort with in-place arrays is O(n log n). Quicksort with lists is O(n log n). The difference between the two is a constant factor. Not everyone cares enough about constant-factor speed increases that they would abandon a programming language because of it, but some do, which was the point I was trying to make.

                Using lists just means that quicksort isn't any better than any of the other n log n sorts, but it's still one of the easiest efficient sorting methods to explain. Mergesort is usually preferred in
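                A bottom-up merge sort sketch, O(N log N) worst case on lists and close in spirit to what Data.List.sort does (this particular code is mine and untuned):

                ```haskell
                msort :: Ord a => [a] -> [a]
                msort = mergeAll . map (: [])          -- start from singleton runs
                  where
                    mergeAll []   = []
                    mergeAll [xs] = xs
                    mergeAll xss  = mergeAll (mergePairs xss)

                    mergePairs (a:b:rest) = merge a b : mergePairs rest
                    mergePairs xss        = xss        -- zero or one run left

                    merge [] ys = ys
                    merge xs [] = xs
                    merge (x:xs) (y:ys)
                      | x <= y    = x : merge xs (y:ys)
                      | otherwise = y : merge (x:xs) ys
                ```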

                • by Goaway (82658)

                  Not everyone cares enough about constant factor speed increases that they would abandon a programming language because of it

                  First, it's a constant factor in speed, but not in size. Second, nobody said anybody was abandoning any languages because of it, only that you shouldn't go around showing it off as a "quicksort".

      • by ascari (1400977) on Tuesday November 24, 2009 @08:23PM (#30221082)
        "recursive style" Definition: Curse. And then curse again.
      • by syousef (465911)

        Facebook developed their real-time chat application in Erlang

        Do you normally like to make your opponent's point for them? Facebook chat is horrible and buggy in my experience. Messages go missing. It freezes Firefox. It's just plain awful!

    • I admit that a function with no side effects can ease making things thread safe, but are recursive style functional programming languages really used that much in "SMP and concurrency programming" nowadays?

      If by "recursive style functional programming languages" you mean (more or less) functional programming languages that support tail call optimization so that tail calls, including self-recursive calls, are efficient, then the answer is "yes".

      Besides Haskell, Erlang, for instance, is such a language, and

    • by j1m+5n0w (749199)

      I wrote some code in that category but I never envisioned a functional programming language would suit the job. Am I the only one? ;-))

      Whether it works or not depends on what your task is and how you try to solve it. There are several ways to do concurrency in Haskell, and they're each appropriate in different instances. If you're just calling a lot of pure functions, then "par" and "seq" might be what you need. If you need shared, mutable state, then STM might be a good approach. If you want to do mes
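      For the message-passing flavour, a base-only sketch with forkIO and an MVar (`par`/`seq` come from the parallel package and STM from the stm package, so they aren't shown here):

      ```haskell
      import Control.Concurrent (forkIO)
      import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

      -- A worker thread computes a pure result and posts it back; the
      -- main thread blocks on takeMVar until the message arrives.
      main :: IO ()
      main = do
        box <- newEmptyMVar
        _ <- forkIO (putMVar box (sum [1 .. 1000 :: Int]))
        result <- takeMVar box
        print result   -- 500500
      ```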

    • Re:Is it just me ? (Score:4, Interesting)

      by Hurricane78 (562437) <deleted.slashdot@org> on Tuesday November 24, 2009 @07:40PM (#30220672)

      The idea is that you can split up the program into parallel tasks in a fully automated way. If you as a programmer even have to think about parallelizing, I’m sorry, but then your compiler is “doin’ it wrong” and your language is from the stone age. ^^
      As a bonus, when you can completely rely on a function with the same input producing the same output, you can also throw caching in there algorithmically (where required, on-demand, whatever you wish).
      But bla... that is all just the stuff on the surface. Like explaining the pointlessness of “metaprogramming” when there stops being a difference between data and code.

      I find the most amazing thing about Haskell as it is today is that things that would need the addition of new constructs to the language, and a big change in the compiler, are just a normal library in Haskell. It can look like a whole new language. But it’s just a library. And that is amazing!
      Then, when you get to the GHC extensions, things that are as much at the forefront of computer science as the LHC is at the forefront of physics, with everybody else copying them years later... Sorry, but if you like elegance in programming, ...I just have no words for it...

      The thing is that it’s crazy hard to learn. Which is not a fault in language design. Because it’s very elegant. It’s simply the fact that it is so far ahead of anything in your everyday language. You wouldn’t expect to sit in a spaceship and fly it like your car, would you? Or program the LHC like a VCR.

      Yes, I am a fan. Even if I sometimes hate it for being so damn smart compared to me the normal programmer. But I became so much a better programmer in all other languages, it’s crazy.

      It’s simply a completely different class of skill. And that is why one should learn Haskell. Fuck the “Oh, we’re now only coding in Haskell” attitude. When you really understand the ideas behind it, every language becomes Haskell. And you can write practically bug-free programs of...

      Aaah, what am I saying. <John Cleese>Oh it’s driving me mad... MAD!</John Cleese> *slams cleaver into the table*
      *head developer comes in*
      Head developer: Easy, Mungo, easy... Mungo... *clutches his head in agony* the war wound!... the wound... the wouuund...
      Manager: This is the end! The end! Aaargh!! *stabs himself with a steel lambda sculpture*

      • by microbox (704317)
        The idea is that you can split up the program into parallel tasks in a fully automated way. If you as a programmer even have to think about parallelizing, I’m sorry, but then your compiler is “doin’ it wrong” and your language is from the stone age.

        This is not true in high-performance computing. If you don't want your CPU spinning its wheels on locks and cache misses, then you need to go quite low level -- and really think about the data structures you're using, and how to paralli
        • by swillden (191260)

          The idea is that you can split up the program into parallel tasks in a fully automated way. If you as a programmer even have to think about parallelizing, I’m sorry, but then your compiler is “doin’ it wrong” and your language is from the stone age.

          This is not true in high-performance computing. If you don't want your CPU spinning its wheels on locks and cache misses, then you need to go quite low level -- and really think about the data structures you're using, and how to parallelise tasks with next to no interthread communication.

          The point of functional programming is that you don't have to think about locks or how to manage contention on your data structures, because there is no such thing as mutable data. And if you're just letting the compiler do the parallelization, you don't have to think about interthread communication, either, because the compiler will automatically find the places where function calls can be parallelized with zero interthread communication. Of course, if you structure your code so that it's inher

          • Of course, functional code ultimately gets translated to imperative code, because that's what our processors do. And that translation is being done by a program, which means it will never match the best that a talented human can do, in much the same way that hand-coded assembler, produced by a developer who knows the processor inside and out, will always beat the best-optimizing C compiler.

            If you are thinking of the functional effects of the code then it makes sense to consider it as instructions in an impe

          • by microbox (704317)
            The point of functional programming is that you don't have to think about locks or how to manage contention on your data structures because there is no such thing as mutable data.

            Great, now you get a whole bunch of cache misses, and your CPU is sitting at 10%.
            • by swillden (191260)

              The point of functional programming is that you don't have to think about locks or how to manage contention on your data structures because there is no such thing as mutable data. Great, now you get a whole bunch of cache misses, and your CPU is sitting at 10%.

              Actually, cache utilization by functional programs tends to be excellent. The compiler optimizes most of the duplication of data away in tight loops, so where it matters for performance, data actually is modified in place.

              Really, you should learn something about functional programming before dismissing it.

        • My argument is that you can put these thoughts into general transformational formulas, and put them in the compiler. :)

          I might be wrong, but at least I will try and find out / prove it. :)

  • by AnotherShep (599837) on Tuesday November 24, 2009 @06:06PM (#30219482)
    And all three of its users are overjoyed.
    • Yeah, and you may thank them when attached to a heart-lung machine that was NOT written in C/C++, because B. Curry’s gonna kick yo ass with that attitude! ;P

  • by migla (1099771) on Tuesday November 24, 2009 @06:13PM (#30219556)

    "The current committee is still discussing how to go about finding a new committee"

    A fitting name for this Haskell programming language might have been "Python", had it not already been taken.

  • by nweaver (113078) on Tuesday November 24, 2009 @06:13PM (#30219560) Homepage

    So I don't think I'll look at the article until I actually need to program in Haskell....

    • Re: (Score:2, Redundant)

      by AlgorithMan (937244)
      you could do a lazy evaluation of TFA...
    • But learning Haskell is only in small part about writing Haskell. It’s mainly about getting to the next level in programming skill. And becoming much better in every language.

      Besides: Want to earn some big money for doing actual “mission-critical” software, that lives depend on, and that really means something? There’s no way to do serious business like that without Haskell, and not go crazy. ^^

      So there you go. If you want to continue doing basically a front-end to a database on co

  • by gestalt_n_pepper (991155) on Tuesday November 24, 2009 @06:15PM (#30219586)

    Haskell! Haskell! Every geek's favorite mental masturbation toy!

  • Haskell can't compete with my implementation of the pure lambda calculus!
    screw all that syntactic sugar, lambda expressions ftw!
  • NoNPlusKPatterns... Seems that they're finally banned.

    http://www.mail-archive.com/haskell@haskell.org/msg01261.html [mail-archive.com]

  • The revision to which I look forward is the hack that causes it to make very impolite comments to Mrs. Cleaver.

  • Can it; I think not. It's incredibly cool to write qsort in just 2 lines of code, but it's incredibly slow.

    • by j1m+5n0w (749199)

      Rather than type what I wrote in a similar thread again, I'll just link to it here [slashdot.org].

      Summary: qsort [1..10000] is slow not so much because it isn't an in-place sort, but because naive quicksort on an already-sorted list is O(N^2) rather than O(N log N). If you use an element from the middle of the list as the pivot instead of the first element, it performs much better.

      As the other reply already stated, you can do in-place quicksort in Haskell by using the ST monad, if that's really what you want.
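      The ST monad mechanism in miniature (a running sum rather than a sort, to keep it short): runST seals genuinely mutable state inside a pure function, and an in-place quicksort does its swaps on a mutable STUArray inside the same kind of block.

      ```haskell
      import Control.Monad.ST (runST)
      import Data.STRef (modifySTRef', newSTRef, readSTRef)

      -- Pure from the outside, imperative inside: the STRef is mutated
      -- in place, but no mutation can leak out of runST.
      sumST :: [Int] -> Int
      sumST xs = runST $ do
        acc <- newSTRef 0
        mapM_ (\x -> modifySTRef' acc (+ x)) xs
        readSTRef acc
      ```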
