Learning Functional Programming through Multimedia

ijones writes "Andrew Cooke recently reviewed for Slashdot Purely Functional Data Structures, which is on my bookshelf next to The Haskell School of Expression: Learning Functional Programming through Multimedia by Paul Hudak from the Yale Haskell Group. In his review, Cooke presented an overview of some available functional programming languages, such as OCaml, SML, and of course Haskell. Havoc Pennington once called Haskell 'the least-broken programming language available today.' Haskell is a purely functional, lazy, statically typed programming language. You can read more about Haskell itself here." Read on for ijones' review of The Haskell School of Expression.
The Haskell School of Expression: Learning Functional Programming through Multimedia
author: Paul Hudak
pages: 363
publisher: Cambridge University Press
rating: 9
reviewer: Isaac Jones
ISBN: 0521644089
summary: Learn to program in a functional style in Haskell by implementing graphics, music, and robot simulations.

As the title implies, The Haskell School of Expression introduces functional programming through the Haskell programming language and through the use of graphics and music. It serves as an effective introduction to both the language and the concepts behind functional programming. This text was published in 2000, but since Haskell 98 is the current standard, this is still a very relevant book.

Haskell's standardization process gives us a window into two different facets of the community: Haskell is designed to be both a stable, standardized language (called Haskell 98), and a platform for experimentation in cutting-edge programming language research. So though we have a standard from 1998, the implementations (both compilers and interpreters) are continually evolving to implement new, experimental features which may or may not make it into the next standard.

For instance, the Glasgow Haskell Compiler has implemented a meta-programming environment called Template Haskell. Haskell is also easy to extend in directions that don't change the language itself, through the use of Embedded Domain-Specific Languages (EDSLs) such as WASH for web authoring, Parsec for parsing, and Dance (more of Paul Hudak's work) for controlling humanoid robots.
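
To give a flavor of what an EDSL looks like in practice, here is a tiny parser built from Parsec's combinators. This is my own sketch, not an example from the book; it assumes the classic Text.ParserCombinators.Parsec interface, and the names integer and intList are invented for the illustration.

    -- A comma-separated list of integers, parsed with Parsec combinators.
    import Text.ParserCombinators.Parsec

    -- One or more digits, read as an Int.
    integer :: Parser Int
    integer = do ds <- many1 digit
                 return (read ds)

    -- Integers separated by a comma and optional whitespace.
    intList :: Parser [Int]
    intList = integer `sepBy` (char ',' >> spaces)

    main :: IO ()
    main = print (parse intList "example" "1, 2, 3")   -- Right [1,2,3]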

Before we get too far, I should offer a disclaimer: The Haskell community is rather small, and if you scour the net, you may find conversations between myself and Paul Hudak or folks in his research group, since I use some of their software. That said, I don't work directly with Hudak or his research group.

In fact, the small size of the Haskell community is a useful feature. It is very easy to get involved, and folks are always willing to help newbies learn, since we love sharing what we know. You may even find that if you post a question about an exercise in The Haskell School of Expression, you'll get a reply from the author himself.

I consider this book to be written in a "tutorial" style. It walks the reader through the building of applications, but doesn't skimp on the concepts (indeed, the chapters are meant to alternate between "concepts" and "applications"). In some ways, the code examples make it a little difficult to jump around, since you are expected to build upon previous code. The web site provides code, however, so you can always grab that and use it to fill in the missing pieces.

For readers who wish to use this book as a tutorial, and to implement all of the examples (which is highly recommended), I suggest that you grab the Hugs interpreter and read the User's Guide while you're reading the first few chapters of The Haskell School of Expression. Hugs is very portable, free, and easy to use. It also has an interface with Emacs. Unfortunately, some of the example code has suffered from bit-rot, and certain things don't work out-of-the-box for X11-based systems. The bit-rot can be solved by using the "November 2002" version of Hugs. This is all explained on SOE's web page.

The Haskell School of Expression should be very effective for programmers who have experience in more traditional languages, and programmers with a Lisp background can probably move quickly through some of the early material. If you've never learned a functional language, I highly recommend Haskell: Since Haskell is purely functional (unlike Lisp), it will more or less prevent you from "cheating" by reverting to a non-functional style. In fact, if you've never really looked at functional programming languages, it may surprise you to learn that Haskell has no looping constructs or destructive assignment (that is, no x = x + 1). All of the tasks that you would accomplish through the use of loops are accomplished instead through recursion, or through higher-level abstractions upon recursion.
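
To make that concrete, here is a small standalone sketch (mine, not the book's) of summing a list twice: once by explicit recursion, and once with foldr, the higher-level abstraction that captures the same pattern.

    -- Summing a list without loops or destructive assignment.
    sumRec :: [Int] -> Int
    sumRec []     = 0                -- base case: an empty list sums to 0
    sumRec (x:xs) = x + sumRec xs    -- recursive case

    -- The same computation expressed as a fold.
    sumFold :: [Int] -> Int
    sumFold = foldr (+) 0

    main :: IO ()
    main = do print (sumRec  [1 .. 10])   -- 55
              print (sumFold [1 .. 10])   -- 55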

Since I was already comfortable with recursion when I started this book, it is hard for me to gauge how a reader who has never encountered recursion would find this book's explanation of the concept. The Haskell School of Expression introduces recursion early on, in section 1.4. It is used in examples throughout the book, and if you follow along with these examples, you will most certainly be using it a lot. The introduction seems natural enough to me, but I note that Hudak does not give the reader any extra insight or tricks to help them along. Not to worry, though; recursion is very natural in Haskell and the reader may not even notice that they are doing something a little tricky.

The use of multimedia was a lot of fun for me, and should quickly dispel the myth that IO is difficult in Haskell. For instance, Hudak has the reader drawing fractals by page 44, and throughout the book, the reader will be drawing shapes, playing music, and controlling animated robots.
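
To give a sense of those early exercises, here is a rough sketch in the spirit of the book's Sierpinski-triangle program. The graphics calls (runGraphics, openWindow, drawInWindow, withColor, polygon, getKey, closeWindow) follow the SOE library's interface as I remember it, so the details may not match your installed version exactly.

    import Graphics.SOE

    -- Draw one small triangle at (x, y) with the given size.
    fillTri :: Window -> Int -> Int -> Int -> IO ()
    fillTri w x y size =
      drawInWindow w (withColor Blue
        (polygon [(x, y), (x + size, y), (x, y - size)]))

    minSize :: Int
    minSize = 8

    -- Recursively subdivide until the triangles are small enough to draw.
    sierpinski :: Window -> Int -> Int -> Int -> IO ()
    sierpinski w x y size =
      if size <= minSize
        then fillTri w x y size
        else do let size2 = size `div` 2
                sierpinski w x y size2
                sierpinski w x (y - size2) size2
                sierpinski w (x + size2) y size2

    main :: IO ()
    main = runGraphics $ do
      w <- openWindow "Sierpinski Triangle" (400, 400)
      sierpinski w 50 300 256
      _ <- getKey w      -- wait for a key press ...
      closeWindow w      -- ... then close the window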

Any book on Haskell must be appraised for its explanation of monads in general and IO specifically. Monads are a purely functional way to elegantly carry state across several computations (rather than passing state explicitly as a parameter to each function). They are a common stumbling block in learning Haskell, though in my opinion, their difficulty is over-hyped.
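
A small sketch of that idea (mine, not the book's, and using the State monad from the mtl package): the first version threads a counter by hand through every call, the second lets the monad do the threading.

    import Control.Monad.State

    -- Explicit style: the counter is passed in and returned by hand.
    labelExplicit :: [String] -> Int -> ([(Int, String)], Int)
    labelExplicit []     n = ([], n)
    labelExplicit (x:xs) n = let (rest, n') = labelExplicit xs (n + 1)
                             in ((n, x) : rest, n')

    -- Monadic style: the State monad carries the counter for us.
    labelM :: String -> State Int (Int, String)
    labelM x = do n <- get
                  put (n + 1)
                  return (n, x)

    main :: IO ()
    main = do print (fst (labelExplicit ["a", "b", "c"] 0))
              print (evalState (mapM labelM ["a", "b", "c"]) 0)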

Since input and output cause side-effects, they are not purely functional, and don't fit nicely into a function-call and recursion structure. Haskell has therefore evolved a way to deal safely and logically with IO through the use of monads, which encapsulate mutable state. In order to perform IO in Haskell, one must use monads, but not necessarily understand them.
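
In practice, performing IO looks as ordinary as this minimal sketch (my own, not from the book):

    -- Read a line and echo a greeting; no monad theory required.
    main :: IO ()
    main = do putStrLn "What's your name?"
              name <- getLine
              putStrLn ("Hello, " ++ name ++ "!")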

Some people find monads confusing; I've even heard a joke that you need a Ph.D. in computer science in order to perform IO in Haskell. This is clearly not true, and this book takes an approach which I wholeheartedly agree with. It gets the reader using monads and IO in chapter 3 without explaining them deeply until chapters 16 (IO) and 18 (monads). By the time you get there, if you have heard that monads are confusing, you might be inclined to say "how is this different from what we've been doing all along?" Overall, I was pleased with the explanation of monads, especially state monads in chapter 18, but I felt that the reader is not given enough exercises where they implement their own monads.
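
For readers who want that missing exercise, here is the flavor of what implementing your own monad involves. This is my own sketch of a tiny Writer-style logging monad, not an exercise from the book; note that modern GHC also requires the Functor and Applicative instances, which postdate the book.

    -- A value of type 'Logged a' is a result paired with log messages.
    newtype Logged a = Logged (a, [String])

    instance Functor Logged where
      fmap f (Logged (x, msgs)) = Logged (f x, msgs)

    instance Applicative Logged where
      pure x = Logged (x, [])
      Logged (f, m1) <*> Logged (x, m2) = Logged (f x, m1 ++ m2)

    instance Monad Logged where
      return = pure
      Logged (x, m1) >>= f = let Logged (y, m2) = f x
                             in Logged (y, m1 ++ m2)

    -- Record a message without producing an interesting result.
    logMsg :: String -> Logged ()
    logMsg s = Logged ((), [s])

    halve :: Int -> Logged Int
    halve n = do logMsg ("halving " ++ show n)
                 return (n `div` 2)

    main :: IO ()
    main = let Logged (result, msgs) = halve 10 >>= halve
           in do mapM_ putStrLn msgs
                 print result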

If you're worried that drawing shapes and playing music will not appeal to your mathematical side, you will be pleased by the focus on algebraic reasoning for shapes (section 8.3) and music (section 21.2), and a chapter on proof by induction (chapter 11).
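
To give the flavor of that reasoning, here is a standard equational proof of the sort the book teaches (this particular property is a textbook staple, not an example taken from SOE): length distributes over (++).

    -- Definitions under consideration:
    --   length []     = 0
    --   length (x:xs) = 1 + length xs
    --   []     ++ ys  = ys
    --   (x:xs) ++ ys  = x : (xs ++ ys)
    --
    -- Claim: length (xs ++ ys) = length xs + length ys
    --
    -- Base case (xs = []):
    --   length ([] ++ ys) = length ys = 0 + length ys = length [] + length ys
    --
    -- Inductive case (xs = x:xs'), assuming the claim holds for xs':
    --   length ((x:xs') ++ ys) = length (x : (xs' ++ ys))
    --                          = 1 + length (xs' ++ ys)
    --                          = 1 + (length xs' + length ys)   -- induction hypothesis
    --                          = (1 + length xs') + length ys
    --                          = length (x:xs') + length ys

    -- A quick empirical check of the same property:
    prop_lengthAppend :: [Int] -> [Int] -> Bool
    prop_lengthAppend xs ys = length (xs ++ ys) == length xs + length ys

    main :: IO ()
    main = print (prop_lengthAppend [1, 2, 3] [4, 5])   -- True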

After reading this book you will be prepared to take either of the two paths that Haskell is designed for: You can start writing useful and elegant tools, or you can dig into the fascinating programming language research going on. You will be prepared to approach arrows, a newer addition to Haskell which, like monads, have a deep relationship to category theory. Arrows are used extensively in some of the Yale Haskell group's recent work. You will see a lot of shared concepts between the animation in The Haskell School of Expression and Yale's "Functional Reactive Programming" framework, Yampa. If you like little languages, you'll appreciate how useful Haskell is for embedded domain-specific languages. It may be even more useful now that Template Haskell is in the works. Andrew Cooke described Purely Functional Data Structures as a great second book on functional programming. In my opinion, The Haskell School of Expression is the great first book you're looking for.


You can purchase Learning Functional Programming through Multimedia from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

Comments:
  • by stuffduff ( 681819 ) on Tuesday March 16, 2004 @02:23PM (#8580432) Journal
    When I first started learning new languages I used to rewrite pong in them. It was very easy to 'see' if the code did what it should or didn't. That kind of feedback can really speed up the learning curve. I'm glad to see that the method hasn't been entirely lost.

    Today, if you don't have enough flashy multimedia to attract the user to stay and look at what you have to say, you never even get your foot in the door. Chances are that someone who has taken the time to learn to both use the technology and apply it in a meaningful way probably has something to say.

    With a generation of multimedia-oriented programmers available, I expect to see a much higher degree of interactivity in many different areas, from things like mouse gestures to multi-dimensional navigation metaphors where we can simultaneously demonstrate our interests and our abilities so that we can arrive at the appropriate 'step' in whatever process we are trying to achieve.

  • Audience (Score:5, Insightful)

    by bobthemuse ( 574400 ) on Tuesday March 16, 2004 @02:25PM (#8580453)
    Since I was already comfortable with recursion when I started this book, it is hard for me to gauge how a reader who has never encountered recursion would find this book's explanation of the concept.

    Who is the target audience for this book? I would assume programmers, of at least moderate experience. It's not like there are thousands of script/VB kiddies jumping over themselves to learn functional languages. Makes me wonder, how many semi-experienced programmers are there out there who aren't comfortable with using/understanding recursive functions?
  • by ameoba ( 173803 ) on Tuesday March 16, 2004 @02:32PM (#8580521)
    I think these guys [oreilly.com] have you covered.
  • Re:New vs. Old (Score:5, Insightful)

    by Abcd1234 ( 188840 ) on Tuesday March 16, 2004 @02:47PM (#8580656) Homepage
    Meh. A decent computing science course will have an extensive class in data structures, usually implemented in C, in which you'll likely cover at least linked lists and trees, followed by a second, more extensive course on general algorithms, in which you'll cover heaps and other more advanced data structures. If it doesn't, it ain't worth enrolling in. *shrug*

    Of course, after you've learned *how* a linked list is implemented, you should never have to roll your own. And if you do find yourself rolling your own, you should seriously question *why* before continuing, as there are many high quality, well tested implementations already floating around (for example, glib).
  • Re:New vs. Old (Score:3, Insightful)

    by ThosLives ( 686517 ) on Tuesday March 16, 2004 @02:56PM (#8580773) Journal
    Ah, I would say that "writing a program" by putting together other people's building blocks is not programming but code assembly. I would actually say that most "programmers" out there really don't know how to program, and that's why we have lots of the issues present today. It does require clarifying how literal you want to be with the verb "to program," because people who stack up components are programming in the strictest sense. However, I fear that few and far between are the folks who can code a full GUI in 32K of RAM (GEOS for the Commodore 64).

    I would wager that most of the "true" programmers are those in embedded systems and the like who actually have to generate the building blocks and frameworks of any language. The rest are something else - not to knock their contributions to the application world - but I think that some distinction ought to be made.

  • Re:Here We Again (Score:5, Insightful)

    by Ed Avis ( 5917 ) <ed@membled.com> on Tuesday March 16, 2004 @02:58PM (#8580789) Homepage
    Thus, there's a MASSIVE performance loss when a functional programming language is executed on any of the existing processors.
    There's also a massive performance loss with many imperative languages when compared to C. Impure functional languages like OCaml or Scheme can give programs that run about half as fast as C when using a decent compiler. This compares favourably to Java or Perl (among others).
    The day someone actually invents a function processor, we could start promoting these fringe langauges.
    There have been Lisp Machines - okay, Lisp isn't a purely functional language but it is high-level and some of the same arguments apply. However, your point of view is a bit odd. There should be a processor with features mapping directly to language constructs before you can start using that language? People have tried this in the 1980s and earlier, with processors optimized for running Modula-2 or similar, but on the whole it turned out to be better to make general-purpose CPUs and write compilers for them. It's not as if current CPUs are a particularly good match for C's abstractions; a language with built-in lightweight parallelism could work well with hyperthreading, for example. In any case, even if the language and CPU are horribly mismatched and everything runs a hundred times slower, that could easily be fast enough for the application. CPUs are getting cheaper but programmers, on the whole, are not.

    Do you also discourage the use of Perl, or shell scripts, or Tcl, or Java? Or is it just functional languages that you don't like because they do not map to existing processors?

  • Re:New vs. Old (Score:4, Insightful)

    by pHDNgell ( 410691 ) on Tuesday March 16, 2004 @03:01PM (#8580820)
    Ah, I would say that "writing a program" by putting together other people's building blocks is not programming but code assembly. I would actually say that most "programmers" out there really don't know how to program, and that's why we have lots of the issues present today.

    Um, no. Writing a program is always assembling building blocks unless you always start by writing an assembler for your target hardware.

    The good programmers are the ones who assemble the correct building blocks the right way. The people who reinvent the linked list for every project are the ones who cause us the most problems (and yes, I've reinvented many linked lists in the past).

    Once you break free from the mentality that you must always make your own malloc(), printf(), hashtables, trees, linked lists, etc... you can move on to higher level issues like the actual application you're working on.
  • Re:Here We Again (Score:5, Insightful)

    by talleyrand ( 318969 ) on Tuesday March 16, 2004 @03:05PM (#8580859) Homepage
    I haven't got the exact figures, but I reckon 99% of all code written out there must be written in Imperative (sometimes pseudo OO) languages. There must be SOME reason functional languages are not so popular.
    Hmmmmm
    SELECT
    *
    FROM
    PROGRAMMING_TYPES PT
    WHERE
    PT.name NOT IN ('functional', 'imperative')

    >>> DECLARATIVE
    (1 row(s) affected)
    Yeah, I guess there's only two types of programming languages out there and clearly the previous code is only used in research/academic environments.
  • Re:Here We Again (Score:2, Insightful)

    by Anonymous Coward on Tuesday March 16, 2004 @03:09PM (#8580905)
    Sorry, but here you are completely mistaken (and I am a bit amazed at how this post got ranked so high). By using a reasonably advanced compiler for a functional language, and knowing how to write good functional code (a highly nontrivial issue), one can typically get 50%-150% of the performance of GNU GCC. Just look at the figures in the "Great Computer Language Shootout".

    Yes, it is easy to write slow code in functional languages. Especially for beginners who do not know or care about the fine print. But those are usually not the people one should ask for judgement, right?

    When I saw LISP for the first time, I also could not imagine how something like that could ever be compiled to efficient machine code. By now, I do know, and I also know how to use the benefits of functional programming to implement freaky performance tricks virtually no imperative programmer would ever think about.
  • Re:Languages (Score:5, Insightful)

    by pHDNgell ( 410691 ) on Tuesday March 16, 2004 @03:31PM (#8581104)
    Do people think it's a good thing for a C++/Java/.NET programmer to go back to the drawing board for a few months and learn stuff like functional programming?

    Absolutely. It's a tragedy to go through life as a programmer without knowing FP. The more you learn about programming in general, the better you will be at programming.

    I thought about coming up with a syllabus for myself of C, Haskell, LISP and Perl

    I always recommend against perl. Very few people understand perl (no matter what they tell you), and I've yet to see a significant perl program without several significant bugs because of lack of proper exception handling or ambiguity (stuff like if($var) where $var can be false on what would otherwise be considered valid input).

    I'd definitely recommend python instead if you want to learn some scripting (maybe ruby if you like stuff that looks like perl, but has a more reasonable philosophy).

    C, like any other assembler, is OK to learn, but shouldn't be used for much of anything except to write extensions for high-level languages.

    Haskell is good. OCaml is good. Scheme is a good lisp derivative that's small enough to learn pretty easily.

    You might want to add smalltalk and/or objective C in there. Smalltalk is pure OO (the OO version of Haskell, if you will). Objective C is C with smalltalk-style OO. When combined with the NeXTSTEP frameworks, you can learn a lot of very useful patterns.

    A big part of functional programming is programming without side effects. Learning to program without side effects can greatly help you create more stable applications.
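
    A small illustration of that point (a sketch, not the commenter's code): instead of mutating an account balance in place, each update returns a new value, so earlier states stay valid and easy to test.

        -- Pure "update": deposit returns a new Account, mutating nothing.
        data Account = Account { owner :: String, balance :: Int }
          deriving Show

        deposit :: Int -> Account -> Account
        deposit amount acct = acct { balance = balance acct + amount }

        main :: IO ()
        main = do let before = Account "alice" 100
                      after  = deposit 50 before
                  print before   -- still Account {owner = "alice", balance = 100}
                  print after    -- Account {owner = "alice", balance = 150}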

  • Re:Here We Again (Score:3, Insightful)

    by dubious9 ( 580994 ) on Tuesday March 16, 2004 @03:54PM (#8581354) Journal
    "Thus, there's a MASSIVE performance loss when a functional programming language is executed on any of the existing processors. Because the compilers can't think and optimise the code to best fit the imperative model. Where as the human being s can. That's why we should stick to imperative programming languages."

    If this discussion were 25 years ago, it would have been "OO languages? They are only good in theory. Sure, you can easily write programs in them, but they abstract over how the program is executed. Besides, OO is slow!!"

    But through the use of OO we have seen that programming at a higher level is better. Compilers get better all the time. Moore's Law has held. There are few places in a program where you really need speed. There are notable exceptions (graphics, higher math, real time), but there is no reason to assume compiler theory won't advance to the level that we can write games and OSes in Java.

    The speed argument becomes much less of a factor when you can be more expressive with a language.

    "Because the compilers can't think and optimise the code to best fit the imperative model. Where as the human being s can."

    Err... if human being s (sic) can, then they can abstract and automate. Computers don't "think" at all; they are only good at what we've figured out how to automate. If we can do it, there's no reason why computers can't either. In short, if a programming paradigm works, then it makes sense to invest in it. Optimization will come later.
  • Re:Here We Again (Score:4, Insightful)

    by jovlinger ( 55075 ) on Tuesday March 16, 2004 @03:55PM (#8581367) Homepage
    Optimizing Haskell by choice unboxings and strictness annotations is against the whole point of the language (*). More importantly, it is close to impossible for anyone but the compiler-writer to get right.

    Predicting how a lazy program will perform is hard, and figuring out where it hurts is even harder. This is in part due to the massive restructurings the compiler does. One small annotation may be sufficient for the compiler to infer a function's strictness. Knowing where to put the annotation, though, is nigh guesswork. Then I refactor the function, and there goes my strictness again. (See the sketch following this comment.)

    But, this _is_ preferable to writing in C, I'll agree with you there.

    (*) However, I think the worker-wrapper transformation may be the most beautiful optimization I've ever seen.
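
    A sketch of the kind of strictness tweak being discussed (an illustration, not the commenter's code): a lazy left fold can build a long chain of unevaluated thunks, while forcing the accumulator with seq, or simply using Data.List.foldl', keeps it evaluated at every step.

        import Data.List (foldl')

        -- Lazy left fold: fine on small inputs, but the accumulator is a thunk.
        sumLazy :: [Int] -> Int
        sumLazy = foldl (+) 0

        -- Hand-rolled strict version: seq forces the accumulator each step.
        sumStrict :: [Int] -> Int
        sumStrict = go 0
          where go acc []     = acc
                go acc (x:xs) = let acc' = acc + x
                                in acc' `seq` go acc' xs

        main :: IO ()
        main = do print (sumLazy [1 .. 1000])                   -- small input, no trouble
                  print (sumStrict [1 .. 1000000])
                  print (foldl' (+) 0 [1 .. 1000000 :: Int])    -- library strict fold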
  • Re:Here We Again (Score:3, Insightful)

    by JohnsonJohnson ( 524590 ) on Tuesday March 16, 2004 @04:28PM (#8581721)

    I haven't got the exact figures, but I reckon 99% of all code written out there must be written in Imperative (sometimes pseudo OO) languages. There must be SOME reason functional languages are not so popular.

    This was also true of procedural languages versus assembly prior to the mid '70s, true of COBOL versus other languages prior to the early '80s, true of C versus OO languages prior to the mid '90s...

    It's not just for mutual funds that the past is a poor predictor of the future.

    Functional language are only good in theory. Sure, you can easily write programs in them, but they abstract over how the program is executed. And the programs are going to be executed in an imperative manner; machine code is imperative, remember?

    Although the register machine at the heart of most people's computers operates explicitly on state, that does not mean it maps naturally to an imperative style of programming. For starters, the CPU is generally tightly constrained with respect to data types and the number of state values it can keep track of, versus typical imperative (especially imperative OO) code, which assumes infinite memory resources and types of arbitrary complexity. Secondly, for FPGA-type computing, functional-style programming is often a better fit.

    Thus, there's a MASSIVE performance loss when a functional programming language is executed on any of the existing processors. Because the compilers can't think and optimise the code to best fit the imperative model. Where as the human being s can. That's why we should stick to imperative programming languages.

    As is oft pointed out, there's a performance penalty to any newer/less widely deployed technology when adapted to older hardware. Over time that disappears (reiterate the past-as-a-poor-predictor comment here). Would you hire a programmer who'd spend 99% of their time writing and optimising machine code, or a programmer who intelligently evaluates the available computing/development platforms (probably ending up with C++ or Java) and writes a relatively modern OO-style program to run your new mission-critical $7 billion production center? Note that the more details the programmer is responsible for keeping track of (down to remembering the binary representations of the 2-300 instructions a modern processor can execute), the more likely they are to introduce bugs/errors.

    The day someone actually invents a function processor, we could start promoting these fringe langauges. Till then, let's keep Haskell as part of CS811 Thank you for listening. That's the end of my rant.
  • Re:Here We Again (Score:5, Insightful)

    by dasmegabyte ( 267018 ) <das@OHNOWHATSTHISdasmegabyte.org> on Tuesday March 16, 2004 @04:43PM (#8581890) Homepage Journal
    Another thing to remember is that while processor speeds double fairly often, programmer speeds are a constant given a distinct set of tools. If picking a language that executes slower allows a programmer to write more software in a given period of time, then that language is a superior choice for all but the most time-sensitive applications.

    In other words, I don't want to waste time fucking around with pointers when I could be working on something more pertinent to the task at hand. I don't care HOW my matrix gets sorted. I just care that it does. If I waste some cycles, so what? This computer performs 53 MILLION in between each monitor refresh. If writing in a more abstract language permits me to get twice as much done per day of programming (and in my experience, it's more like 5-10 times), I'm willing to wait.

    Besides, while computers may not be able to "think," code optimization is not reasoning as much as it is pattern-based. In fact, modern refactoring tools are better optimizers than most programmers, because they know more of these patterns. And unlike human programmers, refactoring tools aren't tempted to gloss over modern-day essentials like bounds or type checking, resulting in fewer bugs and better security.
  • by scrytch ( 9198 ) <chuck@myrealbox.com> on Tuesday March 16, 2004 @09:42PM (#8584891)
    > Write the same function in Haskell, in linear time

    Using unboxed types I could probably transliterate that code, but I'm kind of rusty in Haskell. I just find it more interesting that I could probably crash your process by passing -1 for len.
  • by alienmole ( 15522 ) on Wednesday March 17, 2004 @10:17AM (#8588040)
    I don't think the OP's solution was the most transparent, but that's not Haskell's fault.

    It's simply wrong to assume that, because you see code you don't understand when looking at an advanced language you don't know, those languages "are simply mind-boggling, and no - that's not a good thing".

    What those languages are is different from what you're used to. There's a reason for that - what you're used to is largely an ad-hoc collection of crap that evolved before people understood what computers were really about. Now, we have good models for things like abstraction and control flow and structure in languages, but unfortunately, languages like C and Java and C# don't even begin to take advantage of these models.

    Really, you shouldn't focus on learning FP languages. Focus instead on learning basic computing principles, as you'll find in a book like SICP. Once you understand basic computing principles, you'll understand why and how functional languages make sense, and why ultimately, the crappy languages we have today will evolve in that direction. You already see it happening, in fact, as various languages slowly absorb academic features that have been around for decades - it just happens very slowly. You can get a jump on the next 30 years of language evolution by learning something about the underlying principles, right now.

    Haskell is just one possible expression of functional principles, and it's one that emphasizes purity and syntactic conciseness. That doesn't always result in the most friendly code. Neither of these constraints are fundamental to functional languages in general. You might find ML, or Scheme, or Erlang, or OCaml more to your taste. But if you ignore FP because you think it's mind-boggling, you're simply admitting that you're not serious about programming.
