Erik Meijer: The Curse of the Excluded Middle 237

CowboyRobot (671517) writes "Erik Meijer, known for his contributions to Haskell, C#, Visual Basic, Hack, and LINQ, has an article at the ACM in which he argues that 'Mostly functional' programming does not work. 'The idea of "mostly functional programming" is unfeasible. It is impossible to make imperative programming languages safer by only partially removing implicit side effects. Leaving one kind of effect is often enough to simulate the very effect you just tried to remove. On the other hand, allowing effects to be "forgotten" in a pure language also causes mayhem in its own way. Unfortunately, there is no golden middle, and we are faced with a classic dichotomy: the curse of the excluded middle, which presents the choice of either (a) trying to tame effects using purity annotations, yet fully embracing the fact that your code is still fundamentally effectful; or (b) fully embracing purity by making all effects explicit in the type system and being pragmatic by introducing nonfunctions such as unsafePerformIO. The examples shown here are meant to convince language designers and developers to jump through the mirror and start looking more seriously at fundamentalist functional programming.'"
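Meijer's claim that "leaving one kind of effect is often enough to simulate the very effect you just tried to remove" can be seen in miniature in any imperative language. A minimal Go sketch (the names here are made up for illustration): a function whose signature looks pure, but which smuggles state through a package-level variable, breaking equational reasoning.

```go
package main

import "fmt"

// counter is hidden mutable state: any function that touches it
// has an implicit side effect, even if its signature looks pure.
var counter int

// next looks like a pure function of x, but its result also
// depends on (and mutates) the hidden counter.
func next(x int) int {
	counter++
	return x + counter
}

func main() {
	// Equational reasoning fails: two calls with the same
	// argument return different results.
	a := next(10) // 11
	b := next(10) // 12
	fmt.Println(a, b)
}
```

Nothing in the type of `next` warns the caller; that is exactly the gap that purity annotations alone cannot close.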
  • by smoothnorman ( 1670542 ) on Sunday April 27, 2014 @08:27PM (#46855609)
to interact with an imperfect world one needs monads. to have monads is to compromise functional programming. ipso-facto-quod-splut: i always did rather fancy Fortran. (hsst: don't tell anyone, but Forth is the -only- way to go, (and by 'go' i don't mean "Go" (or "Dart")))
  • by Kjella ( 173770 ) on Sunday April 27, 2014 @09:18PM (#46855851) Homepage

    My impression of "pure" functional programming is that it's roughly like having only static classes and no static members in OOP. Basically if you can have all the information you need "in flight" with you, then it can do all sorts of neat parallelization and optimization tricks because there's no state that makes the ordering important. I guess that's great if you're running some sort of scientific simulation where all the input is set in the model and you expect a result set out at the very end. But I don't find that part hard, the hard part about OOP is when the state throws you a curve ball. You try to write to a database record but it's not there anymore or the user removed the CD from the drive or the database is full or the network connection was lost and now what? It's handling all the contingencies that is difficult. I guess if the problem is the performance of your side effect free code, functional programming may be the answer. But it's not what most developers deal with.
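The parallelization trick the parent mentions really does fall out of purity: if a function's result depends only on its argument, calls can run in any order. A rough Go sketch (`square` and `parallelMap` are illustrative names, not from any library):

```go
package main

import (
	"fmt"
	"sync"
)

// square is pure: its result depends only on its argument,
// so calls can run in any order, or all at once.
func square(x int) int { return x * x }

// parallelMap applies a pure function to each element concurrently.
// This is safe only because each goroutine reads an immutable input
// element and writes its own output slot -- no shared mutable state,
// so ordering cannot matter.
func parallelMap(xs []int, f func(int) int) []int {
	out := make([]int, len(xs))
	var wg sync.WaitGroup
	for i, x := range xs {
		wg.Add(1)
		go func(i, x int) {
			defer wg.Done()
			out[i] = f(x)
		}(i, x)
	}
	wg.Wait()
	return out
}

func main() {
	fmt.Println(parallelMap([]int{1, 2, 3, 4}, square)) // [1 4 9 16]
}
```

Hand `parallelMap` an impure function and the guarantee evaporates, which is the parent's "state throws you a curve ball" problem in a nutshell.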

  • by msobkow ( 48369 ) on Sunday April 27, 2014 @09:44PM (#46855961) Homepage Journal

    I agree completely, and have experienced this problem with Erlang. We got most of a complex system built at my last job over 2 years using Erlang for the servers and data I/O services.

    Then we came to the scheduling algorithm, which had originally been prototyped with Visual Basic. It did the job, and had for many years.

    But have you ever tried to express an n-length array and process it in a functional language?

    In the end we had to cancel the project and blame the fellow who'd made the decision to use Erlang. Maybe if he were still with the company, he'd have been able to code it (he was an Erlang "expert".) But he'd jumped ship two years before it was due, so we'll never know if even a self-proclaimed "expert" could have made it work.

    I couldn't. I'd managed to shoehorn every other piece of functionality into the system, but mapping that simple array-based algorithm to a functional language proved impossible.
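For what it's worth, the usual functional answer to "process an n-length array" is structural recursion with an accumulator instead of an index variable. The grandparent's actual algorithm isn't shown, so this is only the general shape, sketched in Go:

```go
package main

import "fmt"

// sumAcc processes an n-length slice the way an Erlang programmer
// might: recursion over head/tail with an accumulator, instead of
// an index variable and in-place mutation.
func sumAcc(xs []int, acc int) int {
	if len(xs) == 0 {
		return acc
	}
	return sumAcc(xs[1:], acc+xs[0])
}

// mapped builds a new slice rather than mutating the input.
func mapped(xs []int, f func(int) int) []int {
	if len(xs) == 0 {
		return nil
	}
	return append([]int{f(xs[0])}, mapped(xs[1:], f)...)
}

func main() {
	xs := []int{1, 2, 3, 4, 5}
	fmt.Println(sumAcc(xs, 0)) // 15
	fmt.Println(mapped(xs, func(x int) int { return x * 2 })) // [2 4 6 8 10]
}
```

Whether this pattern scales to a real scheduling algorithm with random-access updates is, of course, exactly what the grandparent is disputing.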

  • Bad Summary. (Score:5, Interesting)

    by thestuckmud ( 955767 ) on Sunday April 27, 2014 @09:59PM (#46855999)

The synopsis completely misses the qualification, made in the first sentence, that TFA is discussing "concurrency, parallelism (manycore), and, of course, Big Data". Purely functional programming eliminates some significant issues in this type of programming (while introducing its own set of limitations). Meijer's point is that mostly functional programming is not really better than imperative here.

    For other types of programming, mostly functional style (using multi-paradigm languages) can be very nice. At least that's my position.

  • by rabtech ( 223758 ) on Sunday April 27, 2014 @11:23PM (#46856369) Homepage

    Or, perhaps, to acknowledge that it's very hard to do anything useful without side effects.

    You can write beautiful, elegant, purely functional code, as long as it doesn't have to touch a storage system, a network, or a user. But, hey, other than that, it's great!

    This is a huge misconception about functional programming, one that I used to have myself.

    With a functional programming language, you can have side effects, you are just forced to be explicit about those side effects with specific language features in specific places.

    Basically functional programming requires you to "opt-in" to side effects only where necessary.

    Traditional imperative programming requires you to "opt-out" by taking huge steps to enforce immutability, generating mountains of code to accomplish any task because the compiler doesn't help you.
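The "opt-in" discipline the parent describes can be approximated even in an imperative language by keeping a pure core that merely *describes* effects, and a thin shell that performs them. A hedged Go sketch of that pattern (the `Effect` type and file names are invented for illustration; Haskell's `IO` type enforces this split, Go merely allows it):

```go
package main

import "fmt"

// Effect is a side effect described as plain data. The pure core
// returns Effects; only the outer shell performs them.
type Effect struct {
	Op   string // e.g. "write"
	Path string
	Data string
}

// plan is pure: same inputs, same outputs, and no I/O happens here.
func plan(records []string) []Effect {
	var effects []Effect
	for i, r := range records {
		effects = append(effects, Effect{
			Op:   "write",
			Path: fmt.Sprintf("record-%d.txt", i),
			Data: r,
		})
	}
	return effects
}

// run is the thin imperative shell where effects actually occur
// (printed here as a stand-in for real I/O).
func run(effects []Effect) {
	for _, e := range effects {
		fmt.Printf("%s %s: %s\n", e.Op, e.Path, e.Data)
	}
}

func main() {
	run(plan([]string{"alice", "bob"}))
}
```

The difference is that in Go nothing stops a colleague from doing I/O inside `plan`; in Haskell the type checker does, which is the "compiler doesn't help you" point above.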

  • by Animats ( 122034 ) on Sunday April 27, 2014 @11:42PM (#46856427) Homepage

    It's frustrating. Functional programming is painful when you actually have to do something, not just compute some result. But the real problem is older. We never got concurrency right in imperative languages.

Classic pthread-type concurrency suffers from the problem that the language has no idea what's locked by a lock. This problem is in C, wasn't fixed in C++, and isn't even fixed properly in Go. It was addressed more seriously in Modula and Ada, where the language knew which variables were shared and which were not. The Ada rendezvous approach was too limiting for anything other than hard real-time, but it was on the right track.

    Java addressed this with synchronized objects. This was a step in the right direction. The basic concept of a synchronized object is that, when executing a method of the object, nothing else can affect the state of the object. Java's synchronized objects don't quite get that right - you can call out of an object, then back into it, from within the same thread. This can break the object's invariant, in that the callback function is entered while the object is not in its stable, nobody-inside state. This is a classic cause of trouble in GUI systems, which involve lots of objects calling each other through dynamically changing collections. (If some unusual order of clicks crashes a program, there's a good chance the bug is of this type.)
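The call-out-then-back-in hazard the parent describes is easy to reproduce. A contrived Go sketch (an `Account` type invented for illustration, with the invariant `total == sum(entries)`): the method fires a listener mid-update, and the listener observes the object in its broken intermediate state, just as a GUI callback would.

```go
package main

import (
	"fmt"
	"sync"
)

// Account maintains the invariant: total == sum(entries).
type Account struct {
	mu      sync.Mutex
	entries []int
	total   int
}

// Deposit updates the two fields and, partway through -- while the
// invariant is temporarily broken -- calls out to a listener, the
// way a GUI object might fire an event mid-update.
func (a *Account) Deposit(n int, listener func()) {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.entries = append(a.entries, n)
	// Invariant broken here: entries updated, total not yet.
	if listener != nil {
		a.mu.Unlock() // release so the callback can re-enter
		listener()
		a.mu.Lock()
	}
	a.total += n
}

// Consistent reports whether the invariant currently holds.
func (a *Account) Consistent() bool {
	sum := 0
	for _, e := range a.entries {
		sum += e
	}
	return sum == a.total
}

func main() {
	var a Account
	a.Deposit(10, func() {
		// The callback sees the object mid-update.
		fmt.Println("invariant holds during callback:", a.Consistent()) // false
	})
	fmt.Println("invariant holds afterwards:", a.Consistent()) // true
}
```

No data race is involved, and a race detector stays silent; the bug is purely about re-entering an object that is not in its stable, nobody-inside state.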

The inside/outside issue for state protected by locks is a big one. This also comes up when a thread blocks. Many programs have sections where a thread unlocks a lock, blocks, then relocks the lock. This constitutes control leaving the block, but the compiler doesn't understand this. There's no syntax that says "I am now leaving this object to wait", with language checks to ensure that no internal object state gets passed to the code outside the object. The Spec# group at Microsoft (Spec# is a proof of correctness project using a form of C#) attacked this problem, and came up with a solution of sorts, but it never went mainstream. It's hard to fix this with a language bolt-on.
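The unlock-block-relock pattern is exactly what `sync.Cond` does in Go: `Wait` atomically releases the lock, blocks, and re-acquires it. A small sketch of a blocking queue (the `queue` type is illustrative) showing why the wait must sit in a loop: nothing in the language tracks what other goroutines did to the shared state while control was outside the critical section, so any condition observed before `Wait` is stale afterwards and must be re-checked.

```go
package main

import (
	"fmt"
	"sync"
)

// queue is a minimal blocking queue built on a condition variable.
type queue struct {
	mu    sync.Mutex
	cond  *sync.Cond
	items []int
}

func newQueue() *queue {
	q := &queue{}
	q.cond = sync.NewCond(&q.mu)
	return q
}

func (q *queue) put(x int) {
	q.mu.Lock()
	q.items = append(q.items, x)
	q.mu.Unlock()
	q.cond.Signal()
}

func (q *queue) get() int {
	q.mu.Lock()
	defer q.mu.Unlock()
	// cond.Wait unlocks, blocks, and relocks. Control leaves the
	// critical section inside it, so the emptiness check must be
	// a loop, not an if -- the compiler enforces none of this.
	for len(q.items) == 0 {
		q.cond.Wait()
	}
	x := q.items[0]
	q.items = q.items[1:]
	return x
}

func main() {
	q := newQueue()
	go q.put(42)
	fmt.Println(q.get()) // 42
}
```

The `for` loop is pure convention; writing `if` instead compiles fine and fails intermittently, which is the parent's point about language bolt-ons.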

Objects ought to be either immutable, synchronized, or part of something that's synchronized. Then you're safe from low level race conditions. (You can still deadlock. However, deadlock bugs tend to be detectable and repeatable, unlike race condition bugs. So they get caught and fixed.) If this is built into the language, the compiler can check and optimize. Compilers are good at catching things like a local variable being passed to something that might save a reference to it and mess with it concurrently. Humans suck at that. Machines are good at global analysis of big data.

I had great hopes that the Go crowd would have a solution. They claim to, but there's a lot of hand-waving. They claim "share by communicating, not by sharing memory", but the examples in "Effective Go" all share memory. It's also really easy to share memory between goroutines in Go inadvertently, because slices and maps are reference types. Pass them through a channel and you've shared data and can have race conditions. The problem is bad enough that Google AppEngine limits Go programs to one thread.
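The slice hazard is worth seeing concretely: sending a slice over a channel copies only the slice header (pointer, length, capacity), so sender and receiver still alias the same backing array. A minimal sketch:

```go
package main

import "fmt"

// "Share by communicating" does not help when the value
// communicated is itself a reference type: the channel copies
// the slice header, not the backing array.
func main() {
	ch := make(chan []int, 1)
	data := []int{1, 2, 3}

	ch <- data  // "communicate" the slice
	got := <-ch // receiver now aliases the same backing array
	got[0] = 99 // receiver mutates...

	fmt.Println(data[0]) // 99 -- ...and the sender sees it
}
```

If sender and receiver touched the shared array from different goroutines without further synchronization, this would be a genuine data race, and only the race detector, not the compiler, would catch it.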

    Mixed functional/imperative programming has all these problems, plus the illusion that the problem has been solved. It hasn't.

  • by msobkow ( 48369 ) on Monday April 28, 2014 @12:10AM (#46856525) Homepage Journal

No offense taken. I don't claim to be an Erlang expert; I hadn't even heard of the language before this project. None of the team members had worked with it. The only one who'd worked with it was the guy who architected and prototyped the system. As soon as he was done with the prototype, he didn't renew the contract and buggered off.

    But we had done "too much" to switch to a language we could all agree on. Oh hell, no. We had to keep on using that crap because somebody had Made A Decision and wouldn't backtrack and Lose Money.

    In the end, they lost 4-5 times as much money when we couldn't make it go. And it serves them right -- sticking with a bad decision just because you've got an investment in it is stupid when everyone is telling you it's a bad decision and a bad investment. You need to listen to the TEAM DOING THE WORK, not an "expert" who buggered off before the real work started.
