Erik Meijer: The Curse of the Excluded Middle 237

CowboyRobot (671517) writes "Erik Meijer, known for his contributions to Haskell, C#, Visual Basic, Hack, and LINQ, has an article at the ACM in which he argues that 'Mostly functional' programming does not work. 'The idea of "mostly functional programming" is unfeasible. It is impossible to make imperative programming languages safer by only partially removing implicit side effects. Leaving one kind of effect is often enough to simulate the very effect you just tried to remove. On the other hand, allowing effects to be "forgotten" in a pure language also causes mayhem in its own way. Unfortunately, there is no golden middle, and we are faced with a classic dichotomy: the curse of the excluded middle, which presents the choice of either (a) trying to tame effects using purity annotations, yet fully embracing the fact that your code is still fundamentally effectful; or (b) fully embracing purity by making all effects explicit in the type system and being pragmatic by introducing nonfunctions such as unsafePerformIO. The examples shown here are meant to convince language designers and developers to jump through the mirror and start looking more seriously at fundamentalist functional programming.'"
  • by Anonymous Coward on Sunday April 27, 2014 @09:02PM (#46856017)
    I'll break it down into plain terms for you.

    Side effects are things that change the state of the program. We don't want the program to store state, because that gives it a memory. If it is not memoryless, then it is difficult to reason about. For instance, the function f(x)=2x+1 is memoryless: it does not matter what you observed the last time you evaluated f(x). As such, we can reason very easily about this function, and as long as we always supply the same input, we always get the same output.

    Now think about what happens if we have g(x)=h(x)*x+1, where h(x) returns the previous value it was supplied with (assume it returns 0 on the first call if you like). That complicates things greatly, because we now have to consider everything that has been called before. Evaluate g(10) then g(5) and you get 51; reset the environment and evaluate g(5) then g(5) and you get 26. So g is no longer a function and cannot be reasoned about as if it were one. You can no longer formally prove the code correct; instead you have to hunt down bugs with a debugger like some sort of code monkey. It's intolerable!
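A minimal Haskell sketch of exactly this contrast (the names f, mkH, and g are my own; h's memory is modeled with an IORef):

```haskell
import Data.IORef

-- Memoryless: same input, same output, no matter what came before.
f :: Int -> Int
f x = 2 * x + 1

-- h remembers the previous argument it was called with (0 at first),
-- so it needs mutable state and must live in IO.
mkH :: IO (Int -> IO Int)
mkH = do
  ref <- newIORef 0
  pure $ \x -> do
    prev <- readIORef ref   -- recall the remembered value
    writeIORef ref x        -- remember the new argument
    pure prev

-- g(x) = h(x)*x + 1: its result depends on the call history of h.
g :: (Int -> IO Int) -> Int -> IO Int
g h x = do
  hx <- h x
  pure (hx * x + 1)
```

Running g 10 then g 5 against a fresh h yields 51, while g 5 then g 5 against another fresh h yields 26, matching the numbers above; the IO in g's type is the honest admission that it is not a function of its argument alone.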
  • Re:Gobbledigook (Score:5, Informative)

    by Pseudonym (62607) on Sunday April 27, 2014 @09:40PM (#46856177)

    Real world business software developers just don't talk that way.

    "Real world business software developers" are those whose code ends up on The Daily WTF.

    But seriously, welcome to the future. In the 1960s, "real world business software developers" thought that all this "object" stuff was a bunch of academic gobbledygook at worst, or a niche tool for people doing scientific simulations at best, rather than anything that would be useful alongside their hard-nosed COBOL. And in a sense, they were right. How would it have helped speed up the overnight bank transaction updates? It probably wouldn't have.

    This "academic rambling" probably won't help you write your business software today, but it just might help you avoid becoming obsolete tomorrow. Thankfully, you probably won't need to learn it until tomorrow.

  • by Anonymous Coward on Sunday April 27, 2014 @09:50PM (#46856229)

    Nobody is trying to pretend that there are no side effects, you idiots. The point is that there are a lot of benefits that come when you clearly separate the parts of the program that have side effects from those that are pure and referentially transparent.

    It's trivial to ask a user for input, send packets across the network, or query a database in Haskell and various other purely functional programming languages.
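A small Haskell sketch of that separation (summarize and summarizeLine are names I've invented for illustration): the pure core is an ordinary function, and the IO type marks the only place effects happen:

```haskell
-- Pure core: referentially transparent, trivially testable.
summarize :: [Int] -> String
summarize xs = "sum=" ++ show (sum xs) ++ " max=" ++ show (maximum xs)

-- Effectful shell: reads a line of numbers and prints the summary.
-- The IO in the type is the explicit admission of side effects.
summarizeLine :: IO ()
summarizeLine = do
  line <- getLine
  putStrLn (summarize (map read (words line)))
```

All the interesting logic lives in summarize, which you can test and reason about without ever touching a terminal.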

  • by Pseudonym (62607) on Sunday April 27, 2014 @10:20PM (#46856353)

    My impression of "pure" functional programming is that it's roughly like having only static classes and no static members in OOP.

    If you got that impression from TFA, then I can actually understand how you got it. Meijer's article was written for people already using not-fully-pure functional languages, where "class" means something slightly different than it does in a Simula-style OO language.

    The term "class" comes from von Neumann–Bernays–Gödel set theory. Naive set theory had issues like Russell's Paradox, which relies on notions like the "set of all sets". To remove this paradox, NBG set theory distinguishes between a "set" and a "class" (i.e. a collection of sets defined by a property they have in common). Some classes are sets, and some are not. A set is a collection of values, but a "class" is a collection of sets.

    In programming language theory, a "type" can be thought of as a set of values (e.g. "boolean" might be the set {true, false}). A "class" is a collection of types.

    When you write class Foo in (say) Java, you're actually doing three separate things.

    • You are declaring a new type, called Foo.
    • You are declaring a new collection of types (also confusingly called Foo), which will turn out to be the type Foo and all its subtypes.
    • You are declaring that the type Foo is a member of the collection Foo.

    In the Haskell class system, these three things are separated. This is why Haskell classes look more restrictive than classes that you might find in Java: a Haskell class only contains the parts that make it a class, not the parts that make it a type.
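A sketch of that separation in Haskell (Color and Describable are types invented for illustration): data declares a type, class declares a collection of types, and instance declares membership in that collection:

```haskell
-- A new *type*: a set of values.
data Color = Red | Green | Blue deriving (Eq, Show)

-- A new *class*: a collection of types sharing an interface.
class Describable a where
  describe :: a -> String

-- Membership: the type Color belongs to the class Describable.
instance Describable Color where
  describe Red   = "red"
  describe Green = "green"
  describe Blue  = "blue"

-- Any other type can join the same class independently.
instance Describable Bool where
  describe True  = "yes"
  describe False = "no"
```

The three declarations that Java fuses into one class Foo are three separate keywords here, which is the whole point.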

    Did that help?

  • by jbolden (176878) on Monday April 28, 2014 @08:10AM (#46857985) Homepage

    Pure functional programming means state is isolated, not that it doesn't exist.
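One way to picture that, as a rough Haskell sketch (Counter and tick are my own names): the state exists, but it is threaded explicitly through pure functions rather than hidden in mutable variables:

```haskell
-- The state is an ordinary value...
newtype Counter = Counter Int deriving (Eq, Show)

-- ...and a "stateful" step is a pure function: it takes the old
-- state and returns a result paired with the new state.
tick :: Counter -> (Int, Counter)
tick (Counter n) = (n, Counter (n + 1))

-- Composing steps just threads the state from one to the next.
twoTicks :: Counter -> ((Int, Int), Counter)
twoTicks c0 =
  let (a, c1) = tick c0
      (b, c2) = tick c1
  in ((a, b), c2)
```

This explicit threading is exactly what the State monad packages up for you; either way the state is isolated in the types, not abolished.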

  • by gstoddart (321705) on Monday April 28, 2014 @01:45PM (#46861675) Homepage

    Yes. With increasing CPU speed, it makes sense to sacrifice "performance" for ease of programming.

    I worked with a guy who used to say that.

    He wrote shitty, unmaintainable code which he thought was elegant but which in practice was garbage: full of ridiculous assumptions and giant inefficiencies, all in the name of being able to invoke something in as few lines of code as possible, without consideration for the cost of his framework. Half of his code went through massive setup tasks every time it was invoked, or naively did something assuming it wasn't expensive. Over and over and over again.

    He said you should write first, and then optimize later. By the time we realized his code was so slow as to be unusable, he had painted himself into a corner, and there was no way to optimize his code except to get rid of it.

    Sometimes the things passed off as "ease" of programming are a thinly veiled decision to use methods known to be terrible, in the expectation that they're prettier.

    In my experience, some of these claims don't produce good code. They produce things the coder believes to be pretty, but which in practice are quite a mess.

    I've yet to be convinced you should start writing code you know is inefficient out of belief in your "elegant" solution, which is anything but. I've seen an O(n^2) algorithm called O(n^2) times, all because it was "cleaner" code, and the code assumed everything was a zero-cost operation (or had hidden all the expensive parts, so you don't find them until later).

    Performance is a real thing. I lament that people have now decided it's irrelevant to consider it.