Threads Considered Harmful

LBR9 writes "James Reinders compares native threads with the goto statement so famously denounced 40 years ago by Edsger Dijkstra. Paraphrasing Dijkstra, he says they both 'make a mess of a program,' and then argues in favor of a higher level of abstraction. A couple of people commenting on the post question whether we should even be treading into the 'swamp of parallelism,' echoing the view recently espoused by Donald Knuth."
This discussion has been archived. No new comments can be posted.

  • No threads? (Score:5, Funny)

    by Anonymous Coward on Friday May 02, 2008 @09:06AM (#23274054)
    Alright, then all responses to this article need to fall under this one post.
  • by krog ( 25663 ) on Friday May 02, 2008 @09:15AM (#23274156) Homepage
    Because really, multithreading doesn't have to be hard [erlang.org]
    • Re: (Score:3, Informative)

      by LizardKing ( 5245 )

The article's author explicitly mentions Erlang as a potential solution to threading issues in other languages. In fact he's mainly concerned about POSIX pthreads, Boost threads, Java threads (and presumably Windows low level thread libraries). As I point out in another post below, I disagree with him lumping Java threads in with those used in most C/C++ libraries, as threading support is integrated into the language along with increasingly sophisticated locking support in the library which can be used if t

      • by Poltras ( 680608 )
What's more, immutable types are intrinsically exception-safe, thread-safe and (normally) garbage collected (in C++ even more so, when a String object is declared on the stack and destroyed by its destructor). What's more, if using reference counting and copy-on-write, you have no slowdown (in most cases it's faster). Dunno why people haven't learned that yet...

        I've been working with Cocoa/Objective-C for a while, and I'm starting to develop some of its habits in C++ (immutable strings, smart pointers, Copy-on-Write obj

      • Re: (Score:3, Interesting)

        The problem with Java concurrency and threading is, all the locks are advisory. The synchronize statement is a nice bit of syntax, and making it apply to whole blocks of code was the Right Thing to do.

        The problem simply comes in that a program is not obligated to *use* synchronize, or any locking, when it accesses objects. Which means the code is totally unsuitable for integrating into a multithreaded program. And trying to backport thread-safety in is (currently) too difficult, as there are no tools to te
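To illustrate the "advisory" point above, here is a minimal Java sketch (the class and method names are invented for illustration): the language happily compiles a second code path that mutates the same field without ever taking the lock.

```java
// Sketch: Java locks are advisory. Names here are illustrative only.
public class AdvisoryLockDemo {
    private int balance = 0;

    // A "well-behaved" writer takes the monitor before mutating state.
    public synchronized void depositSafely(int amount) {
        balance += amount;
    }

    // Nothing obligates other code to use synchronize; the compiler
    // accepts this, and the race only shows up at runtime.
    public void depositUnsafely(int amount) {
        balance += amount; // unsynchronized access to the same field
    }

    public synchronized int getBalance() {
        return balance;
    }

    public static void main(String[] args) throws InterruptedException {
        AdvisoryLockDemo account = new AdvisoryLockDemo();
        Thread safe = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) account.depositSafely(1);
        });
        Thread unsafe = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) account.depositUnsafely(1);
        });
        safe.start(); unsafe.start();
        safe.join(); unsafe.join();
        // May print less than 200000 because increments were lost.
        System.out.println(account.getBalance());
    }
}
```

The unsynchronized path is exactly the kind of code that is "totally unsuitable for integrating into a multithreaded program", and no tool flags it.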
        • Re: (Score:3, Interesting)

          by Jellybob ( 597204 )

          I haven't studied Erlang yet, but threads (or more generally concurrency) done securely would require mandatory locking of all data.

I may have misunderstood (I'm not exactly an expert in threading), but I believe that Erlang handles this in a scarily elegant manner... once assigned, a variable cannot be changed.

          The = operator in Erlang should be looked at in the mathematical sense, so the following (pseudo) code would fail:

          a = 2
          a = 1 + 3

          Because 1+3 != 2

          (Disclaimer: I've briefly dabbled in Erlang, but anyth

        • Re: (Score:2, Interesting)

          In Erlang, variables aren't variable -- they're single assignment. (There is a process dictionary that is mutable, but it isn't usually used and other threads don't have access to it). Inter-thread communication is done via message passing (which may be local or over tcp/ip).
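The message-passing style described here can be approximated even in a locks-everywhere language. A hedged Java sketch (class and method names are my own, not from any library under discussion): threads share no mutable state and only exchange messages over a queue.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: Erlang-style inter-thread communication approximated in Java.
// The only point of contact between the threads is the mailbox queue.
public class MailboxDemo {
    static String exchange() throws InterruptedException {
        BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(16);

        Thread producer = new Thread(() -> {
            try {
                mailbox.put("hello"); // blocks if the mailbox is full
                mailbox.put("world");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        // take() blocks until a message arrives -- no locks in user code.
        String result = mailbox.take() + " " + mailbox.take();
        producer.join();
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(exchange()); // prints "hello world"
    }
}
```

Unlike Erlang this doesn't enforce immutability of the messages, which is why the discipline matters more in Java.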
      • The main problem I have with Java threads, vs Erlang, is that Java threads are still using locks. They're locks with nice syntactic sugar on them, but locks nonetheless.
    • Except ... (Score:2, Insightful)

Functional programming is hard, non-intuitive and even plain distasteful to me. Now I know I'm an idiot, but the problem is most programmers are idiots. The language has to make parallelism easy for us, and if it starts out being functional it's already lost that battle.
      • Re: (Score:3, Interesting)

        by Fry-kun ( 619632 )
        Who was it that said that "Computer Science" was the worst thing to happen to both computers and science?

        Right now, everyone thinks in terms of Turing Machines - we tell the computer what to do. In functional programming, you tell the computer what result you want to achieve (in terms of formulas and such) - and it does it for you.

        It's hard to grasp for someone who's used to the Turing way, but it's not for someone who hasn't dealt with it. Programmer should be able to give hints to the CPU (for optimizatio
There are circumstances where threads are completely inappropriate. Let's say that you were hoping to build an app that would eventually scale across a single-image cluster farm (for those not in-the-know, this isn't a Beowulf cluster, but rather a cluster that you would add a new "node" to, which would then be treated as part of the collective resources of the single "machine". See SSI). Unlike on your single machines, a thread cannot practically be migrated to a new "processor" on a different node, becau
  • by rsmith-mac ( 639075 ) on Friday May 02, 2008 @09:18AM (#23274196)
    I'm all for getting rid of threads, but what are you going to replace them with? Traditional functional languages may be the most obvious solution, but they're also among the most impractical of solutions. Is there anything else out there that can replace threading needs, without throwing out the book on programming? It seems like what we need hasn't been invented yet.
    • by abigor ( 540274 )
      http://en.wikipedia.org/wiki/Tuplespace [wikipedia.org]

Threads only seemed to get really popular with Windows. Unix programming has typically been multi-process with some form of shared memory. I've heard (and this is unconfirmed hearsay) that CreateProcess() on Windows is so heavyweight that they pushed for a comprehensive threading model instead.
    • processes (Score:4, Interesting)

      by mkcmkc ( 197982 ) on Friday May 02, 2008 @09:38AM (#23274458)
      Well, for starters, there's processes, which were invented in the 1960s. These may not handle every case, but in my experience they'd cover 95+%...
    • by arth1 ( 260657 )
      What I don't get is TFA's jump from "threads are bad" to "so they should be replaced with higher level of abstraction". That jump is quite illogical.
    • The Actor model, where each object is a separate thread, is the way to the future. When an actor sends a message to another actor, the message is stored in the target actor's message queue and the thread that represents the target actor is woken up to process the message. Results are delivered with future values.

      With the Actor model, whatever data parallelization is there in a program is automatically exposed.
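A minimal, hypothetical sketch of that scheme in Java (real actor runtimes add supervision, scheduling, and distribution; the names below are invented): each actor owns a mailbox and a worker thread, and callers receive results as futures.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the Actor model: one mailbox, one thread, futures for replies.
public class SquaringActor {
    private record Message(int value, CompletableFuture<Integer> reply) {}

    private final BlockingQueue<Message> mailbox = new LinkedBlockingQueue<>();

    public SquaringActor() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    // The actor's thread is woken when a message arrives.
                    Message m = mailbox.take();
                    m.reply().complete(m.value() * m.value());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Sending is asynchronous; the returned future is the "future value".
    public CompletableFuture<Integer> square(int n) {
        CompletableFuture<Integer> reply = new CompletableFuture<>();
        mailbox.add(new Message(n, reply));
        return reply;
    }

    public static void main(String[] args) throws Exception {
        SquaringActor actor = new SquaringActor();
        System.out.println(actor.square(7).get()); // prints 49
    }
}
```

Because only the actor's own thread ever touches its state, no explicit locks appear in user code.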
    • Re: (Score:3, Interesting)

      by Cheesey ( 70139 )
It's very unfashionable, but the Ada language has threads (tasks) as a first-class part of the language, so you don't need a specific class or library in order to use them. The things you need to use threads safely, like protected objects, are also part of the language. That means it is very easy to write correct multithreaded code in Ada. You won't be missing a mutex unlock command, and you won't be accessing something owned by another thread: the compiler won't let you.

      Threads are one thing that Ada does
  • by halivar ( 535827 ) <bfelger@noSpam.gmail.com> on Friday May 02, 2008 @09:21AM (#23274212)
Gotos and global variables are not inherently wrong or evil. They are tools. Granted, they are tools that, if misused, will wreak havoc on your code's stability and maintainability. The same could be said, however, for pointers. Threads are dangerous, and require special care. This is not a reason to avoid them; it is only a reason to be incredibly careful with them.

    Use the best tool for the job, regardless of whether your CS professors demonized it or not.
Exactly. I'm sure we've all seen well-written multi-threaded code (although admittedly not frequently) where each thread accomplished a set task and isolated the complexity of the task. In general, I think code that abuses pointers or inheritance is more difficult to maintain.
    • I completely agree. If you want to take the guard off your skill-saw to save time doing a job, that's your business. You might finish faster, you might cut your foot off, but it's your call.

      I certainly abuse global variables in my code, but I write my code for me to solve my problems. The loss of encapsulation that results isn't so extreme when there's one author, and the gain in flexibility is pretty steep.

      However, I do think that avoiding nastiness is important, especially as the size of the group codi
Yes, but there appears to be a trend that is moving away from lower level coding, where there is a need to be careful and you do need to take your time to find and eliminate bugs; elegant and efficient programming does not seem to be as great a concern as I think it should be. If computers have more RAM then let garbage collection take care of your memory management woes; if CPUs are getting faster then maybe we should use inherently slower, but more programmer friendly, techniques to deal with threading.
      • Re: (Score:3, Interesting)

        If you don't think that we should be using hardware advances to make things easier on programmers, then why do you use compilers? Compared to hand-tuned machine code specific to each target processor, the stuff that comes out of a C compiler is slower by a factor of 2 to 10 at least. Other languages even more so.

        The fact of the matter is that high level tools are almost always the right choice, and the standard rules of optimization apply:

        Rules of Optimization:
        1. Don't do it.
        2. (experts only) Don't do i

    • by Anonymous Coward on Friday May 02, 2008 @10:46AM (#23275508)
I know nobody born in the last 30 years has bothered to read his memo, but he doesn't claim gotos are "evil". Just that people should adopt structured control flow instead. Meaning, design and use languages with such advanced features as "if/else" statements, "while" loops, and "functions". "Goto considered harmful" was written in a time when most people were not using the fancy new languages that offered these features, and he was suggesting that they do so, in order to improve the quality of their code.

      Unless you seriously think people should use gotos instead of loops and if/else statements, then you don't disagree with Dijkstra.
      • I know nobody born in the last 30 years has bothered to read his memo
        Pesky kids! Goto somewhere that isn't my lawn!
    • "Gotos aren't damnable to begin with. If you aren't smart enough to distinguish what's bad about some gotos from all gotos, goto hell."
              -- Erik Naggum [wikiquote.org]
       
    • My point is that a thread is a thread. If using multiple concurrent threads is harmful, so is using a single thread. Single threading is less harmful than multithreading but harmful nonetheless. The thread is the reason for every ill that ails computing, from the reliability crisis to the parallel programming crisis. There is a way to design and program computers that does not involve threads at all. It's called the non-algorithmic software model. This is the way we should have been doing it in the first pl
  • Not really news (Score:5, Insightful)

    by AKAImBatman ( 238306 ) <<akaimbatman> <at> <gmail.com>> on Friday May 02, 2008 @09:21AM (#23274222) Homepage Journal
    Threads have been considered a "bad idea" by the CompSci profession for a little while now. So there is definitely nothing new about the author's statements. That being said, there is a fundamental difference between Dijkstra's paper 40 years ago and this summary: Dijkstra started his paper by holding up examples of better practices. Only after establishing their existence did he go on to suggest that the GOTO keyword was "too primitive" to be of practical use in software development.

    The author of this "article" (and I use the term loosely) doesn't really present such options. He hand waves a few work-in-progress solutions at the end, compares threads to GOTO statements, then asks the readers to fill in the (rather sizable) blanks.

    Long story short, it's a good topic of discussion, but the comparison to Dijkstra's famous paper is just an advertising point. Nothing more, nothing less.
Indeed, "X Considered Harmful" is such a common title that the Jargon File has a whole entry for it [catb.org]. And the entry, of course, cites Dijkstra as the inspiration for the meme. (Others have disputed this, and claim it was common in mainstream journalism even earlier, but Dijkstra's famous essay clearly put the phrase on the map in the CS/IT world.) Merely using the phrase hardly indicates that a comparison to Dijkstra's classic work is necessary or justified.

      There has been, at least according to the aforem
  • Get a better programming language [vitanuova.com].
And if you don't like the taste of that one (what? Dennis Ritchie & Brian Kernighan not good enough for you!) there are other CSP [swtch.com] languages available (what? Sir Charles Hoare not good enough for you!)

    Seriously, this problem has been solved for 30 years.
  • I wonder how long this trend in this discussion will continue of most every post being its own post as opposed to replying to a previous post.
    • I wonder how long this trend in this discussion will continue of most every post being its own post as opposed to replying to a previous post.

      ... because, as the article says, "threads are bad." But I'll do threads, 'cuz I'm evil :-)

  • The motivation for widespread parallel programming seems to be that there is this upcoming glut of multicore PC chips that will get wasted if we all don't start writing concurrent programs. But is that really true? Most programs don't get any speedup from parallelization; at best a UI/core split helps the responsiveness of an app. Chances are a SMP OS would be able to reap most of the available gain.

    • Chances are a SMP OS would be able to reap most of the available gain.

      Doing what? I'd hope any OS (SMP or otherwise) doesn't significantly use the CPU itself. For an SMP OS to have something to gain by multicore, it has to come from a parallelizable workload, hence threads. (Unless you have enough runnable processes, but that often isn't so.)

      Unless you meant more responsiveness / lower interrupt latency or something like that in an OS, which is fair enough, but not exactly using multicore to its fulles

  • Hmm (Score:3, Interesting)

    by LizardKing ( 5245 ) on Friday May 02, 2008 @09:33AM (#23274382)

The problem is not threads per se, but the way they are generally used in programming languages like C and C++. Although const correctness is understood by some C++ programmers, they appear to be a minority if I judge by the code I regularly review. There is also memory management, which is a much bigger issue in threaded C/C++ applications than in applications written in Java. The Java library provides good examples of immutable classes, most prominently the String class, that remove a number of problems often encountered with their mutable cousins like std::string. Unlike std::string, I don't have to remember to make it immutable by constifying it or wrapping it. The presence of immutable classes, and the more adequate coverage given them along with threading in Java textbooks, means that I disagree with the article's author who lumps Java threads in with pthreads as a bad thing. What we need is more coverage of threading issues and how to alleviate them in intermediate level C/C++ textbooks, because despite the fact that threading is not built into those languages or their standard libraries, concurrency has become too important to ignore once you go beyond the basics.
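The pattern referred to here is easy to sketch. A minimal immutable value class in the style of java.lang.String (the class itself is a made-up example, not from the Java library): once constructed, an instance can be shared between threads with no locking, because there is no state to race on.

```java
// Sketch: an immutable value class, String-style.
public final class Point {            // final: no mutable subclass
    private final int x;              // final fields, assigned exactly once
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int x() { return x; }
    public int y() { return y; }

    // "Mutation" returns a new object instead of changing this one,
    // just as String.concat() does.
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}
```

The C++ equivalent requires discipline (const members, deleted assignment) that the language doesn't supply by default, which is the asymmetry the comment describes.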

    • by Animats ( 122034 )

      The problem is not threads per se, but the way they are generally used in programming languages like C and C++

      Right. C and C++ provide zero help in dealing with the isolation issues of threading. The languages have no concept of parallelism (there's "volatile", but that's about it.) There were 1980s languages that did offer some help, such as Modula I/II/III, Ada, and Occam. Java has some minimal concurrency support, although it's not well thought out.

      There's nothing wrong with multithreaded program

      • Re: (Score:3, Informative)

        by LizardKing ( 5245 )

If you're familiar with CORBA, and underwhelmed by SOAP, then you might like to check out ICE [zeroc.com]. It's an attempt to do CORBA "better" - in other words, without the designed-by-committee, everything-bar-the-kitchen-sink aspects that ruined CORBA. I was initially a little bit sceptical, but having played around with it for a notifications system I've become really impressed.

        • by Animats ( 122034 )

          you might like to check out ICE

There are lots of little IPC packages. Too many, actually. OpenRPC is somewhat dated, but OK. The last time I had to do this, I used Python, cPickle and pipes. But that wasn't an IPC-intensive application. When we did a robot vehicle for the DARPA Grand Challenge, there were about 20 processes communicating via QNX MsgSend/MsgReceive, and that worked out well, including dealing with hard real-time constraints, heavy computation on the same CPU as low-level hard real time

      • by dodobh ( 65811 )
        Multi process event driven programming, with lightweight message passing.

        Don't call a method in the other process, send it data instead.
    • Re: (Score:3, Interesting)

      Yeah. The main problem I've seen is that many developers apparently don't understand event-driven programming at all, so they end up creating dozens of threads to poll for various conditions, and then usually fail to come up with a thread-safe way of coordinating the whole mess. Threads aren't the problem; applications will always use threads, even if it's not explicit. Incompetent developers are the problem.
Exactly. We need a bigger focus on multi-threading within the current languages, if still possible.

      I've just seen a very nice presentation on how null pointer exceptions can be avoided using annotations in the Java language. The same presentation also showed how to annotate variables to be immutable: changes to the variable would be picked up by the compiler.

      That said, with the possible exception of Java arrays, it's pretty easy to make variables immutable (basic types are referenced by value, collec
  • You know the big difference between TFA and Edsger Dijkstra's paper?

The second one made an argument, showed alternatives that were at least summarily demonstrated to be better, and used reasoning.

    The first one just says "Edsger Dijkstra's paper said goto was harmful and he ended up being right, thus if I say threads are harmful, I'm also right. Oh and here are some threading libraries I've found in a quick google search, they might be better."
  • Slowaris is behind threading. Because it was so slow to create new processes, the only way they could compete with Linux (which forks very quickly) was to create threading. Threading *is* faster than forking, but it also creates HUGE synchronization problems. You can overcome these problems, at the cost of more complicated, more fragile programs that take more time to write and more time to get right.

    Linux doesn't need threading.
    • by LizardKing ( 5245 ) on Friday May 02, 2008 @10:02AM (#23274862)

      Complete crap. Threads solve a number of programming problems much more elegantly than forked processes and sharing data through some IPC mechanisms. Anecdote time: a stock price system I worked on. The first generation used separate processes for a single writer and a large number of readers, with shared memory for interprocess communication. This was switched to a threaded implementation for the second generation, which was faster, even though it was using the old LinuxThreads implementation, and more easily maintained as the pthreads API is much richer than IPC ones.

      • Threads work fine (Score:3, Interesting)

        by mugnyte ( 203225 )
        Going beyond what you state - indeed I agree: Threads have a useful place in the toolbox. Perhaps this will mark me as "old" at some point in the world of programming.

I use them routinely on MS platforms. Background threads for write-behind mechanisms, for self-tuning caches, for animation. The sharing between threads is the more precise problem, not the threads themselves. If one knows how to examine the context of a thread, one can see all shared points and code accordingly. This is no different th
    • by ceoyoyo ( 59147 )
      You can overcome synchronization problems with threads in EXACTLY the same way you do it with processes.

      The problem seems to be that some of the more popular languages don't enforce that behaviour. In other words, the programmer has to be (gasp) competent!
    • Yeah, even better, use complete different VM's. Of course, if you want any communication between those you need to use sockets or files, including authentication etc.

      With different processes you have the same problem as between two different VM's, although you may make use of non-networked resources such as pipes and local files.

      The big advantage of threads is that they run in the same memory space. If you use managed code within the threads, this means that the threads cannot access the other thread's memo
  • A combination of specific implementations of threading and developers who use programming methodologies that were outdated already when the PDP-11 actually reached the market is.

    As a comparison, I learned to program properly on the Amiga, and its OS was natively threaded, and the architecture actually encouraged it(Likewise, unlike so many people who grew up with PC's, I also feel at home with programming for something like the PS3 or similar), and therefore I inherently think about how things can be split
  • By Edward Lee of the EECS department at Berkeley: http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.html [berkeley.edu]. Worth reading if you work with threads.
  • is that they may put threading problems beyond the control of the developer. What happens when your language environment has bugs in how it generates and manages threads? How much control would you have over what is being done behind the scenes? For that reason alone, I think it would be better to keep the use of threads purely optional, preferably as part of a library, rather than put into the core language itself as a series of features and keywords.
  • by jythie ( 914043 ) on Friday May 02, 2008 @10:01AM (#23274838)
    The problem is that programmers are generally untrained in them or trained very poorly.

Writing a safe threaded application is not a difficult task, but it is a different task than writing a single-threaded app. And unfortunately CS programs, books, tutorials, etc., still train people in the single-thread mindset, and yes, the programs they produce end up being buggy.

    And I'm not sure these 'high abstraction' languages are really the 'answer'. I have found that often in higher level solutions the results become even less predictable and tracing what is actually happening when becomes either extremely difficult, extremely inefficient, or just back to the single-thread mentality.

I think the OP's talk of how one might next be writing a parallel app shows the real flaw here... the author is going from one mentality, entering another without really thinking it through, and then complaining when old methods don't work well. Take a programmer who STARTED in parallel space and you don't run into these problems.
    • by Panaflex ( 13191 )
      I agree 100 percent!

      The problem isn't with threading - it's how developers approach threading.

      All shared write structures must have locking! Use global variables sparingly - perhaps only for communicating exit procedures.

      And if you can - use a garbage collector. Seriously - if you're not tied to real-time transactions, a GC is the way to go.
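The "all shared write structures must have locking" rule is mechanical to follow. A hedged Java sketch (the class is invented for illustration): one lock guards the shared field, and every access, read or write, goes through it.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: a shared write structure where every access takes the lock.
public class LockedCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private long count = 0;           // the shared write structure

    public void increment() {
        lock.lock();
        try {
            count++;                  // the only mutation, always under the lock
        } finally {
            lock.unlock();            // release even if the body throws
        }
    }

    public long get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LockedCounter c = new LockedCounter();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 50_000; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get()); // prints 200000
    }
}
```

The lock/try/finally shape is the discipline; skip it on even one access path and the guarantee evaporates.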
      • by jythie ( 914043 )
        *nods* I love ObjC's retain/release GC style for threading.

I actually think a good set of 'training wheels' for getting developers thinking about threaded environments is to introduce them to fork with anonymous shared memory. Much easier to encapsulate things, but it still teaches how to think in terms of multiple threads.
  • That way you CSers won't take over my turf as a FPGA designer.
Every web server that can handle more than one client simultaneously is basically multi-threaded. It's painfully clear that threads have been an enormously successful programming model and are here to stay. Concurrency is difficult to understand, but that doesn't mean it's not necessary.
  • I've already said this about a dozen times on /., but here goes anyway ;)

    In 2001 I worked at CERN, writing simulation and analysis software on a dual P3 machine. The language was Fortran 90, and the compiler made use of SIMD (MMX/SSE) on both processors to parallelize matrix algebra.

    The parallelism was abstracted away quite nicely, just as the article suggested. There was probably some thread/process creation under the hood to make use of both CPUs, but the calculations were basically SIMD in natur

  • If we are to have any hope of using computers to solve the most interesting problems then we will eventually be using parallelism on a massive scale. This is how the physical world works. All those atoms and particles swirling around you are not a serial batch process.
  • The phrase "considered harmful" implies that there is a large community consensus that something is considered harmful.

    But it's never used like that; it's always used by one opinionated loudmouth who is the only one in the world who considers the practice harmful. You want another example? Look at that crock of an essay "Reply-to munging considered harmful"; the only one who considers reply-to munging harmful is the opinionated loudmouth who wrote that essay, yet the title falsely implies that the community
  • The root of the problem is shared state, operations on shared state need to have ACI properties - atomic, consistent, and independent. Some languages / environments, like Erlang and QNX, solve this problem by basically getting rid of shared state and making all threads communicate with each other over socket-like abstractions. With common programming languages the solution is mutually exclusive locks. You lock up the memory you're working on then unlock it when you're done.

    Locks have problems. In order
  • The problem isn't threads, it's threading models that depend on shared state to communicate between threads. If you explicitly pass data between threads using a message paradigm you get most of the performance advantages of threads with the ease of programming in independent processes. Design the code around a model like message passing (as in the Amiga Exec, which was really a threaded share-everything environment, or QNX) or a database-style access method (which is what I've been doing in speedtables) and
that programmers are heir to, I would suggest that most people make a hash of single-thread programming... we were doing that for years before we started working in the kernel. So, what's a little more complexity when we already HAVE to have ways to combat it in design and test tools?

    I suppose our confidence in driving toward parallelism rests on intuition, such as analogizing that the brains of all higher animals are parallel processors therefore some solution to the problems of parallel computation mus
  • chickens (Score:4, Funny)

    by Viking Coder ( 102287 ) on Friday May 02, 2008 @03:36PM (#23279446)
    Q) Why did the multithreaded chicken cross the road?
    A) to To other the side. get the
  • by ulatekh ( 775985 ) on Friday May 02, 2008 @07:54PM (#23281374) Homepage Journal

    I'm tired of reading replies to this article that evangelize some fancy-schmancy high-level solution. I wonder if these advocates have ever tried writing production code in such an environment.

    Let me give you a wonderful example of when theory simply doesn't meet reality.

    Recently, I wrote a bunch of multi-threaded code for a next-generation asymmetric-multiprocessing game console that shall remain nameless. Its operating system has a wonderful complement of synchronization features. There's the usual mutex lock/unlock, and the usual condition signal/wait, but there are also event queues (queues of generic events that can be passed between threads running on different types of processors), lightweight mutexes/conditions, spinlocks, semaphores, reader/writer locks, and so on and so on. Truly a rich palette from which one can paint a wonderfully synchronized multi-threaded application! I then proceeded to try to rewrite a key section of our code in a very multi-threaded way.

    The problem was, the first version of this code added NINETY milliseconds per frame to our main thread. A profile showed that nearly all of the extra time was spent in the operating system's synchronization features.

    After much rewriting and much pain, I stopped using all of the operating system's synchronization features, and used processor-level atomic operations instead, and finally, the extra code accounted for only FOUR milliseconds per frame in our main thread (with the rest of the time successfully farmed out to separate threads).

    I challenge anyone with a fancy-schmancy automatic concurrency solution to demonstrate that it doesn't have this problem.
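The console code above isn't shown, so as a rough, hypothetical analog of the "atomic operations instead of OS synchronization" rewrite, here is the same idea using java.util.concurrent.atomic (the class name is invented): a compare-and-set retry loop that never takes a mutex or enters the kernel on the fast path.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: lock-free accumulation via processor-level atomics, in the
// spirit of the rewrite described above. Illustrative analog only.
public class AtomicAccumulator {
    private final AtomicLong total = new AtomicLong();

    // Compare-and-set retry loop: if another thread raced us between the
    // read and the CAS, the CAS fails and we simply retry.
    public void add(long delta) {
        long prev;
        do {
            prev = total.get();
        } while (!total.compareAndSet(prev, prev + delta));
    }

    public long get() { return total.get(); }

    public static void main(String[] args) throws InterruptedException {
        AtomicAccumulator acc = new AtomicAccumulator();
        Thread a = new Thread(() -> { for (int i = 0; i < 100_000; i++) acc.add(1); });
        Thread b = new Thread(() -> { for (int i = 0; i < 100_000; i++) acc.add(1); });
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(acc.get()); // prints 200000
    }
}
```

Whether this beats the OS primitives by the margin reported above depends entirely on contention and the platform, which is exactly the parent's point about theory versus measurement.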
