Is Parallel Programming Just Too Hard?
pcause writes "There has been a lot of talk recently about the need for programmers to shift paradigms and begin building more parallel applications and systems. The need to do this and the hardware and systems to support it have been around for a while, but we haven't seen a lot of progress. The article says that gaming systems have made progress, but MMOGs are typically years late and I'll bet part of the problem is trying to be more parallel/distributed. Since this discussion has been going on for over three decades with little progress in terms of widespread change, one has to ask: is parallel programming just too difficult for most programmers? Are the tools inadequate or perhaps is it that it is very difficult to think about parallel systems? Maybe it is a fundamental human limit. Will we really see progress in the next 10 years that matches the progress of the silicon?"
Nope. (Score:2, Insightful)
What's hard, is trying to write multi-threaded java applications that work on my VT-100 terminal.
our brains aren't wired to think in parallel (Score:5, Insightful)
I think the biggest reason why it is difficult is that people tend to process information in a linear fashion. I break large projects into a series of chronologically ordered steps and complete one at a time. Sometimes if I am working on multiple projects, I will multitask and do them in parallel, but that is really an example of trivial parallelization.
Ironically, the best parallel programmers may be good managers, who have to break exceptionally large projects into parallel units for their employees to complete simultaneously. Unfortunately, trying to explain any sort of technical algorithm to my managers usually elicits a look of panic and confusion.
Have some friggin' patience (Score:4, Insightful)
Oh noes! Software doesn't get churned out immediately upon the suggestion of parallel programming! Programmers might actually be debugging their own code!
There's nothing new here: just somebody being impatient. Parallel code is getting written. It is not difficult, nor are the tools inadequate. What we have is non-programmers not understanding that it takes a while to write new code.
If anything, the fact that the world hasn't exploded with massive amounts of parallel code is a good thing: it means that proper engineering practice is being used to develop sound programs, and the johnny-come-lately programmers aren't able to fake their way into the marketplace with crappy code, like they did 10 years ago.
Programmers (Score:2, Insightful)
No need to worry about memory management, java will do it for you.
No need to worry about data location, let the java technology of the day do it for you.
No need to worry about how/which algorithm you use, just let java do it for you, no need to optimize your code.
Problem X => Java cookbook solution Y
Yes, difficult, but our brains are not limited. (Score:2, Insightful)
Programmers who are accustomed to non-parallel programming environments forget to think about the synchronization issues that come up in parallel programming. Many conventional programs do not take into account the shared-memory synchronization or message-passing requirements needed for them to work correctly in a parallel environment.
This is not to say that there will not be any progress in this field. There will be, and there has been. The design techniques and best practices for parallel programming differ from those for conventional programming. Also, there is currently limited IDE support for debugging. There are already several books on this topic and classes in the universities. As the topic becomes more and more important, computer science students will be required to take such classes (as opposed to them being optional), and more and more programmers who are experts in parallel programming will be churned out. It's just not as popular because the universities don't currently seem to make it a required subject. But that will change because of the advancement in hardware and the growing market demand for expert parallel programmers.
Our brains might be limited about other things, but this is just a matter of better education. 'Nuff said.
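A minimal sketch of the kind of shared-memory synchronization being described, in Python (an illustration, not from the comment; names are made up): several threads bumping one counter will silently lose updates unless the read-modify-write cycle is serialized.

```python
import threading

def count_with_lock(n_threads=4, increments=10_000):
    """Shared counter guarded by a lock. Without the lock, two
    threads can both read the same old value of `total`, both add
    one, and both write back -- losing one of the increments."""
    total = 0
    lock = threading.Lock()

    def worker():
        nonlocal total
        for _ in range(increments):
            with lock:  # serialize the read-modify-write
                total += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total
```

With the lock the result is always exactly n_threads * increments; drop the `with lock:` line and the total comes up short nondeterministically.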
Clusters? (Score:4, Insightful)
Funny, I've seen an explosion in the number of compute clusters in the past decade. Those employ parallelism, of differing types and degrees. I guess I'm not focused as much on the games scene - is this somebody from the Cell group writing in?
I mean, when there's an ancient Slashdot joke about something there has to be some entrenchment.
The costs are just getting to the point where lots of big companies and academic departments can afford compute clusters. Just last year multi-core CPUs made it into mainstream desktops at mainstream prices (ironically, more in laptops so far). Don't be so quick to write off a technology that's just out of its first year on the desktop.
Now, that doesn't mean that all programmers are going to be good at it - generally programmers have a specialty. I'm told the guys who write microcode are very special, are well fed, and generally left undisturbed in their dark rooms, for fear that they might go look for a better employer, leaving the current one to sift through a stack of 40,000 resumes to find another. I probably wouldn't stand a chance at it, and they might not do well in my field, internet applications - yet we both need to understand parallelism - they in their special languages and me, perhaps with Java this week, doing a multithreaded network server.
Yes and no. Languages help (or hinder). (Score:2, Insightful)
A special case of this is seen in the vector units found in today's CPUs: MMX, SSE, Altivec, and so forth. You can't write a C compiler that takes advantage of these units (not easily, anyway), because the design of C means that a programmer takes the mathematics and splits it up into a bunch of sequential instructions. Fortran, on the other hand, is readily adapted, because the language is designed for mathematics; you tell it what you need to do, but not how to do it.
In the same way, trying to cram parallelism into a program written in C is a nightmare. Semaphores, exclusive zones, shared variables, locking, deadlocks...
As time marches on, and the reality of the situation becomes increasingly obvious, I would expect that performance-intensive apps will start to be written in languages better suited to the domain of parallel programming. Single-threaded apps will remain - e.g., Word doesn't really need any more processing power (MS' best efforts to the contrary notwithstanding) - and C-like languages will still be used in that domain, but I don't think inherently sequential languages, like C, C++, and others of that nature, will be as common in five or ten years as they are today, simply because the rise of the parallel programming domain necessitates the rise of languages that mimic that domain better than C does.
Re:our brains aren't wired to think in parallel (Score:5, Insightful)
You may want to look into Erlang, which does two things that will interest you:
There are still concurrent problems which are hard, but generally it boils down to the problem being hard instead of the language making the problem harder to express.
Re:our brains aren't wired to think in parallel (Score:3, Insightful)
I don't know about the rest of Slashdot, but I read comments in a linear fashion - one comment, then the next comment, etc. Most people that I have known read in a linear fashion.
Walking and chewing gum is a parallel process. Reading is generally linear.
Yes, because programmers are too conservative (Score:5, Insightful)
No, the different sorts of paradigms I'm talking about are no-shared-state, message-passing concurrency models a la CSP [usingcsp.com], the pi calculus [wikipedia.org], and the Actor Model [wikipedia.org]. That sort of approach to thinking about the problem shows up in languages like Erlang [erlang.org] and Oz [wikipedia.org], which handle concurrency well. The aim here is to make message passing and threads lightweight and integrated right into the language. You think in terms of actors passing data, and the language supports you in thinking this way. Personally I'm rather fond of SCOOP for Eiffel [se.ethz.ch], which elegantly integrates this idea into OO paradigms (an object making a method call is, ostensibly, passing a message after all). That's still research work though (only available as a preprocessor and library, with promises of eventually integrating it into the compiler). At least it makes thinking about concurrency easier, while still staying somewhat closer to traditional paradigms (it's well worth having a look at if you've never heard of it).
The reality, however, is that these new languages, which provide the newer and better paradigms for thinking and reasoning about concurrent code, just aren't going to get developer uptake. Programmers are too conservative and too wedded to their C, C++, and Java to step off and think as differently as the solution really requires. No, what I expect we'll get is kludginess retrofitted onto existing languages in a slipshod way that sort of works: an improvement over previous concurrent programming in those languages, but not the leap required to make the problem truly, significantly easier.
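The actor idea is small enough to sketch in any language. A toy version in Python (my own illustration; real actor runtimes like Erlang's make these far cheaper): private state plus a mailbox, with messages as the only way in.

```python
import queue
import threading

class SummingActor:
    """Toy actor: private state plus a mailbox. Nothing outside the
    actor touches `total`, and only the single mailbox-draining
    thread writes it, so no locks are needed anywhere."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.total = 0  # private state of this actor
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def send(self, msg):
        # the only way to interact with the actor: drop a message in
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg == "stop":
                break
            self.total += msg

    def join(self):
        self._thread.join()
```

Usage: create the actor, `send()` it numbers from any thread, send "stop", and `join()`; the sum is then safe to read because the actor's thread has finished.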
Amdahl's law (Score:5, Insightful)
Now there's been lots of work on eliminating those single-threaded bits in our algorithms, but every new software problem needs to be analyzed anew. It's just another example of the no-silver-bullet problem of software engineering...
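For concreteness, Amdahl's law itself is one line; a sketch (function name is mine):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: overall speedup is bounded by the serial
    fraction of the program, no matter how many cores you add."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)
```

Even a program that is 95% parallelizable tops out below 20x: at 16 cores the speedup is only about 9.1x, and a billion cores barely close the gap to 20x. That's why those residual single-threaded bits matter so much.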
Re:our brains aren't wired to think in parallel (Score:5, Insightful)
The first problem is that people who make good managers make lousy coders. The second problem is that people who make good coders make lousy managers. The third problem is that plenty of upper management types (unfortunately, I regard the names I could mention as genuinely dangerous and unpredictable) simply have no understanding of programming in general, never mind the intricacies of parallelism.
However, resource management and coding is not enough. These sorts of problems are typically either CPU-bound and not heavy on the networks, or light on the CPU but are network-killers. (Look at any HPC paper on cascading network errors for an example.) Typically, you get hardware which isn't ideally suited to either extreme, so the problem must be transformed into one that is functionally identical but within the limits of what equipment there is. (There is no such thing as a generic parallel app for a generic parallel architecture. There are headaches and there are high-velocity exploding neurons, but that's the range of choices.)
bad education (Score:4, Insightful)
These same programmers often think that ideas like "garbage collection", "extreme programming", "visual GUI design", "object relational mappings", "unit testing", "backwards stepping debuggers", and "refactoring IDEs" (to name just a few) are innovations of the last few years, when in reality, many of them have been around for a quarter of a century or more. And, to add insult to injury, those programmers are often the ones that are the most vocal opponents of the kinds of technologies that make parallel programming easier: declarative programming and functional programming (not that they could actually define those terms, they just reject any language that offers such features).
If you learn the basics of programming, then parallel programming isn't "too hard". But if all you have ever known is how to throw together some application in Eclipse or Visual Studio, then it's not surprising that you find it too hard.
Re:Not justifyable (Score:5, Insightful)
If your programmers are telling you they need more time to turn a single-threaded game into a multi-threaded one, then the correct solution IS to push the game out the door, because it won't benefit performance to try to do it at the end of a project. It's a fundamental design choice that has to be made early on.
Re:our brains aren't wired to think in parallel (Score:5, Insightful)
Every process is serial from a broad enough perspective. Eight hypothetical modems can send 8 bits per second. Or are they actually sending a single byte?
And what of the other limit? (Score:2, Insightful)
So we're looking at multiple tasks. Obviously, gaming and operating systems get to split by engine (graphics, network, interface, each type of sound, A.I., et cetera). I'd guess that there is a limit of about 25 such engines that anyone can dream up. Obviously raytracing gets to have something like 20 cores per pixel, which is really, really cool. But that's clearly the exception.
So really, in my semi-expert and wholly professional opinion, I think priming is the way to go. That is, it takes the user up to a second to click a mouse button. So if the mouse is over a word, start guessing. Get ready to bold it, underline it, turn it into a table, look up related information, pronounce it, stretch it, whatever.
Think of it as: "we have up to a full second to do something. We don't know what it's going to be, but we have all of these cores just sitting here. So we'll just start doing stuff." It's the off-screen buffering of the user task world.
The result is that just about any task, no matter how complicated, can be presented instantly -- having already been calculated.
Doesn't exactly save power, but hey, the whole point is to utilize power. And power requires power. There must be someone's law -- the conservation of power, or power conversion, or something that discusses converting power between various abstract or unrelated forms -- muscle to crank to generator to battery to floating point to computing to management to food, et cetera; whatever.
Right, so priming / off-screen buffering / preloading. Might as well parse every document in the directory when you open the first one. Might as well load every application on start-up. Can't have idle cores lying around just for fun.
Has anyone ever thought that maybe we won't run out of copper after all? I'd bet that at some point in the next twenty years, we go back to clock speed improvement. I'd guess that it's when core busing becomes ridiculous.
I still don't understand how we went from shared RAM as the greatest thing in the world to GPU's with on-board RAM, to CPU's with three levels of on-chip RAM, to cores each with their own on-core RAM.
Hail to the bus driver; bus driver man.
Re:our brains aren't wired to think in parallel (Score:3, Insightful)
It could also be stated as a twist on your view: looked at narrowly enough, anything can be considered parallel. However, realistically, we know that isn't really the case.
Re:Nope. (Score:5, Insightful)
There is a very real limit as to how much you can parallelize standard office tasks.
Re:Nope. (Score:3, Insightful)
The iLife suite. Especially iMovie. And let's not forget the various consumer and professional incarnations of Photoshop--none of them support more than two cores.
What kind of parallel programming? (Score:4, Insightful)
Are we talking about situations that lend themselves to data flow techniques? Or Single Instruction/Multiple Data (e.g. vector) techniques? Or problems that can be broken down and distributed, requiring explicit synchronization or avoidance?
I agree with other posters (and I've said it elsewhere on
Then there's the related issue of long-lived computation. Many of the interesting problems in the real-world take more than a couple of seconds to run, even on today's machines. So you have to worry about faults, failures, timeouts, etc.
One place to start for distributed processing kinds of problems, including fault tolerance, is the work of Nancy Lynch of MIT. She has 2 books out (unfortunately not cheap), "Distributed Algorithms" and "Atomic Transactions". (No personal/financial connection, but I've read & used results from her papers for many years...)
I get the sense that parallelism/distributed computation is not taught at the undergrad level (it's been a long time since I was an undergrad, and mine is not a computer science/software engineering/computer engineering degree.) And that's a big part of the problem...
dave
Re:Two words: map-reduce (Score:3, Insightful)
by Anonymous Coward on 07-05-29 6:29 (#19305183)
Implement it, add CPUs, earn billion$. Just Google it.
Wow, fantastic. Turns out parallel programming was really really easy after all. I guess we can make all future computer science courses one week then, since every problem can be solved by map-reduce! The AC has finally found the silver bullet. Bravo, mods.
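To be fair to the AC, for the embarrassingly parallel problems map-reduce does fit, the pattern really is small. A toy word-count sketch (the classic example; run serially here, but each map call is independent and could go to its own CPU):

```python
from collections import defaultdict

def map_phase(chunk):
    # map: each chunk independently emits (word, 1) pairs
    return [(word, 1) for word in chunk.split()]

def reduce_phase(pairs):
    # reduce: merge all the emitted pairs by key
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

def map_reduce(chunks):
    pairs = []
    for chunk in chunks:  # each of these calls is independent work
        pairs.extend(map_phase(chunk))
    return reduce_phase(pairs)
```

The catch, as the parent says, is that plenty of interesting problems don't decompose into independent map calls plus an associative reduce in the first place.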
Re:A different approach to parallel programming (Score:4, Insightful)
Basically like a compiled function?
Parallel Programming is EASY (APL, J, K languages) (Score:2, Insightful)
I was so impressed with APL that I implemented an APL-like derivative language called Simmunity, which is a hybrid compiler for an APL-like syntax. Simmunity can be targeted at a parallel processor implemented in FPGAs that provides multiple simultaneously executing vector/matrix operations in parallel... (stay tuned for more info)
The quintessential parallel database programming language available is clearly Kx Systems' Q and KDB+. See the faq at http://kx.com/faq/ [kx.com] and the tutorial at http://kx.com/q/d/primer.htm [kx.com]
Ken Iverson invented APL and then the J language, which anyone interested in really discussing parallel programming should look at closely. The Connection Machine LISP had many of the APL operators/functions, and certainly Map/Reduce, which it added to provide a convenient parallel expression language.
Most programmers think in terms of sequential code execution with threads and processes. APL encourages the programmer to conceive of programming using vector and matrix operations to process strings and numbers and manipulate data like a spreadsheet. An application that people might be familiar with is Lotus Improv, which provided naturally parallel expressions based on a subset of APL operations.
Cheers, Simbuddha
Re:Not justifyable (Score:3, Insightful)
Coder: Well I suppose it can but...
PHB: Shove it out the door then.
I think the PHB may have a point. Games are expensive to make, gamers are fickle, and if reviews say a game "looks dated" it is a death sentence for sales. Rewriting an engine to be multi-threaded after it is done will take a lot of time and money, and most PCs out there have more dated hardware than you think, so most current players won't see any benefits whatsoever from it anyway.
Patience is not a goal here. (Score:3, Insightful)
2. Parallel code is a monster to write. I'm not talking simple scatter-gather data spreaders. Imagine Adobe Photoshop running across a 400-machine cluster, handling hundreds of users at a time. The data concurrency issues, data residence, locking, message handling, message reordering... a total bloody nightmare. If you've parallelized a Markov model, it doesn't really compare.
3. The tools aren't adequate. Tracing a data race or deadlock in a cluster is a beast. MPI and PVM are nice but are really narrow in the scope of problems they handle.
4. It isn't just non-programmers. Parallel is a whole different scale of complexity. Almost everything I see is "parallelize the brute zones of a specialty engine once it works in serial". It's an important baby step, and it really throws non-programmers for a loop. But we are a far sight from having an implicitly parallel version of MS Word.
5. Parallel isn't new; dual-CPU boxes have been in userspace since the late 90's, yet parallelism has been mostly ignored by applications. The use of network resources is horribly behind the times. The ability to aggregate resources on the fly is a total joke compared to where it should be.
Storm
Re:bad education (Score:2, Insightful)
I get the feeling a lot of you hardcore guys miss the idea that multicore systems are still very helpful, regardless of whether a threading model is implemented or not. When a machine is responsible for running an RDBMS, webserver, FTP server, mail server and an application pool, adding a threading model to your application can become a serious bottleneck.
I would venture to guess that the majority of professional developers (not at an academic, student or research level) would hardly ever have the need for parallel systems.
The point I am trying to make is: the benefits of a multicore machine go FAR beyond individual applications in the real world.
Re:Nope. (Score:4, Insightful)
That's difficult to say because true functional programming is so vastly different. We have so much time and energy invested in imperative algorithms that it's difficult to know whether or not functional methods are easier or more difficult to design.
In a sense, it's like saying that Hybrid Synergy Drive is simpler than a traditional transmission. It's true on a conceptual level, but Toyota hasn't tried to put HSD everywhere it has put a traditional transmission and therefore we may not fully understand the complexities of trying to extrapolate the results to the entire problem space.
So, I think the bottom line is, functional programming probably wouldn't be any harder if it existed in a world where it was dominant.
Remember, a large part of being an effective programmer is understanding how (and if) your problem has been solved before. It may be far from optimal, but CPUs are largely designed to look like they are executing sequential instructions. Multicore and multithreaded designs are changing that model, but it's not a change that happens overnight.
Re:Nope. (Score:5, Insightful)
Closer is this: After some more work and a rewrite (for other reasons), I had "Fracked" running n threads, each rendering 1/n of the display. Data parallelism == easy parallelism.
But a lot of problems don't fit these models, and need a LOT of thought put into how to parallelize them. It's likely that some problems in P are not efficiently parallelizable.
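The 1/n-of-the-display trick is easy to sketch (my own illustration, in Python; `render_strip` stands in for real per-row rendering work):

```python
import concurrent.futures

def render_strip(rows):
    # stand-in for per-row rendering; each row is independent work
    return [r * r for r in rows]

def render_parallel(height, n_workers=4):
    """Data parallelism: n workers each render 1/n of the rows.
    Easy precisely because no strip depends on any other strip."""
    # interleaved strips: worker i takes rows i, i+n, i+2n, ...
    strips = [range(i, height, n_workers) for i in range(n_workers)]
    with concurrent.futures.ThreadPoolExecutor(n_workers) as pool:
        results = list(pool.map(render_strip, strips))
    # reassemble the strips back into row order
    frame = [0] * height
    for strip, rendered in zip(strips, results):
        for row, value in zip(strip, rendered):
            frame[row] = value
    return frame
```

The hard problems are exactly the ones where you can't carve the work into independent strips like this.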
Re:We don't think in recursion either (Score:3, Insightful)
It goes to show that to become a better programmer, you should investigate as many programming paradigms as possible.
Nitpick (Score:3, Insightful)
The language isn't called LabView, it's called "G". LabView is the whole package (G interpreter/compiler, drivers, libraries, etc).
And, yeah, G is pretty cool. National Instruments offers a slightly dated, but otherwise completely functional version of LabView for free for noncommercial use.
That's irrelevant. (Score:5, Insightful)
Our cognitive system does many things at the same time, yes. That doesn't answer the question that's being posed here: whether explicit, conscious reasoning about parallel processing is hard for people.
Re:Nope. (Score:3, Insightful)
The truth is, languages such as C/C++ and Java (to a lesser extent) are not good languages to write parallel code in. They do not have built-in constructs such as process communication links, mutexes, and semaphores that parallel programs rely on. You end up writing arcane code to get this sort of functionality.
The other problem is that the debugging tools make monitoring multiple parallel programs difficult.
20 years ago I was using a language called Occam that made writing multiple parallel programs a breeze; 10 years ago I was using Erlang, which again allowed us to easily generate thousands of lightweight threads.
Today I am still struggling to write the same things in C++ using the standard windows process constructs.
Re:Nope. (Score:1, Insightful)
Having said that, Erlang is a very nice language to learn. I use it to control message passing between bits of number crunching code on different machines, as it is easier than writing boilerplate in C++.
Actually it's just pipelined (Score:5, Insightful)
Try reading two different texts side by side, at the same time, and it won't work that neatly parallel any more.
Heck, there were some recent articles about why most Powerpoint presentations are a disaster: in a nutshell, because your brain isn't that parallel, or doesn't have the bandwidth for it. If you try to read _and_ hear someone saying something (slightly) different at the same time, you just get overloaded and do neither well. The result is those time-wasting meetings where everyone goes fuzzy-brained and forgets everything as soon as the presentation flips to the next chart.
To get back to the pipeline idea, the brain seems to be quite the pipelined design. Starting from say, the eyes, you just don't have the bandwidth to consciously process the raw stream of pixels. There are several stages of buffering, filtering out the irrelevant bits (e.g., if you focus on the blonde in the car, you won't even notice the pink gorilla jumping up and down in the background), "tokenizing" it, matching and cross-referencing it, etc, and your conscious levels work on the pre-processed executive summary.
We already know, for example, that the shortest term buffer can store about 8 seconds worth of raw data in transit. And that after about 8 seconds it will discard that data, whether it's been used or not. (Try closing your eyes while moving around a room, and for about 8 seconds you're still good. After that, you no longer know where you are and what the room looks like.)
There's a lot of stuff done in parallel at each stage, yes, but the overall process is really just a serial pipeline.
At any rate, yeah, your eyes may already be up to 8 seconds ahead of what your brain currently processes. It doesn't mean you're that much of a lean, mean, parallel-processing machine, it just means that some data is buffered in transit.
Even time-slicing won't really work that well, because of that (potential) latency and the finite buffers. If you want to suddenly focus on another bit of the picture, or switch context to think of something else, you'll basically lose some data in the process. Your pipeline still has the old data in it, and it's going right to the bit bucket. That or both streams get thrashed because there's simply not enough processing power and bandwidth for both to go through the pipeline at the same time.
Again, you only need to look at the fuzzy-brain effect of bad Powerpoint presentations to see just that in practice. Forced to try to process two streams at the same time (speech and text), people just make a hash of both.
Re:No. No. No. (Score:3, Insightful)
You can get a 4 core chip for under $600 now because of it. If you are into high performance computing then you should beg the game developers for something that can use as many cores as you can throw at it. Because as you said, you are 0% of the market, and gamers are a huge chunk of it.
It is hard, but doable (Score:2, Insightful)
Looking at the proposals for switching languages, they just shift the problem around (as another poster has already mentioned). The only thing new that might be useful is the idea of "Guards" from Haskell (for why, see below). But you don't need a new programming language to implement that, with a little imagination you can build it into a C++ library.
In my experience, 99% of the problems are due to two problems: overlooked shared resources and deadlocks. An example of the first problem might be when two threads are pushing/popping from the same queue, causing duplicates or skips. An example of the second would be thread A has resource X and is waiting for resource Y, while thread B has resource Y and is waiting for resource X. Guards could possibly make it easier to avoid some forms of deadlock by making it easier to acquire multiple resources simultaneously with less risk of deadlock. I think I have a new library to go code now... ;-)
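One cheap way to get the "acquire multiple resources simultaneously" discipline in plain code is a fixed global lock order; a sketch (my own illustration; ordering by id() is an assumption that only holds within one process):

```python
import threading

def acquire_in_order(*locks):
    """Acquire several locks in one global order (here: by id()) so
    that two threads can never each hold the lock the other wants.
    The hold-and-wait cycle behind the classic A-has-X-wants-Y /
    B-has-Y-wants-X deadlock cannot form."""
    ordered = sorted(locks, key=id)
    for lock in ordered:
        lock.acquire()
    return ordered

def release_all(ordered):
    # release in reverse acquisition order
    for lock in reversed(ordered):
        lock.release()
```

Two threads calling acquire_in_order(x, y) and acquire_in_order(y, x) contend, but neither can deadlock, because both end up acquiring in the same order regardless of argument order.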
A better approach to parallel programming (Score:3, Insightful)
What is needed is a new software paradigm that uses parallelism to start with. It should be a system based on elementary communicating objects. And the best way to depict parallel objects and signal pathways is to do it graphically. I say that the entire computer industry has been doing it wrong from the beginning (since Lady Ada Lovelace). It is time to move away from the algorithmic model and adopt a non-algorithmic, synchronous software model. Even our processors should be redesigned for the new model. Only then will we find that parallel programming is not only easy but the only way to do it.
Re:our brains aren't wired to think in parallel (Score:3, Insightful)
The problem isn't that our brains or our consciousness are unable to do this, however. All programming involves interactions between details that are potentially too much for us to handle. Good programming is limiting interactions to a small number of well-defined interfaces, and very good programming reduces the amount of context you have to consider to what you can visually scan in a few seconds.
The problem is that the paradigms for interacting parts are not familiar.
I've been through this several times in my career. When I started most programs were of a filter-ish nature; they started at the top and fell to the bottom, with a number of abstracted side trips via subroutine calls. Then came the GUI, and even basic programs became systems like operating systems, in that they ran for an indeterminate length of time, orchestrating responses to a stream of events of unpredictable, potentially unlimited length. This was a massive shift for programmers; many thought it was impossibly difficult for a typical programmer to work on this kind of system.
Same thing happened when web applications came along. We supposedly couldn't use the "simple" model of a single process working in a common address space anymore as applications started to take on a distributed flavor, distributed not across address spaces, but application hosts: browser based javascript, web application container, web services hosts, database hosts etc.
Parallel programming isn't the last new paradigm programmers will be asked to absorb. People will do it, then move on to complain about the next big thing.
Re:bad education (Score:4, Insightful)
Not to be misunderstood: I do think there is a place for good tools to support working programmers, it's just that the tools that are widespread are mainly aimed at getting people "hooked" on them, not at supporting experienced professional programmers optimally.
I think you are misunderstanding the objective of the tools you are naming... they were conceived to help developers in their work. It is not Microsoft's or the Eclipse Foundation's fault (or the Gnome guys') that wannabe "code monkeys" play with their tools for 2 weeks and add them as a skill on their CV. Look at those tools as if they were carpenter's tools, like an electric drill or handsaw. They are conceived to make the carpenter's work easier, but you *do* need to *know* what you are doing. You can't say that they hurt a carpenter's productivity because they entice people who don't know anything about woodworking to grab a DIY book and put an ad in the newspaper...
Re:Nope. (Score:3, Insightful)
Agreed. I just replaced my old machine with a dual-core machine a few months ago, and I find that I basically always have one cpu idle. When things are slow, they're usually slow because they're waiting on I/O. At one point I thought I had a killer app, which was ripping MP3s from CDs. I would run cdparanoia from the command line to rip, and then when that was done, I would have 01.wav, 02.wav, 03.wav, etc. I wrote little one-liner scripts that would then run the MP3 encoder so that one cpu would be doing all the odd-numbered tracks, the other cpu all the evens. Worked great. But then I decided to automate the whole thing a little more, so I wrote a fancy perl script that would start three sub-processes, one for ripping and two for encoding the tracks as soon as they were done being written to disk. Well it turned out that a single cpu was almost always capable of keeping up with the speed of my cd drive, so I still ended up with one idle cpu after all. You might think, "Hey, you can use that other cpu to run your gui apps." Actually, the linux scheduler is good enough that as long as I run the MP3 encoder with "nice," it has zero impact on interactive jobs, even when they're sharing a cpu. Typing characters into a word-processor just isn't a very cpu-intensive application.
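The rip-then-encode setup is the classic producer-consumer shape; a sketch (pure stdlib, my own illustration, with the actual ripping and encoding stubbed out):

```python
import queue
import threading

def pipeline(tracks, encode):
    """Producer-consumer sketch of the rip-then-encode setup: one
    producer 'rips' tracks into a queue, one consumer 'encodes' them
    as they arrive. Overall throughput is set by the slower stage --
    which is exactly why the second CPU sat idle above."""
    q = queue.Queue()
    encoded = []

    def producer():
        for t in tracks:  # stands in for cdparanoia writing .wav files
            q.put(t)
        q.put(None)       # sentinel: no more tracks coming

    def consumer():
        while True:
            t = q.get()
            if t is None:
                break
            encoded.append(encode(t))

    workers = [threading.Thread(target=producer),
               threading.Thread(target=consumer)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return encoded
```

If the producer (the CD drive) is the slow stage, adding more consumer CPUs buys nothing, which is the parent's point.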
Re:Nope. (Score:4, Insightful)
The reason the problems don't fit these models is more that we're used to thinking about algorithms as an ordered list of steps, rather than as a set of workers on an assembly line (operating as fast as the slowest individual worker).
Re:Nope. (Score:3, Insightful)
Here we use it quite extensively because it's the best way to handle parallel code and shared data structures.
It's probably the most useful thing in Java 5 & 6, and the least used.
Functional programmers: lazy/stupid (Score:1, Insightful)
On a more general note, I don't know why people have taken to evangelising functional programming for this discussion. Functional languages are particularly not the solution for parallel programming. People are too dumb or lazy to write explicitly parallel code and think that if they write everything in a functional language, the compiler will handle that messy business for them. Wrong! As soon as you get to something moderately complex like matrix multiplication, functional languages will fail to find a good strategy. There is no way in hell a compiler can figure out an optimal way to parallelise matrix multiplication for an arbitrary architecture, because the optimal matrix blocking and memory access patterns depend on the processor's cache design. The ATLAS project achieves this to some extent, but it is a highly specialised 'compiler', if you can call it that, and needs to know the CPU type and cache sizes/organisation to work optimally.
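The blocking being described looks something like this in scalar code (my own sketch): the same arithmetic as the naive triple loop, just walked in cache-sized tiles. The block size here is arbitrary; picking it well for a given cache is exactly the machine-specific part a generic compiler can't guess.

```python
def matmul_blocked(a, b, block=2):
    """Cache-blocked matrix multiply over lists of lists. Identical
    arithmetic to the naive i-k-j triple loop, but visited in
    block-sized tiles so each tile of a, b, and c stays hot in
    cache while it is reused."""
    n, m, p = len(a), len(b), len(b[0])
    c = [[0.0] * p for _ in range(n)]
    for ii in range(0, n, block):
        for kk in range(0, m, block):
            for jj in range(0, p, block):
                # multiply one tile; min() handles ragged edges
                for i in range(ii, min(ii + block, n)):
                    for k in range(kk, min(kk + block, m)):
                        aik = a[i][k]
                        for j in range(jj, min(jj + block, p)):
                            c[i][j] += aik * b[k][j]
    return c
```

Tuned BLAS implementations like ATLAS do essentially this, plus register blocking and vectorization, with the tile sizes chosen per CPU, which is the parent's point about why it can't be fully automatic.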
On a more general note I don't know why people have taken to evangelising functional programming for this discussion. Functional languages are particularly not the solution for parallel programming. People are too dumb or lazy to write explicitly parallel code and think if they write everything in a functional language the compiler will handle that messy business for them. Wrong! As soon as you get to something moderately complex like matrix multiplication functional languages will fail to find a good strategy. There is no way in hell a compiler can figure out an optimal way to parallelise matrix multiplication for an arbitrary architecture because the optimal matrix blocking and memory access patterns depend on the processors cache design. The ATLAS project achieves this to some extent but it is a highly specialised 'compiler' if you can call it that, and needs to know the cpu type and cache sizes/organisation to work optimally.