Is Parallel Programming Just Too Hard? 680
pcause writes "There has been a lot of talk recently about the need for programmers to shift paradigms and begin building more parallel applications and systems. The need to do this and the hardware and systems to support it have been around for a while, but we haven't seen a lot of progress. The article says that gaming systems have made progress, but MMOGs are typically years late and I'll bet part of the problem is trying to be more parallel/distributed. Since this discussion has been going on for over three decades with little progress in terms of widespread change, one has to ask: is parallel programming just too difficult for most programmers? Are the tools inadequate or perhaps is it that it is very difficult to think about parallel systems? Maybe it is a fundamental human limit. Will we really see progress in the next 10 years that matches the progress of the silicon?"
Nope. (Score:2, Insightful)
What's hard is trying to write multi-threaded Java applications that work on my VT-100 terminal.
Re:Nope. (Score:5, Interesting)
It is not difficult to justify parallel programming. Ten years ago, it was difficult to justify because most computers had a single processor. Today, dual-core systems are increasingly common, and 8-core PCs are not unheard of. And software developers are already complaining because it's "too hard" to write parallel programs.
Since Intel is already developing processors with around 80 cores [intel.com], I think that multi-core (i.e. multi-processor) processors are only going to become more common. If software developers intend to write software that can take advantage of current and future processors, they're going to have to deal with parallel programming.
I think that what's most likely to happen is we'll see the emergence of a new programming model, which allows us to specify an algorithm in a form resembling a Hasse diagram [wikipedia.org], where each point represents a step and each edge represents a dependency, so that a compiler can recognize what can and cannot be done in parallel and set up multiple threads of execution (or some similar construct) according to that.
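You can already approximate that today with futures: declare the steps and their dependency edges, and let the runtime schedule whatever is independent. A minimal sketch in Java (CompletableFuture is standard JDK 8+ API; the step contents here are invented purely for illustration):

    import java.util.concurrent.CompletableFuture;

    public class DagSketch {
        public static void main(String[] args) {
            // Two independent steps: the runtime may run these on separate pool threads.
            CompletableFuture<Integer> loadA = CompletableFuture.supplyAsync(() -> expensive(1));
            CompletableFuture<Integer> loadB = CompletableFuture.supplyAsync(() -> expensive(2));

            // This step has edges to both A and B; it runs only once both are done.
            CompletableFuture<Integer> combine = loadA.thenCombine(loadB, Integer::sum);

            // A further dependent step.
            CompletableFuture<String> report = combine.thenApply(total -> "total = " + total);

            System.out.println(report.join());
        }

        private static int expensive(int seed) {
            // Stand-in for real work.
            return seed * 42;
        }
    }

The point is the same as the Hasse-diagram idea: you state the dependency edges, not the thread management.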
Re:Nope. (Score:5, Interesting)
This is more-or-less how functional programming works. You write your program as a tree of nested expressions (a tree-structured syntax, much as XML is a tree), and the compiler uses that tree to figure out dependencies. See http://mitpress.mit.edu/sicp/full-text/book/book-
Re: (Score:3, Interesting)
I remember when I was around 10 discovering that this wasn't how procedure calls w
Re:I blame the tools (Score:4, Informative)
Re:I blame the tools (Score:5, Interesting)
I did some Assembly and some C, but the kicker language for this chip was called Occam II. Among other things, it used the indentation in the code to determine block structure. A quick example:
PAR
  step A
  step B
  step C
  SEQ
    step D
    step E
In this example, steps A, B and C would all be executed in parallel with another task which ran step D then step E. If you had one Transputer in your machine, it would multi-task. If you had multiple CPUs available, it would spread the tasks across the CPUs.
It also had a basic construct called a Channel. Channels were very easy to set up and use, and they were how the different tasks communicated with each other.
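For anyone who never touched Occam, here is a loose Java analogue of PAR plus a channel, with threads standing in for processes and a SynchronousQueue standing in for an unbuffered channel (my sketch only, not exact Occam semantics):

    import java.util.concurrent.SynchronousQueue;

    public class ChannelSketch {
        public static void main(String[] args) throws InterruptedException {
            // A SynchronousQueue behaves like an unbuffered channel:
            // the sender blocks until the receiver takes the value.
            SynchronousQueue<Integer> channel = new SynchronousQueue<>();

            // "PAR": two tasks started side by side.
            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) {
                        channel.put(i);          // send on the channel
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) {
                        int v = channel.take();  // receive from the channel
                        System.out.println("got " + v);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }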
It was not difficult to spawn thousands of tasks, each one doing a relatively small part of an overall task, with full communication and synchronization. Again, if you had multiple CPUs available, it would spread the tasks across them. A board with multiple Transputers was usually doing ray-tracing or rendering Mandelbrot fractals as a demo anytime we went to a trade or tech show. They could knock it down to one processor, and things got done relatively quickly. Then, they'd kick in 4 or 16 CPUs and blow people's minds.
This was in 1990. A 386DX-33 was high-end back then. The Transputer didn't run DOS or Windows, so it didn't survive in the market of the time. That was a shame; I benchmarked a variety of them, then ran identical benchmarks on various other machines as technology marched on. A T805 running at 30 MHz (the top-end Transputer I ever got to play with) blasted through mixed integer/floating-point calculations about as fast as a 486DX2-66 (which didn't come on the market for another couple of years). There was an occasion where I had 16 of those T805s sitting in my machine. You'd need a Pentium II to match that setup, and it was several years later that the P-II became available.
Cool tech, but the programming tools were what allowed you to really use the parallelization. It was typical to achieve over 95% linear speedup (i.e. 20 CPUs gave real-world 19x performance); sometimes we went over 99%. Most Intel SMP machines are lucky if they give 80% linear speedup (4 CPUs = 3.2x total performance).
Re: (Score:3, Insightful)
Re: (Score:3, Informative)
Re: (Score:3, Interesting)
Re:Nope. (Score:5, Interesting)
Re: (Score:3, Insightful)
The iLife suite. Especially iMovie. And let's not forget the various consumer and professional incarnations of Photoshop--none of them support more than two cores.
Re:Nope. (Score:4, Informative)
Re:Nope. (Score:4, Insightful)
That's difficult to say, because true functional programming is so vastly different. We have so much time and energy invested in imperative algorithms that it's hard to know whether functional methods are easier or harder to design.
In a sense, it's like saying that Hybrid Synergy Drive is simpler than a traditional transmission. It's true on a conceptual level, but Toyota hasn't tried to put HSD everywhere it has put a traditional transmission and therefore we may not fully understand the complexities of trying to extrapolate the results to the entire problem space.
So, I think the bottom line is, functional programming probably wouldn't be any harder if it existed in a world where it was dominant.
Remember, a large part of being an effective programmer is understanding how (and if) your problem has been solved before. It may be far from optimal, but CPUs are largely designed to look like they are executing sequential instructions. Multicore and multithreaded designs are changing that model, but it's not a change that happens overnight.
Re: (Score:3, Insightful)
Agreed. I just replaced my old machine with a dual-core machine a few months ago, and I find that I basically always have one cpu idle. When things are slow, they're usually slow because they're waiting on I/O. At one point I thought I had a killer app, which was ripping MP3s from CDs. I would
Re:Nope. (Score:5, Insightful)
Closer is this: After some more work and a rewrite (for other reasons), I had "Fracked" running n threads, each rendering 1/n of the display. Data parallelism == easy parallelism.
But a lot of problems don't fit these models, and need a LOT of thought put into how to parallelize them. It's likely that some problems in P are not efficiently parallelizable.
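For the record, the "n threads, each rendering 1/n of the display" pattern really is about as simple as parallel code gets. A toy Java sketch where the "rendering" is just filling an array, purely to show the shape of the row split (renderPixel is an invented stand-in):

    public class SplitRenderSketch {
        public static void main(String[] args) throws InterruptedException {
            final int height = 480, width = 640;
            final int[][] frame = new int[height][width];
            final int n = Runtime.getRuntime().availableProcessors();

            Thread[] workers = new Thread[n];
            for (int t = 0; t < n; t++) {
                final int firstRow = t * height / n;
                final int lastRow = (t + 1) * height / n;   // exclusive
                workers[t] = new Thread(() -> {
                    // Each worker touches only its own band of rows, so no locking is needed.
                    for (int y = firstRow; y < lastRow; y++) {
                        for (int x = 0; x < width; x++) {
                            frame[y][x] = renderPixel(x, y);
                        }
                    }
                });
                workers[t].start();
            }
            for (Thread w : workers) {
                w.join();
            }
            System.out.println("frame done, first pixel = " + frame[0][0]);
        }

        // Stand-in for the real per-pixel work (e.g. a Mandelbrot iteration).
        private static int renderPixel(int x, int y) {
            return (x * 31 + y * 17) & 0xFF;
        }
    }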
Re:Nope. (Score:4, Insightful)
The reason those problems don't fit these models is more that we're used to thinking of algorithms as an ordered list of steps, rather than as a set of workers on an assembly line (which runs only as fast as the slowest individual worker).
Re:Nope. (Score:4, Informative)
Re: (Score:3, Informative)
Re: (Score:3, Insightful)
for me, deadlock is a situation where several objects compete for acquisition of a set of resources
Well the wikipedia article certainly alludes to that definition (http://en.wikipedia.org/wiki/Deadlock [wikipedia.org]) but the article may be general and not pertain to parallel computing (there is no mention of parallel computing in that article). I thought what you're describing fell into "race condition". The IBM Redbook on MPI (Page 26 of http://www.redbooks.ibm.com/abstracts/sg245380.html [ibm.com]) explains deadlock as follows (pages 26 [ibm.com], 27 [ibm.com], 28 [ibm.com]):
When two processes need to exchange data with each other, you have to be careful about deadlocks. When a deadlock occurs, processes involved in the deadlock will not proceed any further. Deadlocks can take place either due to incorrect order of send and receive...
which is what I said in the last post
...,or due to the limited size of the system buffer
The first pseudocode on page 27 basica
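Incorrect send/receive ordering is easy to reproduce outside MPI too. A contrived Java analogue (illustration only, not MPI code): two threads each insist on receiving before sending over unbuffered channels, so both block forever.

    import java.util.concurrent.SynchronousQueue;

    public class SendRecvDeadlock {
        public static void main(String[] args) {
            // Unbuffered "channels", one in each direction.
            SynchronousQueue<String> aToB = new SynchronousQueue<>();
            SynchronousQueue<String> bToA = new SynchronousQueue<>();

            Thread a = new Thread(() -> {
                try {
                    String msg = bToA.take();   // A receives first...
                    aToB.put("reply from A");   // ...then sends
                    System.out.println("A got: " + msg);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread b = new Thread(() -> {
                try {
                    String msg = aToB.take();   // B also receives first: deadlock.
                    bToA.put("reply from B");
                    System.out.println("B got: " + msg);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            a.start();
            b.start();
            // Both threads are now blocked in take(); neither send ever happens.
            // Swapping the order in one of the threads (send first, then receive) breaks the cycle.
        }
    }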
Re: Nope. (Score:4, Informative)
The real question then, is: Is it justified? To be honest, for most programs, the answer is no. Most interactive programs have a CPU-time/real-time ratio of a lot less than 1% during their lifetime (and very likely far less than 10% during normal, active use), so any difference brought by parallelizing them won't even be noticed. Other programs, like compilers, don't need to be parallelized, since you can just run "make -j8" to use all of your 8 cores at once. I would also believe that there are indeed certain programs that are rather hard to parallelize, like games. I haven't written a game in quite a long time now, and I don't know the advances that the industry has made as of late, but a game engine's step cycle usually involves a lot of small steps, where the execution of the next depends on the result of the previous one. You can't even coherently draw a scene before you know that the state of all game objects has been calculated in full. Not that I'm saying that it isn't parallelizable, but I would think it is, indeed, rather hard.
So where does that leave us? I, for one, don't really see a great segment of programs that really need parallelizing. There may be a few interactive programs, like movie editors, where the program logic is heavy enough for it to warrant a separate UI thread to maintain interactive responsiveness, but I'd argue that segment is rather small. A single CPU core is often fast enough not to warrant parallelizing even many CPU-heavy programs. There definitely is a category of programs that do benefit from parallelization (e.g. database engines which serve multiple clients), but they are often parallelized already. For everyone else, there just isn't incentive enough.
Nitpick (Score:3, Insightful)
The language isn't called LabView, it's called "G". LabView is the whole package (G interpreter/compiler, drivers, libraries, etc).
And, yeah, G is pretty cool. National Instruments offers a slightly dated, but otherwise completely functional version of LabView for free for noncommercial use.
Re: (Score:3, Informative)
CC.
Re:Nope. (Score:4, Funny)
Re:Nope. (Score:5, Insightful)
There is a very real limit as to how much you can parallelize standard office tasks.
Re: (Score:3, Insightful)
The truth is that languages such as C/C++ and Java (to a lesser extent) are not good languages to write parallel code in. They do not have the constructs built in, such as process communication links, mutexes, and semaphores, that parallel programs rely on. You end up writing arcane code to get this sort of fu
Re: (Score:3, Informative)
Re: (Score:3, Insightful)
Here we use it quite extensively because it's the best way to handle parallel code and shared data structures.
It's probably the most useful thing in Java 5 & 6, and the least used.
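Assuming the parent means java.util.concurrent (the package added in Java 5), here's a small taste of what it buys you over hand-rolled synchronization: an ExecutorService for the threads and a ConcurrentHashMap for the shared structure. A toy example written with modern Java syntax; the word counting is invented:

    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class JucSketch {
        public static void main(String[] args) throws InterruptedException {
            List<String> lines = Arrays.asList("to be or not to be", "be quick", "or not");
            ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
            ExecutorService pool = Executors.newFixedThreadPool(4);

            for (String line : lines) {
                pool.submit(() -> {
                    for (String word : line.split(" ")) {
                        // merge() does the read-modify-write atomically; no explicit locks.
                        counts.merge(word, 1, Integer::sum);
                    }
                });
            }

            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.SECONDS);
            System.out.println(counts);
        }
    }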
Lack of skill in the field (Score:5, Informative)
Parallel programming does add a layer of complexity, and its inherent lack of general solutions makes abstracting that complexity away difficult, but I suspect that the biggest issue is the lack of training in the work force. It isn't something you can pick up easily; the learning curve is steep and full of hard lessons, and it's definitely not something that can be learnt on the fly with deadlines looming.
Another aspect is that it's fundamental to the design: parallelism can and often will dictate the design, and if the software architects are not actively designing for it, or are not experienced enough to ensure that it remains a viable future option, later attempts to parallelise can be difficult at best.
Ultimately the key issues are:
* Lack of skill and training in the work force (including the understanding that there are no general solutions)
* Lack of mature development tools to ease the development process (debugging tools especially)
Re: (Score:3, Interesting)
You may be shocked then to discover that a substantial percentage of the lecturers and professors in computer science programs at American (and probably foreign as well) universities have little or no *practical* experience in programming large multithreaded applications as well (they know what it is of course and they wave their hands while describing it but the programming details are often le
Re:A different approach to parallel programming (Score:4, Insightful)
Basically like a compiled function?
Re:A different approach to parallel programming (Score:5, Informative)
Maybe I'm a "bit thick", but that doesn't make any sense to me. It's an interesting idea, but I just don't see how it'd help parrallelize things. At the very least, it seems to be solving the wrong problem.
The biggest problem right now is that it's really hard to split most tasks into parts that can be performed at the same time. Once a parrallel algorithm is devised, it's relatively easy to write a program that performs the task in parrallel.
Also, I don't know what you mean about compilers being stuck in the 70s. There have been massive improvements to compilers in the last 40 years.
But programming doesn't work like that. Individual characters in a programming language are almost irrelevant.
Re:A different approach to parallel programming (Score:4, Interesting)
I think you're reasoning from some notion about the way "Oriental" people think differently from "Western" people. I tend to doubt that idea based on the kinds of people who have enthusiastically pushed it: originally imperialist racists of various groups, each bent on proving the superiority of their group, and more recently western PC racists who compulsively idealize everything non-Western. Despite the taint of racism, the idea may have some basis in fact as well -- there is ongoing research that occasionally manages to produce some evidence for it -- but the sad fact is that we have only been able to create programming languages that express a very tiny subset of the way "Western" people supposedly think anyway. The problem is not a lack of nonlinear, context-sensitive ways of thinking; the problem is that before we can use a given way of thinking to communicate with a computer, we must essentially enable the computer to think the same way. If you buy into the western PC version of the dualism, "Oriental = nonlinear, inclusive, sensitive, flexible, context-sensitive; Western = linear, exclusive, autistic, rigid, blinkered," then digital computers are quintessentially Western beings that cannot be made to appreciate Eastern ways of thinking, at least not without a few more decades of AI research and performance improvements.
The Chinese government might very well be hard at work creating a quintessentially Chinese programming language, but it's a bad idea to pin your hopes on political science. It tends to suck. On top of that, many excellent programming languages have been doomed by much smaller barriers to entry than learning an entirely new system of writing. On top of that, your 2D array of characters is doomed by the multitudes of multicharacter words in Chinese. To add yet more on top of that, another poster just pointed out that the idea that it expresses has already been expressed in other languages using ASCII.
I wouldn't have bothered piling on you like this if your post didn't strike me as racist. The commonly accepted story about the differences between Eastern and Western ways of thinking is propagated by uninformed repetition. Chinese, Americans, left-wingers, right-wingers, everybody has learned to love it and interpret it to flatter their side, so they all repeat it in unison. It pollutes the discourse. Wouldn't it be nice if everyone who didn't have firsthand experience just shut the hell up? Then we might hear something different from the standard story that gets passed around like a centuries-old fruitcake. Or we might hear the same thing, but then at least it would mean something.
Re: (Score:3, Interesting)
I see neither a connection between the choice of character set and parallel programming, nor anything in your post that is beyond trivial to do in ASCII:
- For vertical sid
A better approach to parallel programming (Score:3, Insightful)
our brains aren't wired to think in parallel (Score:5, Insightful)
I think the biggest reason why it is difficult is that people tend to process information in a linear fashion. I break large projects into a series of chronologically ordered steps and complete one at a time. Sometimes if I am working on multiple projects, I will multitask and do them in parallel, but that is really an example of trivial parallelization.
Ironically, the best parallel programmers may be good managers, who have to break exceptionally large projects into parallel units for their employees to complete simultaneously. Unfortunately, trying to explain any sort of technical algorithm to my managers usually elicits a look of panic and confusion.
Re:our brains aren't wired to think in parallel (Score:5, Informative)
That said, parallel processing is hardly a holy grail. On one hand, everything is parallel processing (you are reading this message in parallel with others, aren't you?). On the other, when we are talking about a single computer running a specific program, parallel usually means "serialized but switched really fast". At most there is a handful of processing units. That means that whatever you are splitting among these units has to lend itself well to splitting that number of ways. Do more, and you are overloading one of them; do less, and you are underutilizing resources. In the end, it is often easier to do the processing serially. The potential performance advantage is not always very high (I know, I get paid to squeeze out this last bit), and is usually more than offset by the difficulty of maintenance.
Re: (Score:3, Insightful)
I don't know about the rest of Slashdot, but I read comments in a linear fashion - one comment, then the next comment, etc. Most people that I have known read in a linear fashion.
Walking and chewing gum is a parallel process. Reading is generally linear.
Re:our brains aren't wired to think in parallel (Score:5, Insightful)
Every process is serial from a broad enough perspective. Eight hypothetical modems can each send a bit per second. Are they sending 8 bits per second in parallel, or are they actually sending a single byte?
Re: (Score:3, Insightful)
Your view could also be stated in a twisted manner: looked at narrowly enough, anything can be considered to be parallel. However, realistically, we know that isn't really the case.
That's irrelevant. (Score:5, Insightful)
Our cognitive system does many things at the same time, yes. That doesn't answer the question that's being posed here: whether explicit, conscious reasoning about parallel processing is hard for people.
Re:That's irrelevant. (Score:4, Interesting)
Well, that statement makes a gross assumption. Every software developer (programmer/engineer) thinks that their particular domain is representative of what computer programmers do in general. However, if you in fact write software that automates common business tasks, the software is directly analogous to the process it seeks to model and/or replace. Most business tasks are sequential, so procedural single-threaded programming is a perfectly fine model to use.
Yes, for the things that most casual users and computer scientists think are "interesting," there is some inherent level of parallelism that can get pretty high. For a few choice types of tasks (ray tracing, rendering certain fractals), the problem itself is embarrassingly parallel because there is no direct coupling between the solutions of sub-problems. However, at some level you can't decompose your problem any further, and you can extract no more parallelism.
The vast majority of computer software these days is written for businesses. Most of it is not the stuff you see on the shelf at the local computer store, but is written custom to solve a specific business need. Most business processes are inherently serial operations: you perform one step, then you perform another step that depends on the previous step. Occasionally, you get lucky and you discover multiple steps that can be done simultaneously. However, making such processes explicitly parallel might not be the most advantageous move; after all, most modern CPU architectures are adept at out-of-order execution, and can analyze instruction streams to figure out dependencies dynamically. Heck, most modern CPU architectures support multiple in-flight instructions, and allow multiple instructions to complete simultaneously, which extracts parallelism at a very low level.
You can still explicitly use this technique in modern programming languages; Java, for example, has Thread.join() which is used for precisely such situations. However, just because you can do something doesn't mean that you should; there is overhead associated with spawning threads of execution and synchronizing threads when some result is needed. If the computation being performed by a function call is long-lived, then it makes sense to spawn a thread to perform that computation -- assuming that there are sufficient computational resources to truly run that thread concurrently (e.g., another CPU core able to perform that computation). Otherwise, you're burning more computational resources and probably making your code actually slower in the process (due to management overhead). And if you're multitasking on a single CPU core, spawning another thread will almost certainly result in a slower-running program (because you still have all the overhead of managing another thread, but none of the benefit of true hardware-level concurrency).
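A bare-bones version of the fork-and-join-by-hand pattern being described, worth doing only when the forked work is genuinely long-lived (the method and numbers are invented; the join() is what makes reading the worker's result safe afterwards):

    public class ForkJoinByHand {
        public static void main(String[] args) throws InterruptedException {
            final long[] partial = new long[2];

            // Fork: push the long-lived half of the computation onto another thread.
            Thread worker = new Thread(() -> partial[1] = sumRange(500_000_000L, 1_000_000_000L));
            worker.start();

            // Keep doing useful work on the current thread in the meantime.
            partial[0] = sumRange(0L, 500_000_000L);

            // Join: block only when the worker's result is actually needed.
            worker.join();
            System.out.println("total = " + (partial[0] + partial[1]));
        }

        private static long sumRange(long from, long to) {
            long s = 0;
            for (long i = from; i < to; i++) {
                s += i;
            }
            return s;
        }
    }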
The first sentence is really an unsupported conjecture. The second sentence is an attempt to provide anecdotal evidence drawn from your own experiences to support the conjecture in the first. My own life experience is vastly different from yours, but then again, that too is anecdotal evidence; neither your nor my personal experiences are really "proof" o
Actually it's just pipelined (Score:5, Insightful)
Try reading two different texts side by side, at the same time, and it won't work that neatly parallel any more.
Heck, there were some recent articles about why most Powerpoint presentations are a disaster: in a nutshell, because your brain isn't that parallel, or doesn't have the bandwidth for it. If you try to read _and_ hear someone saying something (slightly) different at the same time, you just get overloaded and do neither well. The result is those time-wasting meetings where everyone goes fuzzy-brained and forgets everything as soon as the presentation flipped to the next chart.
To get back to the pipeline idea, the brain seems to be quite the pipelined design. Starting from say, the eyes, you just don't have the bandwidth to consciously process the raw stream of pixels. There are several stages of buffering, filtering out the irrelevant bits (e.g., if you focus on the blonde in the car, you won't even notice the pink gorilla jumping up and down in the background), "tokenizing" it, matching and cross-referencing it, etc, and your conscious levels work on the pre-processed executive summary.
We already know, for example, that the shortest term buffer can store about 8 seconds worth of raw data in transit. And that after about 8 seconds it will discard that data, whether it's been used or not. (Try closing your eyes while moving around a room, and for about 8 seconds you're still good. After that, you no longer know where you are and what the room looks like.)
There's a lot of stuff done in parallel at each stage, yes, but the overall process is really just a serial pipeline.
At any rate, yeah, your eyes may already be up to 8 seconds ahead of what your brain currently processes. It doesn't mean you're that much of a lean, mean, parallel-processing machine, it just means that some data is buffered in transit.
Even time-slicing won't really work that well, because of that (potential) latency and the finite buffers. If you want to suddenly focus on another bit of the picture, or switch context to think of something else, you'll basically lose some data in the process. Your pipeline still has the old data in it, and it's going right to the bit bucket. That or both streams get thrashed because there's simply not enough processing power and bandwidth for both to go through the pipeline at the same time.
Again, you only need to look at the fuzzy-brain effect of bad Powerpoint presentations to see just that in practice. Forced to try to process two streams at the same time (speech and text), people just make a hash of both.
Re:Actually it's just pipelined (Score:4, Funny)
There's way more to it than that. The brain is an efficient organizer and sorter. It also tends to be great at estimating the relative importance of things and discarding the lesser ones it can't deal with. Thus, 99.999999999% of the time, I ignore Powerpoint presentations. It goes in my eyes, the brain deciphers the signal, decides that the Powerpoint stuff is useless drivel, and it continues processing the audio (in parallel!) and terminates the visual processing thread. Shortly thereafter, the audio signal is also determined to be of minimal benefit and is discarded as useless drivel as well, leaving more processing time for other things. Things like pondering the answer to the age-old question, "What's for lunch?"
Re: (Score:3, Funny)
Re: (Score:3, Interesting)
Addressing architecture for Brain-like Massively Parallel Computers [acm.org]
or from a brain-science perspective
Natural and Artificial Parallel Computation [mit.edu]
The common tools (Java, C#, C++, Visual Basic) are still primitive for parallel programming. Not much more than semaphores and some basic multi-threading code (start/stop/pause/communicate from one thread to another v
Re: (Score:3, Insightful)
We don't think in recursion either (Score:5, Interesting)
I suspect that parallel programming may be similar - some programmers will "get it", others won't. Those who "get it" will find it fun and easy and be unable to understand why everyone else finds it hard.
Also, most development tools were created with a single processor system in mind: IDEs for parallel programming are a new-ish concept and there are few. As more are developed we'll learn about how the computer can best help the programmer to create code for a parallel system, and the whole process can become more efficient. Or maybe automated entirely; at least in some cases, if the code can be effectively profiled the computer may be able to determine how to parallelize it and the programmer may not have to worry about it. So, I think it's premature to argue about whether parallel programming is hard or not - it's different, but until we have taken the time to further develop the relevant tools, we won't know if it's really hard or not.
And of course, for a lot of tasks it simply won't *matter* - anything with a live user sitting there, for example, only has to be fast enough that the person perceives it as being instantaneous. Any faster than that is essentially useless. So, for anything that has states requiring user input, there is a "fast enough" beyond which we need not bother optimizing unless we're just trying to speed up the system as a whole, and that sort of optimization is usually done at the compiler level. It is only for software requiring unusually large amounts of computation or for systems which have been abstracted to the point of being massively inefficient beneath the surface that the fastest possible computing speed is really required, and those are the sorts of systems to which specialist programmers could be applied.
Re: (Score:3, Insightful)
It goes to show that to become a better programmer you should investigate as many programming paradigms as possible.
Re: (Score:3, Interesting)
Sure. Short-term we could learn to do a lot of simple tasks better in parallel. Drawing the circle and square at the same time is hard, but it gets a lot easier even with just a few hours of practice.
Longer term, we'd *evolve* better handling of parallelism if it gave us significant survival-benefits (well, really reproductive-benefits, but you get the idea)
When we *do* manage many things at a time it is mostly by practicing them to the point where as much as possible about them become automatic, "mu
Re:our brains aren't wired to think in parallel (Score:4, Interesting)
While everything perhaps can't be solved using Monte Carlo type integration tricks
Re:our brains aren't wired to think in parallel (Score:5, Insightful)
The first problem is that people who make good managers make lousy coders. The second problem is that people who make good coders make lousy managers. The third problem is that plenty of upper management types (unfortunately, I regard the names I could mention as genuinely dangerous and unpredictable) simply have no understanding of programming in general, never mind the intricacies of parallelism.
However, resource management and coding is not enough. These sorts of problems are typically either CPU-bound and not heavy on the networks, or light on the CPU but are network-killers. (Look at any HPC paper on cascading network errors for an example.) Typically, you get hardware which isn't ideally suited to either extreme, so the problem must be transformed into one that is functionally identical but within the limits of what equipment there is. (There is no such thing as a generic parallel app for a generic parallel architecture. There are headaches and there are high-velocity exploding neurons, but that's the range of choices.)
Re:our brains aren't wired to think in parallel (Score:5, Insightful)
You may want to look into Erlang, which does two things that will interest you: concurrency via lightweight processes, and communication by message passing rather than shared state.
There are still concurrent problems which are hard, but generally it boils down to the problem being hard instead of the language making the problem harder to express.
Re: (Score:3, Interesting)
Yeah, I've been doing tutorials and bought the beta PDF edition of Joe Armstrong's book, but at one point I stopped to go back and do The Little Schemer to refresh my ability to frame everything in terms of recursion on lists. Haven't gotten back to Erlang yet, but I've been
Re:our brains aren't wired to think in parallel (Score:5, Insightful)
Re:our brains aren't wired to think in parallel (Score:5, Funny)
Re: (Score:3, Insightful)
The problem isn't that our brains or our consciousness are unable to do this, however. All programming involves interactions between details that are potentially too much for us to handle. Good programming is limiting interactions to a small number of well defined interfaces, and very good programming reduces the amount of context you have to consider to what you can visually scan in a few seconds.
The problem is that the paradigms f
Re: (Score:3, Informative)
Didn't your mother ever teach you not to?
On a serious note: how conscious are you when you talk? Do you consciously trigger all required muscle movement or does your brain queue a "say 'word'" command which is then processed unconsciously? Probably neither, but the latter is likely closer to reality. There is actually VERY little that we do consciously.
Re:our brains aren't wired to think in parallel (Score:5, Interesting)
True, but can you talk (perhaps reciting something from memory) at the same time you are listening to something? Even if it's not a volume issue, say you're wearing headphones.
Feynman has a chapter in "What do you care what other people think" where he talks about some informal experimentation he did where he tried to figure out what he could do at the same time as accurately timing out a minute. Essentially, the same time as counting. He found that (1) he could be very consistent about timing out a given time, and that (2) he could do most things while counting. But what he couldn't do is talk. On discussion with other people in his dorm/frat/house/whatever, there was another person who could talk, but couldn't read while timing things out. Turns out that the reason it differed was because they counted differently; Feynman was hearing "one, two, three,
Activities are localized in the brain; it seems that these areas are largely independent, but try two tasks that use the same area and you're SOL.
Re: (Score:3, Interesting)
Two words: map-reduce (Score:3, Interesting)
Re:Two words: map-reduce (Score:5, Informative)
Even though it is implemented in Java, you can use just about anything with it, using the Hadoop streaming [apache.org] functionality.
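For anyone who hasn't seen it, the shape of map-reduce is simple enough to sketch without Hadoop at all: map every input record to (key, value) pairs, group by key, then reduce each group. A toy word count in plain Java streams (this shows the concept only, not the Hadoop API):

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class MapReduceSketch {
        public static void main(String[] args) {
            List<String> documents = Arrays.asList("the cat sat", "the cat ran", "a dog sat");

            Map<String, Long> counts = documents.parallelStream()
                    // "map" phase: each document independently emits its words.
                    .flatMap(doc -> Arrays.stream(doc.split(" ")))
                    // "shuffle" + "reduce" phase: group identical words and count them.
                    .collect(Collectors.groupingBy(word -> word, Collectors.counting()));

            System.out.println(counts);
        }
    }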
Re: (Score:3, Insightful)
(Score:4, Interesting)
by Anonymous Coward on 07-05-29 6:29 (#19305183)
Implement it, add CPUs, earn billion$. Just Google it.
Wow, fantastic. Turns out parallel programming was really really easy after all. I guess we can make all future computer science courses one week then, since every problem can be solved by map-reduce! The AC has finally found the silver bullet. Bravo, mods.
Re: (Score:3, Funny)
What do you mean? I just wrote a clone of Half-Life 2 using only Map-Reduce. It works great, but it requires 600 exabytes of pre-computed index values and data sets and about 200 high end machines to reach decent frame rates.
Have some friggin' patience (Score:4, Insightful)
Oh noes! Software doesn't get churned out immediately upon the suggestion of parallel programming! Programmers might actually be debugging their own code!
There's nothing new here: just somebody being impatient. Parallel code is getting written. It is not difficult, nor are the tools inadequate. What we have is non-programmers not understanding that it takes a while to write new code.
If anything, that the world hasn't exploded with massive amounts of parallel code is a good thing: it means that proper engineering practice is being used to develop sound programs, and the johnny-come-lately programmers aren't able to fake their way into the marketplace with crappy code, like they did 10 years ago.
Patience is not a goal here. (Score:3, Insightful)
It's not trivial, and often not necessary (Score:5, Interesting)
And often enough, it's far from necessary. Unless you're actually dealing with an application that does a lot of "work", calculating or displaying, preferably simultaneously (games would be one of the few applications that come to mind), most of the time your application is waiting. Either for input from the user or for data from a slow source, like a network or even the internet. The average text processor or database client is usually not in the situation that it needs more than the processing power of one core. Modern machines are orders of magnitude faster than anything you usually need.
Generally, we'll have to deal with this issue sooner or later, especially if our systems become more and more overburdened with "features" while the advance of processing speed will not keep up with it. I don't see the overwhelming need for parallel processing within a single application for most programs, though.
Not justifyable (Score:4, Interesting)
Coder: We need more time to make this game multithreaded!
PHB: Why? Can it run on one core of a X?
Coder: Well I suppose it can but...
PHB: Shove it out the door then.
If Flight Simulator X is any indication (a game that should have been easy to parallelize), this conversation happens all the time and games are launched taking advantage of only one core.
Re:Not justifyable (Score:5, Insightful)
If your programmers are telling you they need more time to turn a single-threaded game into a multi-threaded one, then the correct solution IS to push the game out the door, because it won't benefit performance to try to do it at the end of a project. It's a fundamental design choice that has to be made early on.
Re: (Score:3, Insightful)
Coder: Well I suppose it can but...
PHB: Shove it out the door then.
I think the PHB may have a point. Games are expensive to make, gamers are fickle, and if reviews say a game "looks dated" it is a death sentence for sales. Rewriting an engine to be multi-threaded after it is done will take a lot of time and money, and most PCs out there have more dated hardware than you think, so most current players won't see any benefits whatsoever from it anyway.
Re: (Score:3, Informative)
Are Serial Programmers Just Too Dumb? (Score:4, Interesting)
Why isn't there a mass stampede to Erlang or Haskell, languages that address this problem in a serious way? My conclusion is that most programmers are just too dumb to do major mind-bending once they've burned their first couple languages into their ROMs.
Wait for the next generation, or make yourself above average.
Re: (Score:3, Interesting)
Then dot.com blew up and they were dead in the water. Today you need a fraction of the PHP artists that were sought after in 2000. So they picked up C, in
Programmers (Score:2, Insightful)
No need to worry about memory management, java will do it for you.
No need to worry about data location, let the java technology of the day do it for you.
No need to worry about how/which algorithm you use, just let java do it for you, no need to optimize your code.
Problem X => Java cookbook solution Y
Parallel Language... (Score:3, Insightful)
Yes, difficult, but our brains are not limited. (Score:2, Insightful)
Programmers who are accustomed to non-parallel programming environments forget to think about the synchronization issues that come up in parallel programming. Many conventional programs do not take into account the shared-memory synchronization or message-passing requirements needed for them to work correctly in a parallel environment.
This is not to say that there will not be an
Clusters? (Score:4, Insightful)
Funny, I've seen an explosion in the number of compute clusters in the past decade. Those employ parallelism, of differing types and degrees. I guess I'm not focused as much on the games scene - is this somebody from the Cell group writing in?
I mean, when there's an ancient Slashdot joke about something there has to be some entrenchment.
The costs are just getting to the point where lots of big companies and academic departments can afford compute clusters. Just last year multi-core CPUs made it into mainstream desktops at mainstream prices (ironically, more in laptops so far). Don't be so quick to write off a technology that's just out of its first year of being on the desktop.
Now, that doesn't mean that all programmers are going to be good at it - generally programmers have a specialty. I'm told the guys who write microcode are very special, are well fed, and generally left undisturbed in their dark rooms, for fear that they might go look for a better employer, leaving the current one to sift through a stack of 40,000 resumes to find another. I probably wouldn't stand a chance at it, and they might not do well in my field, internet applications - yet we both need to understand parallelism - they in their special languages and me, perhaps with Java this week, doing a multithreaded network server.
Yes and No (Score:5, Interesting)
What we need is more advanced primitives. Here are my 2 or 3 top likely suspects:
- Communicating Sequential Processes - CSP. This is the programming model behind Erlang - one of the most successful concurrent programming languages available. Writing large, concurrent, robust apps is as simple as 'hello world' in Erlang. There is a whole new way of thinking that is pretty much mind bending. However, it is that new methodology that is key to the concurrency and robustness of the end applications. Be warned, it's functional!
- Highly optimizing functional languages (HOFL) - These are in the proto-phase, and there isn't much available, but I think this will be the key to extremely high performance parallel apps. Erlang is nice but it isn't high performance computing; HOFLs, on the other hand, won't be as safe as Erlang. You get one or the other. The basic concept is that most computation in high performance systems is bound up in various loops, and a loop is a 'noop' from a semantic point of view. To get efficient highly parallel systems, Cray uses loop annotations and special compilers to get more information about loops. In a functional language (such as Haskell) you would use map/fold functions or list comprehensions, both of which convey more semantic meaning to the compiler. The compiler can auto-parallelize a functional map where each individual map computation is not dependent on any other (see the parallel-map sketch after this list).
- Map-reduce - the paper is elegant and really cool. It seems like this is a half way model between C++ and HOFLs that might tide people over.
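As promised under the HOFL point above, the closest everyday Java equivalent of an auto-parallelizable functional map is a parallel stream, where each element's computation is independent (sketch only; slowSquare is a made-up stand-in):

    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    public class ParallelMapSketch {
        public static void main(String[] args) {
            List<Integer> inputs = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8);

            // Because each slowSquare(x) depends only on x, the runtime is free to
            // spread the map across worker threads -- the semantic point made above.
            List<Integer> outputs = inputs.parallelStream()
                    .map(ParallelMapSketch::slowSquare)
                    .collect(Collectors.toList());

            System.out.println(outputs);
        }

        private static int slowSquare(int x) {
            // Stand-in for an expensive, side-effect-free computation.
            return x * x;
        }
    }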
In the end, the problem is the abstractions. People will consider threads and mutexes as dangerous and unnecessary as we consider manual memory allocation today.
Re: (Score:3, Interesting)
Technically speaking, Erlang has more in common with the Actor model than with CSP. The Actor model (like Erlang) is based on named actors (cf. Erlang pids) that communicate via asynchronous passing of messages sent to specific names. CSP is based on anonymous processes that communicate via synchronized passing of messages sent through named channels. Granted, you can essentially simulate one model within the other. But it pays
It's still the wild west... (Score:3, Interesting)
I think part of the problem is, that many programmers tend to be lone wolves, and having to take other people (and their code, processes, and threads) into consideration is a huge psychological hurdle.
Just think about traffic: If everyone were to cooperate and people wouldn't cut lanes and fuck around in general we'd all be better off. But traffic laws are still needed.
I figure what we really need is to develop proper guidelines and "laws" for increased parallelism.
Disclaimer: This is all totally unscientific coming from the top of my head...
Yes, because programmers are too conservative (Score:5, Insightful)
No, the different sorts of paradigms I'm talking about are no-shared-state, message-passing concurrency models a la CSP [usingcsp.com], the pi calculus [wikipedia.org], and the Actor Model [wikipedia.org]. That sort of approach to thinking about the problem shows up in languages like Erlang [erlang.org] and Oz [wikipedia.org], which handle concurrency well. The aim here is to make message passing and threads lightweight and integrated right into the language. You think in terms of actors passing data, and the language supports you in thinking this way. Personally I'm rather fond of SCOOP for Eiffel [se.ethz.ch], which elegantly integrates this idea into OO paradigms (an object making a method call is, ostensibly, passing a message after all). That's still research work though (only available as a preprocessor and library, with promises of eventually integrating it into the compiler). At least it makes thinking about concurrency easier, while still staying somewhat close to more traditional paradigms (it's well worth having a look at if you've never heard of it).
The reality, however, is that these new languages which provide the newer and better paradigms for thinking and reasoning about concurrent code just aren't going to get developer uptake. Programmers are too conservative and too wedded to their C, C++, and Java to step off and think as differently as the solution really requires. No, what I expect we'll get is kludginess retrofitted onto existing languages in a slipshod way that sort of works: an improvement over previous concurrent programming in that language, but not the leap required to make the problem truly significantly easier.
Amdahl's law (Score:5, Insightful)
(Amdahl's law in a nutshell: if a fraction s of a program is inherently serial, then no number of processors can speed the whole thing up by more than a factor of 1/s.) Now there's been lots of work on eliminating those single-threaded bits in our algorithms, but every new software problem needs to be analyzed anew. It's just another example of the no-silver-bullet problem of software engineering...
Too much emphasis on instruction flow (Score:4, Interesting)
Multi-threaded code is hard. Keeping track of locks, race conditions and possible deadlocks is a bitch. Working on projects with multiple programmers passing data across threads is hard (I remember one problem that took days to track down where a programmer passed a pointer to something on his stack across threads. Every now and then by the time the other thread went to read the data it was not what was expected. But most of the time it worked).
At the same time we are passing comments back and forth here on Slashdot between thousands of different processors using a system written in Perl. Why does this work when parallel programming is so hard?
Traditional multi-threaded code places way too much emphasis on synchronization of INSTRUCTION streams, rather than synchronization of data flow. It's like having a bunch of blind cooks in a kitchen and trying to write their instructions so that, if everyone follows them exactly, each cook will be in exactly the right place at the right time. They're passing knives and pots of boiling hot soup between them. One misstep and, ouch, that was a carving knife in the ribs.
In contrast, distributed programming typically puts each blind cook in his own area with well defined spots to use his knives that no one else enters and well defined places to put that pot of boiling soup. Often there are queues between cooks so that one cook can work a little faster for a while without messing everything up.
As we move into this era of cheap, ubiquitous parallel chips we're going to have to give up synchronizing instruction streams and start moving to programming models based on data flow. It may be a bit less efficient but it's much easier to code for and much more forgiving of errors.
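The "queues between cooks" model is just the producer/consumer pipeline, which in Java amounts to a BlockingQueue between thread stages. A minimal two-stage sketch (my illustration; the poison-pill shutdown is one common convention, not the only one):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class PipelineSketch {
        public static void main(String[] args) throws InterruptedException {
            // The bounded queue is the "well defined place to put the pot of soup":
            // a fast stage can run ahead a little, but not without limit.
            BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);
            final int POISON = -1;   // sentinel telling the consumer to stop

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 100; i++) {
                        queue.put(i);            // blocks if the consumer falls behind
                    }
                    queue.put(POISON);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    while (true) {
                        int item = queue.take();
                        if (item == POISON) break;
                        process(item);           // each stage owns its own data; no shared state
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }

        private static void process(int item) {
            // Stand-in for the second stage's work.
            if (item % 25 == 0) System.out.println("processed " + item);
        }
    }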
Re: (Score:3, Interesting)
bad education (Score:4, Insightful)
These same programmers often think that ideas like "garbage collection", "extreme programming", "visual GUI design", "object relational mappings", "unit testing", "backwards stepping debuggers", and "refactoring IDEs" (to name just a few) are innovations of the last few years, when in reality, many of them have been around for a quarter of a century or more. And, to add insult to injury, those programmers are often the ones that are the most vocal opponents of the kinds of technologies that make parallel programming easier: declarative programming and functional programming (not that they could actually define those terms, they just reject any language that offers such features).
If you learn the basics of programming, then parallel programming isn't "too hard". But if all you have ever known is how to throw together some application in Eclipse or Visual Studio, then it's not surprising that you find it too hard.
Re:bad education (Score:4, Insightful)
Not to be misunderstood: I do think there is a place for good tools to support working programmers, it's just that the tools that are widespread are mainly aimed at getting people "hooked" on them, not at supporting experienced professional programmers optimally.
I think you are misunderstanding the objective of the tools you are naming... they were conceived to help developers in their work. It is not Microsoft's or the Eclipse Foundation's (or the Gnome guys') fault that wannabe "code monkeys" play with their tools for 2 weeks and then list them as a skill on their resume. Look at those tools as if they were carpenter's tools like an electric drill or handsaw. They are designed to make the carpenter's work easier, but you *do* need to *know* what you are doing. You can't say they hurt carpenters' productivity just because they incite people who know nothing about woodworking to grab a DIY book and put an ad in the newspaper...
Two Problems (Score:4, Insightful)
A Case Study (Score:3, Informative)
I think this is a temporary situation though, and something that has happened before, there have been many cases where new powerful hardware seeps into the mainstream before programmers are prepared to use it.
*I know what you're thinking: "How the hell do you spend $15,000 on a Mac?". I wouldn't have thought it was possible either, but basically all you have to do is buy a Mac with every single option that no sane person would buy: max out the overpriced RAM, buy the four 750GB hard drives at 100% markup, throw in a $1700 Nvidia Quadro FX 4500, get the $1K quad fibre channel card, etc.
The Bill comes due (Score:5, Insightful)
What kind of parallel programming? (Score:4, Insightful)
Are we talking about situations that lend themselves to data flow techniques? Or Single Instruction/Multiple Data (e.g. vector) techniques? Or problems that can be broken down and distributed, requiring explicit synchronization or avoidance?
I agree with other posters (and I've said it elsewhere on
Then there's the related issue of long-lived computation. Many of the interesting problems in the real-world take more than a couple of seconds to run, even on today's machines. So you have to worry about faults, failures, timeouts, etc.
One place to start for distributed processing kinds of problems, including fault tolerance, is the work of Nancy Lynch of MIT. She has 2 books out (unfortunately not cheap), "Distributed Algorithms" and "Atomic Transactions". (No personal/financial connection, but I've read & used results from her papers for many years...)
I get the sense that parallelism/distributed computation is not taught at the undergrad level (it's been a long time since I was an undergrad, and mine is not a computer science/software engineering/computer engineering degree.) And that's a big part of the problem...
dave
Re: (Score:3, Interesting)
Ada95 is a -much better- choice for this kind of programming, with a rich set of concurrency primitives for both synchronization (rendezvous) and avoidance (protected objects), integrated into both a strong typing system and an object-oriented approach. And there's a good compiler in the GNU compiler family.
But programming language research seems to have been abandoned over the last 20 ye
Not too hard at all (Score:4, Interesting)
The real problem is this: I've seen time and again that most companies do not require, recognise, or reward high technical skill, experience and ability; instead they favour minimal cost and fast product delivery over quality.
Also, it seems most Human Resources staff/agents don't have the necessary skills to tell skilled Software Developers from useless ones who have a few matching buzzwords on their resume; because they themselves don't understand enough to ask the right questions, they resort to resume-keyword matching.
The consequence is that the whole notion of Software Development being a skilled profession is being undermined and devalued. This is allowing a vast amount of people to be employed as Software Developers that don't have the natural ability and/or proper training to do the job.
To those people, parallel programming IS hard. To anyone with some natural ability, a proper understanding of the issues (the kind you get from, say, a BS Computer Science degree) and a naturally rigorous approach, no, it really isn't.
"Dragged Kicking and Screaming" (Score:5, Interesting)
There was a lot of really good wisdom in there, whether you are writing a game or something else that needs to get every possible performance boost.
I'm sure they probably drew from 20+ years worth of whitepapers (and some newer ones about "lock-free" mutexes, see chapter 1.1 of "Game Programming Gems 6"), but what I walked away from the talk with was the question: "why the hell didn't _i_ think of that?"
There were several techniques they used that, once you built a framework to support it, made parallelizing tasks dirt simple. A lot of it involves putting specific jobs onto queues and letting worker threads pick them up when they are idle, and being able to assign specific jobs to specific cores to protect your investment in CPU cache.
Most of the rest of the work is building things that don't need a result immediately, and trying to build things that can be processed without having to compete for various pieces of state...sometimes easier said than done, sure. But after hearing his talk, I was of the opinion that while parallelism is always more complex than single-threaded code, doing this well is something most developers aren't even _thinking_ about yet. In most cases, we're not even at the point where we can talk about _languages_ and _tools_, since we aren't even using the ones we have well.
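The job-queue-plus-worker-threads arrangement described above is roughly what an ExecutorService gives you out of the box (minus the pin-jobs-to-specific-cores part, which plain Java can't express). A rough sketch, with invented job names:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class JobQueueSketch {
        public static void main(String[] args) throws Exception {
            // One worker per core, all pulling jobs off a shared queue when idle.
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService workers = Executors.newFixedThreadPool(cores);

            List<Future<String>> results = new ArrayList<>();
            String[] jobs = {"animate", "pathfind", "updatePhysics", "mixAudio"};
            for (String job : jobs) {
                Callable<String> task = () -> job + " done on " + Thread.currentThread().getName();
                results.add(workers.submit(task));
            }

            // Collect results only when they're actually needed.
            for (Future<String> f : results) {
                System.out.println(f.get());
            }
            workers.shutdown();
        }
    }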
--ryan.
No. No. No. (Score:4, Interesting)
But that's not the problem...
The problem is, a multi-year-old desktop PC is still doing IM, email, web surfing, Excel, facebook/myspace, and even video playing fast enough that a new one won't "feel" any faster once you load up all the software, not one bit. For everyone but the hardcore momma's basement dwelling gamers, the PC on your home/work desk is already fast enough. All the killer apps are now considered low-CPU; bandwidth is the problem.
Now sure, I use 8-core systems at the lab, and sit on top of the 250k-node Folding@home so it sounds like I've lost my mind, but you know what, us super/distributed computing science geeks are still 0% of the computer market if you round. (and we'll always need every last TFLOP of computation on earth, and still need more)
That's it. Simple. Sweet. And a REAL crisis - but only to the bottom line. The market has nowhere to go but lower wattage lower cost systems which means lower profits. Ouch.
Re: (Score:3, Insightful)
You can get a 4 core chip for under $600 now because of it. If you are into high performance computing then you should beg the game developers for something that can use as many cores as you can throw at it. Because as you said, you are 0% of the market, and gamers are a huge chunk of it.
A simple starting point (Score:3, Interesting)
could be to add a specifically parallel iterator keyword to programming languages, i.e.:
for-every a in list; do something(a); done;
The compiler then assumes that something(a) can be safely executed in parallel for every element in the list. It's not rocket science, and it could lead to parallel spaghetti, but it is a straightforward way to promote parallel programming.
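Something close to that keyword already exists: OpenMP's parallel-for pragma in C/C++, and parallel streams in Java. A Java rendering of the for-every above, on the assumption that something(a) really is independent for each element:

    import java.util.Arrays;
    import java.util.List;

    public class ForEverySketch {
        public static void main(String[] args) {
            List<String> list = Arrays.asList("a", "b", "c", "d");

            // "for-every a in list; do something(a); done;" -- the runtime may run
            // the iterations on different threads, so something() must not depend
            // on any ordering between elements.
            list.parallelStream().forEach(ForEverySketch::something);
        }

        private static void something(String a) {
            System.out.println(a + " handled by " + Thread.currentThread().getName());
        }
    }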
Re: (Score:3, Interesting)
Programming is hard (Score:3, Insightful)
Multics (Score:4, Informative)
Multics was designed for long-lived processes. Short-lived processes are something we take for granted today, but that wasn't assumed back then. Today we assume the sequence is: open a program, perform a task, close the program. Microsoft Outlook, for example, relies on Outlook being closed as its cue for when to purge email that's been deleted. Programs are not designed to be up for years on end. In fact, Microsoft reboots their OS every month with Windows Update. I've often speculated that the reboot is often requested not because the patch requires it but because Microsoft knows that its OS needs to be rebooted, often.
Why? Why wouldn't one just leave every application you've ever opened, opened?
The reason is that programmers cannot reliably write long running process code. Programs crashed all the time in Multics. Something Multics wasn't very good at handling back in the 1960s. There was some research done and it was observed that programmers could write code for short lived processes fairly well but not long lived.
So, one lesson learned from the failure of Multics is that programmers do not write reliable, long-running code.
Parallel processing is better suited to long-running processes. Since humans are not good at writing long-running processes, it makes sense that parallel processing is rare. The innovation to deal with this sticky dilemma was the client-server model. Only the server needs to be up for long periods of time. The clients can and should perform short-lived tasks, and only the server needs to be reliably programmed to run 24/7. Consequently you see servers with clusters, RAID storage, SAN storage and other parallel engineering, and clients without. In some sense, Windows is the terminal they were thinking of back in the Multics days. The twist is that, given humans are not very good at writing long-running processes, the client OS, Windows, is designed around short-lived processes. Open, perform task, close. If I leave Firefox open for more than a couple of days it is using 300MB of memory and slowing my machine down. I close and reopen Firefox daily.
Threads didn't exist in the computing world until Virtual Memory, with its program isolation, came to be. What happened before threads? Programmers were in a shared, non-isolated environment. Where Multics gave isolation to every program, Unix just recognizes two users: root and everyone else. Before Virtual Memory, this meant that all user programs could easily step on each other and bring each other down. Which happened a lot.
Virtual Memory is one of the greatest computing advances because it specifically deals with a short coming in human nature. Humans are not very good at programming memory directly, i.e. pointers.
It wasn't very long after VM came out that threads were invented to allow intra-application memory sharing. Here's the irony, though: there has still been no advance in getting humans to write reliable programs. Humans today are still not very good at programming memory directly, even with monitors, semaphores and other OS helpers.
When I was in my graduate OS class, the question was raised: when do you invoke a thread, given that, to avoid instability, you probably shouldn't?
The answer was to think of threads as "lightweight processes". The teaching was that, given this, a thread is appropriate as follows:
Have one thread per IO device like keyboard, mouse, monitor, etc. There should be one thread dedicated to CPU only and the CPU thread controls all the IO threads. The IO threads should be given the simple task of servicing requests on behalf of the CPU thread.
Onl
System of Systems (Score:3, Interesting)
Otherwise, as discussed in TFA there are certain problems that just don't parallelize. However, there are whole classes of algorithms that aren't developed in modern Computer Science because such stuff is too radical a departure from classical mathematics. Consider HTA [wikipedia.org] as a computing model. It is just too far away from traditional programming.
Parallel programming is just alien to traditional procedural, declarative programming models. One solution is to abandon traditional programming languages and/or models as inadequate to represent more complex parallel problems. Another is to isolate procedural code into components that message each other asynchronously. Another is to create a system of systems... or a combination of both.
If there is a limitation it is in the declarative sequential paradigm and that limitation is partially addressed by all the latest work in asynchronous 'net technologies. These distributed paradigms still haven't filtered through the whole industry.
Re:Non-Repeatable Errors (Score:4, Insightful)