More Interest In Parallel Programming Outside the US?
simoniker writes "In a new weblog post on Dobbs Code Talk, Intel's James Reinders discusses the growth of concurrency in programming, suggesting that '...programming for multi-core is catching the imagination of programmers more in Japan, China, Russia, and India than in Europe and the United States.' He also comments: 'We see a significantly HIGHER interest in jumping on parallelism from programmers with under 15 years experience, versus programmers with more than 15 years.' Any anecdotal evidence for or against from this community?"
Duh? (Score:4, Informative)
Young bucks want to be on the cutting edge to get the jobs that the old people already have.
----
Oh, and do people in the other countries see the benefit more than those in the U.S.? Probably not; we're just lazy Americans, though.
Re: (Score:3, Interesting)
The experienced programmers know that most parallelisable problems are already being solved by breaking them across machines, and the rest won't be helped by 15 bazillion cores. An extra core or so on a desktop is nice; beyond that, it won't be anywhere near the speedup it's hyped to be.
Re:Duh? (Score:4, Informative)
1960: E. V. Yevreinov at the Institute of Mathematics in Novosibirsk (IMN) begins work on tightly-coupled, coarse-grain parallel architectures with programmable interconnects. (cf. [vt.edu])
An extra core or so on a desktop is nice; beyond that, it won't be anywhere near the speedup it's hyped to be.
And of course any virtual reality scenario will not profit from extra power.
CC.
Re:Duh? (Score:4, Informative)
It's more a matter of what kind of speedup you see, and what algorithm you start with.
If your algorithm is serial and there is no parallelism to exploit, you're not going to see any speed increase. This applies to a lot of things, such as access to the player-character status variables (all cores use the same memory address, so they need a mutex or other synchronization anyway). Any AI-NPC interactions would likely need to be coordinated. You could divide the large stuff across some collection of cores (physics core, AI core, book-keeping core, input core), but not all of those will break down further (so an 80-core machine is a limited speedup over a 4-core machine, which is itself a limited speedup over a basic 2-core machine for most people).
The easy things to break up are embarrassingly parallel problems (like certain graphics activities), and they are being broken up across GPUs already. Even for algorithms that are entirely easy to parallelize, the speedup is at best linear: to be 10 times faster, you need 10 processors (this is why GPUs have a lot of simple graphics pipelines and shader processors; they're throwing more hardware at the problem). If a problem is simply too big (you need to be 1000 times faster, or the algorithm is exponential or worse), more hardware isn't going to help.
People seem to think that parallel programming will make the impossible into something possible. That's not true. Proving NP = P and coming up with efficient algorithms for certain things will. More hardware won't -- it'll just make things that were O(n) or O(log n) [for a sufficiently large n, like the number of pixels in a 1920x1200 monitor] possible. It won't affect O(n^2) and above.
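To put a rough number on that "limited speedup": Amdahl's law bounds the gain by the serial fraction of the work. A throwaway C sketch with hypothetical numbers (90% parallel, 10% serial; not anyone's measured workload):

#include <stdio.h>

/* Amdahl's law: best-case speedup on m cores when a fraction p
 * of the work can run in parallel and the rest is serial. */
static double amdahl(double p, int m) {
    return 1.0 / ((1.0 - p) + p / m);
}

int main(void) {
    printf("2 cores:  %.1fx\n", amdahl(0.9, 2));   /* ~1.8x */
    printf("4 cores:  %.1fx\n", amdahl(0.9, 4));   /* ~3.1x */
    printf("80 cores: %.1fx\n", amdahl(0.9, 80));  /* ~9.0x */
    return 0;
}

With a 10% serial portion, the 80-core box tops out around 9x: exactly the "limited speedup over a 4-core machine" described above.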
Re:Duh? (Score:5, Insightful)
Yes, you can parallelize a VR system quite well. You can simulate a couple dozen NPCs per core, then synchronize on the collisions between the few that actually collide. You still get a nice speedup. It ain't 100% linear, but it can be pretty good. The frame-by-frame accuracy requirements are often low enough that you can fuzz a little on consistency for performance (that's usually done already; ever heard "If it looks right, it's right"?).
Parallel programming is how we get more speed out of modern architectures. We're being told that Moore's law will no longer show up as GHz like it used to, but as multiple cores. Nobody thinks it's a panacea, except maybe the 13-year-old Xbox kiddies, but they never counted.
As for making the impossible possible, sure it will. There are lots of things you couldn't do with your machine 10-15 years ago that you can do now. Many systems have performance inflection points. As we get faster (either with a faster clock or a larger number of cores), we're going to cross more of them. I remember when you couldn't decode an mp3 in real time on a desktop machine. With the I/O and space costs of uncompressed music, that meant you didn't really listen to music from your computer. Impossible -> Possible.
Re: (Score:2, Interesting)
Young bucks jump on the latest thing without thinking (or the experience to back their thoughts) about whether or not it's the best way to go.
The experienced programmers know that most parallelisable problems are already being solved by breaking them across machines, and the rest won't be helped by 15 bazillion cores. An extra core or so on a desktop is nice; beyond that, it won't be anywhere near the speedup it's hyped to be.
Mod this guy. Short and to the point.
I've worked extensively with parallel programming since the early 90's. There is no silver bullet. Most problems do not parallelize to large scales. There are always special problems that DO parallelize well, like image and video processing. So, if you are watching 20 video streams, your Intel Infinicore (TM) chip will be worth the $$$.
Re:Duh? (Score:5, Interesting)
I work in parallel programming too.
Most problems do not parallelize to large scales.
I'm getting tired of this nonsense being propagated. Almost all real-world problems parallelize just fine, and to a scale sufficient to solve the problem with linear speedup. It's only when people look at a narrow class of toy problems and artificial restrictions that parallelism "doesn't apply". E.g., look at Google; it searches the entire web in milliseconds using a large array of boxes. Even machine instructions are being processed in parallel these days (out-of-order execution etc.).
Name a single real world problem that doesn't parallelize. I've asked this question on slashdot on several occasions and I've never received a positive reply. Real world problems like search, FEA, neural nets, compilation, database queries and weather simulation all parallelize well. Problems like orbital mechanics don't parallelize as easily but then they don't need parallelism to achieve bounded answers in faster than real time.
Note: I'm not talking about some problems being intrinsically hard (NP complete etc.), many programmers seem to conflate "problem is hard" with "problem cannot be parallelized". Some mediocre programmers also seem to regard parallel programming as voodoo and are oblivious to the fact that they are typically programming a box with dozens of processors in it (keyboard, disk, graphics, printer, monitor etc.). Some mediocre programmers also claim that because a serial programming language cannot be automatically parallelized that means parallelism is hard. Until we can program in a natural language that just means they're not using a parallel programming language appropriate for their target.
mod parent up etc. (Score:2)
I think too much emphasis is put, by some, on using a high level language that is specifically designed for parallelism. Personally I've always found C and POSIX threads more than adequate.
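For what it's worth, here's roughly how much ceremony that takes: a minimal C/POSIX threads sketch (hypothetical chunked array sum; compile with -pthread):

#include <pthread.h>
#include <stdio.h>

#define N        8000000
#define NTHREADS 4

static double data[N];

struct chunk { int lo, hi; double sum; };

/* Each thread sums its own slice; no shared writes, so no locks. */
static void *sum_chunk(void *arg) {
    struct chunk *c = arg;
    c->sum = 0.0;
    for (int i = c->lo; i < c->hi; i++)
        c->sum += data[i];
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    struct chunk ch[NTHREADS];
    double total = 0.0;

    for (int i = 0; i < N; i++) data[i] = 1.0;
    for (int t = 0; t < NTHREADS; t++) {
        ch[t].lo = t * (N / NTHREADS);
        ch[t].hi = (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, sum_chunk, &ch[t]);
    }
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += ch[t].sum;
    }
    printf("total = %f\n", total);
    return 0;
}

Split, spawn, join, combine. No language extensions required.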
Re: (Score:3, Informative)
In regards to your comments about mediocre programmers...
You do not recognize the fact that most programmers are mediocre. You can scream at them that it is easy, but they will still end up staring at you like deer in the headlights.
Sorry - we are entering the "Model T" mass production era of software.
Re: (Score:3, Interesting)
I should have said: Most problems do not EASILY parallelize to large scales. ... You do not recognize the fact that most programmers are mediocre. You can scream at them that it is easy, but they will still end up staring at you like deer in the headlights.
Sorry - we are entering the "Model T" mass production era of software.
First, no one said parallelism is easy (in this thread, anyway). I don't care that most "programmers" are mediocre and will never be able to understand parallelism, much like I don't care that most native English-speaking Americans will never be able to understand or speak Mandarin Chinese. Both facts have about the same relevance, since we're not talking about the masses being able to do parallel programming, speak Mandarin, or make Model Ts, for that matter.
Speaking of your "Model T" mass
Re:Duh? (Score:5, Informative)
Re: (Score:3, Insightful)
Easy (Score:2)
Handling event propagation through a hierarchy of user interface elements.
Re:Duh? (Score:4, Informative)
In your compilation example, it is easy to get speedups at the compilation level using something like make -jN. But this assumes that each unit is independent. If you want to apply advanced global optimisations then this is not always the case, and then you hit the harder problem of parallelizing the compilation algorithms themselves rather than just running multiple instances. It's not impossible, but I'm not aware of any commercial compilers that do this.
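(For reference, that flag is GNU make's -j: e.g. "make -j4" runs up to four independent compilation units at once on a quad-core. It buys nothing for the cross-module optimisation case, which is the point above.)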
Re: (Score:3, Interesting)
If a problem doesn't exhibit fine-grained parallelism, then running multiple copies is the *best* you can do. In some situations that is enough (e.g., a large project with lots of separate compilation units). In some situations it isn't, e.g., where you can't split your compilation into separate units because you're trying to run global optimisations across the whole lot.
Re: (Score:3, Informative)
We both agree about how parallelism impacts jobs in the real world. We also agree that if we can speed a job up, then we don't care how it is achieved. The point that I made, which you keep skipping over, is that when there is no fine-grained parallelism available, the key question becomes: do I care about speeding up multiple copies of a job, or do I need a single job to run faster?
The OP's choice of compilation was odd, which is what I remarked on. If I'm compiling
real world problem (Score:5, Funny)
Childbirth. Regardless of how many women you assign to the task, it still takes nine months.
(feel free to reply with snark, but that's a quote from Fred Brooks, so if your snarky reply makes it look like you haven't heard the quote before you will seem foolish)
Re:real world problem (Score:5, Funny)
No issue with parallel programming anywhere. (Score:3, Insightful)
Problems that can and should be parallelized in software already are, for the most part. There is no issue here.
Business processes are often serial (step B depends on the output of step A).
Re: (Score:3, Insightful)
Name a single real world problem that doesn't parallelize. I've asked this question on slashdot on several occasions and I've never received a positive reply.
GUIs (the main event loop) for word processors, web browsers, etc. (Java feels slow because GUI code in Java is slower than human perception). Compilation (see below; it's a serial problem with a just-good-enough parallel solution). State machine simulations like virtualization or emulation. Basically, any task where the principal unit of work is either processing human input or complex state transitions. Only numerical simulations, i.e. problems with simple state transitions, parallelize well.
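To illustrate why the event loop itself resists parallelism, here's a toy model in C (hypothetical event names, not any real toolkit's API): handlers run one at a time, in arrival order, because they all mutate shared widget state.

#include <stdio.h>

enum ev_type { EV_CLICK, EV_KEY, EV_QUIT };

/* Stand-in for the toolkit's blocking event source. */
static enum ev_type next_event(void) {
    static enum ev_type script[] = { EV_CLICK, EV_KEY, EV_QUIT };
    static int i = 0;
    return script[i++];
}

int main(void) {
    for (;;) {
        enum ev_type ev = next_event();
        switch (ev) {              /* each handler runs to completion */
        case EV_CLICK: puts("click handled"); break;
        case EV_KEY:   puts("key handled");   break;
        case EV_QUIT:  return 0;
        }
    }
}

Running two handlers concurrently would mean locking nearly every widget, which is why toolkits stay single-threaded at the dispatch level.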
Real world problems like search, FEA, neural nets, compilation, database queries and weather simulation all parallelize well. Problems like orbital mechanics don't parallelize as easily but then they don't need parallelism to achieve bounded answers in faster than real time.
FEA, neural
Threads: Threat or Menace (Score:5, Insightful)
If these people take years to get it right, what makes you think *you* can get it right in a reasonable time?
The irony is that threads are only practical (from a correctness/debugging point of view) when there isn't much interaction between the threads.
By the way, I got that link from Drepper's excellent "What Every Programmer Should Know About Memory." [redhat.com] It also talks about how threading can slow things down.
Re: (Score:2, Insightful)
Software might be slightly better, as Moore's Law has been prodding us forward. On the other hand, given the number of us working in C-like languages (35+ years old), maybe with an OO twist (25+ years), to do web stuff (15-ish years), one funeral at a time might be more than we can manage. Legacy code, alas, can outlive its authors.
Re:Duh? (Score:5, Interesting)
For one thing, multiple homogeneous cores is NOT new (hetero- either, for that matter), just fitting them into the same die. I've used quad 68040 systems where, due to the ability of the CPUs to exchange data between their copy-back caches, some frequently-used data items were NEVER written to memory, and on System V you could couple processing as tightly or loosely as you wanted. There are some problem sets that take advantage of in-"job" multi-processing better than others, just as some problem sets will take advantage of multiple cores by doing completely different tasks simultaneously.

Simple (very) example: copying all of the files between volumes (not a block-for-block clone). If I have two cores, I can either have a multi-threaded equivalent of "cp" which walks the directory tree of the source and dispatches the create/copy jobs in parallel, each core dropping into the kernel as needed, or I can start a "cpio -o" on one core and pipe it to a "cpio -i" on the other, with a decent block size on the pipe. More cores means more dispatch threads in the first case, and background horsepower handling the low-level disk I/O in the other. In my experience, the "cpio" case works better than the multi-threaded "cp" (due, AFAICT, to the locks on the destination directories).
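For anyone who hasn't seen the two-core pipe trick, it's roughly (standard find/cpio usage; exact flags vary by system):

find . -depth -print | cpio -o | (cd /dest && cpio -idm)

One process spends its time packing the archive stream while the other unpacks it, with the pipe buffer decoupling the two.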
Re: (Score:2)
Outside of the older, first-world programming communities, there are far more young people than old people. Old programmers typically make their living by having a resume with years of experience in a technology or three that aren't going anywhere (you'll be able to keep a roof over your head for the next 50 years keeping Java business systems running).
Oversimplified (Score:4, Insightful)
It's more like: when you've got enough experience, you already know what can go wrong, and why doing something might be... well, not necessarily a bad idea, but cost more and be less efficient anyway. You start having some clue, for example, about what happens when your 1000-thread program has to access a shared piece of data.
E.g., let's say we write a massively multi-threaded shooter game. Each player is a separate thread, and we'll throw in a few extra threads for other stuff. What happens when I shoot at you? If your thread was updating your coordinates just as mine was calculating whether I hit, very funny effects can happen. If the rendering is a separate thread too, and reads such mangled coordinates, you'll have enemies blinking into strange places on your screen. If the physics or collision detection does the same, that-a-way lies falling under the map and even more annoying stuff.
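The coordinate bug is easy to reproduce. A minimal sketch (hypothetical position struct; C with POSIX threads, compile with -pthread). Formally this is a data race, i.e. undefined behavior, which is exactly the point:

#include <pthread.h>
#include <stdio.h>

struct pos { double x, y; };
static struct pos player = { 0.0, 0.0 };   /* shared, unprotected */

static void *mover(void *arg) {
    (void)arg;
    for (int i = 1; i <= 1000000; i++) {
        /* Two separate writes: a reader can catch new x with old y. */
        player.x = (double)i;
        player.y = (double)i;
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, mover, NULL);
    for (int i = 0; i < 1000000; i++) {
        struct pos snap = player;          /* torn read possible here */
        if (snap.x != snap.y) {
            printf("torn: x=%.0f y=%.0f\n", snap.x, snap.y);
            break;
        }
    }
    pthread_join(t, NULL);
    return 0;
}

A mutex around both the update and the snapshot fixes it, at exactly the coordination cost being described.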
Debugging it gets even funnier, since some race conditions can happen once a year on one computer configuration, but every 5 minutes on some hapless user's. Most will not even happen while you're single-stepping through the program.
Now I'm not saying either of that is unsolvable. Just that when you have a given time and budget for that project, it's quite easy to see how the cool, hip and bleeding-edge solution would overrun that.
By comparison, well, I can't speak for all young 'uns, but I can say that _I_ was a lot more irresponsible as the stereotypical precocious kid. I did dumb things just because I didn't know any better, and/or wasted time reinventing the wheel with another framework just because it was fun. All this on the background of thinking that I'm such a genius that obviously _my_ version of the wheel will be better than that built by a company with 20 years of experience in the field. And that if I don't feel like using some best practice, 'cause it's boring, then I know better than those boring old farts, and they're probably doing it just to be paid for more hours.
Of course, that didn't stop my programs from crashing or doing other funny things, but no need to get hung up on that, right?
And I see the same in a lot of hotshots nowadays. They do dumb stuff just because it's more fun to play with new stuff, than just do their job. I can't be too mad at them, because I used to do the same. But make no mistake, it _is_ a form of computer gaming, not being t3h 1337 uber-h4xx0r.
At any rate, rest assured that some of us old guys still know how to spawn a thread, because that's what it boils down to. I even get into disputes with some of my colleagues because they think I use threads too often. And there are plenty of frameworks which do that for you, so you don't have to get your own hands dirty. E.g., everyone who's ever written a web application, guess what? It's a parallel application; it's just the server that spawns your threads.
Re: (Score:3, Informative)
Well there's your first mistake.
That's a recipe for disaster and built in limits to the number of players.
Ideally you separate your server app into multiple discrete jobs and process them with a thread pool. I don't know how well that maps to gaming, but many server apps work very well with that paradigm.
Re: (Score:3, Informative)
E.g., let's say we write a massively multi-threaded shooter game....Debugging it gets even funnier, since some race conditions can happen once a year on one computer configuration, but every 5 minutes on some hapless user's. Most will not even happen while you're single-stepping through the program.
Well that just says that you're doing it wrong. Sure, a massively concurrent system done in the manner you describe would be incredibly tricky to make work correctly. Of course that's not the only way to do it. With a little bit of thought and analysis going in you can write massively concurrent multi player games that just work right first time [usu.edu]. That's a system that had literally thousands of concurrent processes all interacting, with no deadlock or livelock, and no complex debugging to ensure that was th
Re: (Score:2)
> rather than the young bucks own favorite distro like, say, Gentoo, is the best pick for an
> Oracle server's OS
lol
(written on a Gentoo laptop, which is why I lolled)
What are the applications? (Score:3, Interesting)
Given that the management of threads isn't exactly the easiest thing in the world (not the hardest either, mind you), perhaps it would be more beneficial to let the OS determine which threads to run rather than trying to optimize a single app.
For example, many webservers use a semaphore rather than multiple threads to handle request dispatch. The result is that there is less overhead creating and cleaning up threads.
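A sketch of that pattern: a fixed pool of long-lived workers gated by a counting semaphore, so nothing is created or torn down per request (POSIX semaphores; handle_request and the toy queue are hypothetical):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define NWORKERS 4
#define QSIZE    64

static sem_t jobs;                 /* counts pending requests */
static int queue[QSIZE];
static int head = 0, tail = 0;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

static void handle_request(int req) { printf("handled %d\n", req); }

static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        sem_wait(&jobs);                   /* sleep until work arrives */
        pthread_mutex_lock(&qlock);
        int req = queue[head++ % QSIZE];
        pthread_mutex_unlock(&qlock);
        if (req < 0) return NULL;          /* shutdown sentinel */
        handle_request(req);
    }
}

static void submit(int req) {
    pthread_mutex_lock(&qlock);
    queue[tail++ % QSIZE] = req;
    pthread_mutex_unlock(&qlock);
    sem_post(&jobs);
}

int main(void) {
    pthread_t tid[NWORKERS];
    sem_init(&jobs, 0, 0);
    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int r = 1; r <= 10; r++) submit(r);
    for (int i = 0; i < NWORKERS; i++) submit(-1);
    for (int i = 0; i < NWORKERS; i++) pthread_join(tid[i], NULL);
    return 0;
}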
Re:What are the applications? (Score:4, Funny)
Re: (Score:3, Funny)
Sure, but you can now run your infinite loops in half the time as before.
Halving the time to run an operation? That's improving quality, right there.
Re: (Score:2)
So there, myth busted, show me your infinite loop and I'd show you my sledgehammer...
Re:What are the applications? (Score:4, Interesting)
Also: Management of threads is mostly hard because we're mostly still using such low-level tools to do it. Semaphores, locks, and threads are the equivalent of GOTOs, labels, and pointers. While you can build a higher-level system (message-passing, map/reduce, coroutines) out of semaphores, locks, and threads, I'd argue that's like writing everything in GOTOs instead of doing a proper for loop -- or using a for loop instead of a proper Ruby-style "each" iterator. At the risk of abusing my analogy, yes, sometimes you do want to get under the hood (pointers, threads), but in general, it's much safer, saner, and not that much less efficient to use modern parallel programming concepts.
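To make the analogy concrete, here's a minimal sketch of wrapping the low-level parts (a mutex and a condition variable) into a higher-level blocking channel, so callers just send and receive and never touch a lock; the chan_* names are made up for the example:

#include <pthread.h>
#include <stdio.h>

/* A one-slot blocking channel: the "for loop" built out of the "GOTOs". */
struct chan {
    pthread_mutex_t mu;
    pthread_cond_t  cv;
    int             val;
    int             full;
};

static void chan_send(struct chan *c, int v) {
    pthread_mutex_lock(&c->mu);
    while (c->full) pthread_cond_wait(&c->cv, &c->mu);
    c->val = v;
    c->full = 1;
    pthread_cond_broadcast(&c->cv);
    pthread_mutex_unlock(&c->mu);
}

static int chan_recv(struct chan *c) {
    pthread_mutex_lock(&c->mu);
    while (!c->full) pthread_cond_wait(&c->cv, &c->mu);
    int v = c->val;
    c->full = 0;
    pthread_cond_broadcast(&c->cv);
    pthread_mutex_unlock(&c->mu);
    return v;
}

static struct chan ch = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 };

static void *producer(void *arg) {
    (void)arg;
    for (int i = 1; i <= 5; i++) chan_send(&ch, i);
    chan_send(&ch, -1);                    /* end-of-stream marker */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    for (int v; (v = chan_recv(&ch)) != -1; )
        printf("got %d\n", v);             /* no lock in sight here */
    pthread_join(t, NULL);
    return 0;
}

All the locking mistakes now live in one audited place instead of being scattered across every caller.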
There's still some real problems with pretty much any threaded system, but I suspect that a lot of the bad rap threads get is from insufficient threading tools.
Oh, and about that webserver: using more than one process is pretty much like using more than one thread, and I'd say weighs about the same in this discussion. Webservers, so far, are able to get away with using one thread (or process, doesn't matter) per request (and queuing up requests if there are too many), so that's a bit less of a problem than, say, raytracing, compiling, encoding/decoding video, etc.
Re: (Score:3, Interesting)
The rest of your post, I am in agreement. I used to work on a very large high profile application framework. I
Threading is a headache pure and simple (Score:3, Interesting)
There are two problems: thread synchronization, and race conditions.
Race conditions
A modern CPU architecture uses multiple levels of cache, which aggravate the race condition [wikipedia.org] scenario. For a programmer to write multi-threaded code and "not worry", the architecture must always read and write every value through to memory. That worst case is only needed in a tiny fraction of cases, so the compiler can do much better by working with the memory model [wikipedia.org] of the architecture, instead of assuming that no m
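The memory-model point in miniature: C11's stdatomic lets the programmer say exactly how much ordering a given access needs, instead of paying the write-everything-through worst case everywhere. A sketch (my illustration, not the parent's proposal):

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static int payload;                        /* plain, non-atomic data */
static atomic_int ready = 0;               /* publication flag */

static void *producer(void *arg) {
    (void)arg;
    payload = 42;
    /* "release" orders the payload write before the flag write;
     * only this one store pays for the ordering. */
    atomic_store_explicit(&ready, 1, memory_order_release);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    /* "acquire" pairs with the release: once ready==1 is seen,
     * payload==42 is guaranteed to be visible too. */
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                                  /* spin; fine for a toy */
    printf("payload = %d\n", payload);
    pthread_join(t, NULL);
    return 0;
}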
Re: (Score:3, Interesting)
This is up to the OS to decide. The programmer's job is to provide something for the processors to do. If doing it serially is wasteful, doing it in parallel (or, at least, asynchronously) is the way to go.
Of course, when you think parallel and threads rely on each other's data, you need a suite of tests that can test it pro
Re: (Score:3, Insightful)
Questions (Score:5, Funny)
A1) to To other side. get the
Q2) Why did the multithreaded chicken cross the road?
A4) other to side. To the get
It is funnier in the original Russian.
Re: (Score:2)
Re:Questions (Score:5, Funny)
Q: How many multithreaded person(s) does it take to change a light bulb?
A: 5, 1 at each of the 4 ladders and 1 to pass the light bulb to the lucky one.
Q: How many multithreaded person(s) does it take to change a light bulb?
A: 4, each trying to screw the lightbulb.
Q: How many multithreaded person(s) does it take to change a light bulb?
A: I don't know what happened to them.
Experience (Score:5, Interesting)
That being said I think that if you want to actually make use of many cores you really do have to switch to a language that can give you usage of many threads for free. Writing it manually usually ends up with complications. I find Erlang to be pretty nifty when it comes to these things for instance.
Re: (Score:2)
One reason could be that software engineers with more experience simply already know about these things, and have faced off against the many problems with concurrency. Threads can be hell to deal with, for instance. So because of things like that, they don't show any interest.
I think making programming too different from natural human thought processes will result in less manageable code and probably less performance & profit for effort. Multithreading within an application is great for some things, like heavy mathematical tasks that aren't strictly linear, predictable jobs which can be broken up and pieced together later, etc. But why should programmers be forced to learn how to do this? I think it's up to operating systems and compilers to work this out, really. The poi
Re: (Score:2)
Human brains are not like computers and are not serially oriented:
http://scienceblogs.com/developingintelligence/2007/03/why_the_brain_is_not_like_a_co.php [scienceblogs.com]
But our conscious mind might be.
For the same reason a programmer should learn anything -- to understand what is going on in the background.
Re: (Score:2)
For the same reason a programmer should learn anything -- to understand what is going on in the background.
Understanding is fine, but having to take into consideration that your threaded app needs to be able to run on a single, dual, n^2 core processors, putting in contingencies and trying to slice up your code for this purpose and at the same time making it readable and logical to onlookers, is a bit rich.
It's not the job of a C++ programmer to have to take into account where in memory their code might be running, or even what operating system is running, in many instances (of course they should be allowed t
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
This makes sense coming from this company, since one of their strong points always has been creating good development environments for the not-highly-specialized programmers o
Re: (Score:3, Insightful)
Computer software is notorious for not understanding what the operator wants ("It looks like you are writing a sorting algorithm.."), what makes you think this will be any different?
(I am not knocking GA coding methods, but using them as a blanket justification for job security is misguided at best)
Re: (Score:2)
Re: (Score:3, Interesting)
In ten years, efficient programming won't be difficult, it will be impossible, unless we evolve our engineering concepts dramatically to adapt to this paradigm shift (sorry for the cliche phrase, but it's apt).
I think I agree with this. I think that the key is going to be moving to an architecture based far more on message passing than on shared memory (why? because we have really good evidence that it scales, and there's far less hair-loss involved when things go wrong; shared-memory parallelism is infamous for schrödinbugs and heisenbugs). The other advantage of doing this is that extending the program to work across more than one computer is far easier too.
I believe the only way will be to use genetic algorithms (themselves suited to multiprocessors) to adaptively compile code, effectively evolving it until it's optimized.
I take it that this is using a ge
Re: (Score:2)
I guess you are talking about local desktop programming, because in enterprisey development, multi-process or multi-threaded programming on servers with more than 4 CPUs has been the most common environment since mainframe times.
Experienced developers are used to dealing with complex parallel problems. The basic experience most slashdotters should be familiar with is forking a process to handle a socket connection in plain C.
So th
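That classic idiom, for anyone who hasn't written it: accept() a connection, fork(), and let the child serve it. A bare-bones sketch (hypothetical port, error handling omitted):

#include <string.h>
#include <unistd.h>
#include <signal.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8080);    /* hypothetical port */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 16);
    signal(SIGCHLD, SIG_IGN);              /* let the kernel reap children */

    for (;;) {
        int conn = accept(srv, NULL, NULL);
        if (conn < 0) continue;
        if (fork() == 0) {                 /* child: serve this client */
            close(srv);
            write(conn, "hello\n", 6);
            close(conn);
            _exit(0);
        }
        close(conn);                       /* parent: back to accept() */
    }
}

One process per connection: crude, but isolated and dead simple to reason about, which is why it predates threads.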
Re: (Score:2)
Spoken like a true parallel programming neophyte. 14 years ago, I was programming on a 64-core system; a couple of years before that, a 32-core system. When you don't have enough perspective on this matter, you're easily susceptible to some crazy idea that "eight! I mean think about that, eight cores is quite a bit". You're even more susceptible to the idea that parallel systems are going to follow some Moore-ish curve, when they've already been in and out of fashion (and up and down in scale) several time
Re: (Score:2)
I find something strange in the way we're going wide rather than high, especially with Intel CPUs. Most of the Core 2 range overclocks VERY nicely on air, with the new E8400+ hitting the 4GHz mark often with ease. With a GOOD motherboard, RAM and cooling, 5GHz is possible, and even 6GHz in the *extreme* (read: liquid nitrogen) cases. I can't help but wonder if we wouldn't be seeing higher speeds if there wa
parallel for years (Score:2)
And what about seti@home, folding@home, and all the other massively parallel projects out there? Surely you're not saying that doesn't apply to multi-core either. I think that if you stop and look around you'll see it. But if you're only basing your opinion on your book sales, then maybe there's another problem.
More than a trend, it's a necessity (Score:5, Insightful)
Right now, parallel development techniques, education, and tools are all lagging behind the hardware reality. Relatively few applications currently make even remotely efficient use of multiple cores, and that includes embarrassingly parallel code that would require only minor code changes to run in parallel and no changes to the base algorithm at all.
However, if you look around, the tools are materializing. Parallel programming skills will be in hot demand shortly. It's just a matter of time before the multi-core install base is large enough that software companies can't ignore it.
It's been mainstream for years (Score:3, Insightful)
Multi process apps have been common in the business and server app space for almost two decades.
Multi thread apps have been common in the business and server world for a few years now too.
To all having the will it/won't it go mainstream argument: You missed the boat. It's mainstream.
Reinders Is Wrong: Threads Are Not the Answer (Score:5, Insightful)
Reinders is not an evangelist for nothing. He's more concerned about future-proofing Intel's processors than anything else. You listen to him at your own risk because the industry's current multicore strategy will fail and it will fail miserably.
Threads were never meant to be the basis of a parallel computing model, but a mechanism to execute sequential code concurrently. To find out why multithreading is not part of the future of parallel programming, read Nightmare on Core Street [blogspot.com]. There is a better way to achieve fine-grain, deterministic parallelism without threads.
Re: (Score:2, Insightful)
Your COSA stuff has already been investigated by researchers before. Basically, describing functional programming in a graph doesn't achieve anything. And if you want to parallelise like this (which is still nowhere near as efficient as hand-optimised coarse-grained parallelisation):
To evaluate (2*3) + (4*6):

Core 1: (2*3) -> 6    Core 2: (4*6) -> 24
Then, on one core:    (6+24) -> 30
you'll probabl
Re:Reinders Is Wrong: Threads Are Not the Answer (Score:5, Insightful)
Not really. There is a way to design a multicore processor such that only neighboring cores cooperate on related computation. It is part of the self-balancing mechanism. I can't go into detail but suffice it to say that if you keep your inter-core communication performance penalty at a fixed level regardless of the number of cores, you have a winner.
Re: (Score:2)
Threads were never meant to be the basis of a parallel computing model but as a mechanism to execute sequential code concurrently. To find out why multithreading is not part of the future of parallel programming, read Nightmare on Core Street. There is better way to achieve fine-grain, deterministic parallelism without threads.
You have it exactly right, but I don't think multicore processing is going away soon. Rather, I think the solution will be in things like hypervisors and changes in kernels, etc., to allocate resources in a responsible way. Multicore CPUs don't have to be useless, but this time it's a case of the mountain needing to come to Mohammed, not the other way around.
Re: (Score:2)
Really, a copy/paste troll on threading? WTF?
And yes, I'm calling it a troll unless you stop quoting that "150 years after Charles Babbage" BS, and start making your point within the comment, instead of in a rambling five-page blog post which links to a rambling whitepaper, at the end of which, we finally get a vague idea of what you might be talking about -- and we find that it's not really relevant to
Re: (Score:2)
The project itself I suppose is pretty interesting...
But I'll not believe the "everyone in computer science since Charles Babbage has been wrong" BS until the entire industry is using the proposed model.
Old dogs and new tricks (Score:5, Interesting)
Ok, few here are old enough to be able to. But take object oriented programming. I'm fairly sure a few will remember the pre-OO days. "What is that good for?" was the most neutral question you might have heard. "Bunch o' bloated bollocks for kids that can't code cleanly" is maybe more like the average comment from an old programmer.
Well, just like in those two cases, what I see is that there are times when it's useful and times when you're better off with the old ways. If you NEED all the processing power a machine can offer, in time-critical applications that need as much horsepower as you can muster (e.g., games), you will pretty much have to parallelize as much as you can. Although until we have rock-solid compilers that can make use of multiple cores without causing unwanted side effects (and we're far from that), you might want to stay with serial computing for tasks that needn't be finished DAMN RIGHT NOW, but have to be DAMN RIGHT, no matter what.
Re: (Score:2)
Their Research Labs are doing a lot of good work with experimental language features [tomasp.net], and many of them are getting their way into the
This makes sense coming from this company, since one of their strong points always has been creating good development environments for the not-highly-specia
Re: (Score:2)
To an extent they were right; remember, at the time OO was being pushed as the ultimate answer to Life, the Universe and Everything. If you read some of the more academic OO books even today, they truly are "a bunch o' bloated bollocks".
Then there was UML, Webservices, XML, Threading, Java,
Parallel tools are still pretty weak (Score:3, Informative)
Re: (Score:2, Interesting)
Re: (Score:2)
Secondly, even an assembler-level compare-and-exchange isn't always atomic; it depends on the CPU. Intel (IIRC) has the LOCK prefix to guarantee that.
There are strategies you can use to handle non-atomic assembler instructions, and it's best to employ these where possible.
I'm surprised Europe is down. (Score:2)
Parent is first reply that gets it... (Score:2, Insightful)
The future isn't "multi-threaded" unless you count SPMD, because architecturally the notion of coherent shared memory is always going to be an expensive crutch. Real high-performance stuff will continue to work with distributed, local memory pools and fast inter-node communication... whether the nodes are all on chip, on bus, in the box, in the da
Re: (Score:2)
(This is why most people stick with SIMD for local computing and MISD for grids.)
Those using MPI for parallelization are probably
That's why you use POSIX threading... (Score:2)
It's not that tough, really.
Parallel programming has been with us of years! (Score:4, Insightful)
Want to serve multiple users on multiple CPUs with your web pages? Then write a single-threaded program and let Apache handle the parallelism. Same goes for JavaEE, database triggers, etc., going all the way back to good old CICS and COBOL.
It is very rare that you actually need to do parallel programming yourself. Either you are doing some TP-monitor-like program which schedules tasks other people have written, in which case you should use C and POSIX threads (anything else will land you in deep sewage), or you are doing serious science stuff, in which case there are several "standard" Fortran libraries to do multithreaded matrix math; but if the workload is "serious" you should be looking at clustering anyway.
Well Said. (Score:2)
This whole article smells of hype...
On the contrary... (Score:2)
Who is this mythical "someone else"? I'd like to meet them. Incidentally, since when have database triggers been parallel systems? The LAST thing you want in a database is these sorts of things running in parallel; that's why you have locking!
"It is very rare that you actually need to do parallel programing yourself."
Err , if you count threads as parallel programming then
Re: (Score:3, Interesting)
Not So Great (Score:4, Insightful)
Been there, done that. Good from far, but far from good.
As an engineer straight out of college, I was very interested in parallel programming. In fact, we were doing a project on parallel databases. My take is that it sounds very appealing, but once you dig deeper, you realise that there are too many gotchas.
Considering the typical work-time problem, let's say a piece of work takes n seconds to complete on 1 processor. If there are m processors, the work gets completed in n/m seconds. Unless the parallel system can somehow do better than this, it is usually not worth the effort. If the work is perfectly divisible between m processors, then why have a parallel system? Why not a distributed system (like Beowulf, etc.)?
If it is not perfectly distributable, the code can get really complicated. Although it might be very interesting to solve mathematically, it is not worth the effort if the benefit is only 'm'. This is because, per Moore's law [wikipedia.org], the speed of the processor will catch up in k*log m years (k being the doubling period: an m-fold speedup takes log2(m) doublings). So, in k*log m years, you will be left with an unmaintainable piece of code which runs only as fast as a serial program on more modern hardware.
If the parallel system increases the speed factor by better than 'm', such as by k^m, the solution is viable. However, there aren't many problems that allow such a dramatic improvement.
What may be interesting are algorithms that take serial algorithms and parallelise them. However, most thread-scheduling implementations already do this (old shell scripts can also utilise parallel processing using these techniques). Today's emphasis is on writing simple code that will require less maintenance, rather than on linear performance increases.
The only other economic benefit I can think of is economy of volumes. If you can get 4GHz of processing power for 1.5 times the cost of a 2GHz processor, it might be worth thinking about it.
Re: (Score:2)
Considering the typical work-time problem, let's say a piece of work takes n seconds to complete on 1 processor. If there are m processors, the work gets completed in n/m seconds. Unless the parallel system can somehow do better than this, it is usually not worth the effort. If the work is perfectly divisible between m processors, then why have a parallel system? Why not a distributed system (like Beowulf, etc.)?
Wait, what? How is this insightful? If you have a piece of work that can be completed faster by using multiple processors concurrently... why, that sounds an awful lot like parallel processing! Also, to say that a problem that takes N seconds to complete with 1 processor takes N/M seconds to complete with M processors is just plain wrong. N/M seconds would be the amount of time taken by a theoretically 'perfect' setup; it cannot be achieved in practice (I am fairly certain, can someone correct me if I'm wr
Re: (Score:2)
You're wrong. It can be achieved (in particular cases). It can even sometimes be surpassed, but then it's because of some superlinear effect, like the work being exactly the right size to not fit into cache or RAM on one processor, but small enough to fit when distributed; so when distribution is used, the cache is effectively used
Re: (Score:2)
Incorrect account of Moore's Law (Score:2)
If you are working on a single user system such as a word processor, parallelism has little significance. But if you are a Google, wanting to deliver similar but i
oops sorry (Score:2)
Re: (Score:2)
One algorithm that can be greatly parallelized is searching, which is a very frequent operation in all sorts of applications. Instead of searching all the data, finding the result, and then applying the required job to it, in a parallel environment the job can be executed as soon as one of the threads finds the result; which means speeding up executions in tree an
Well, I'm nearly 60, looking at Erlang (Score:2)
Hardware cost vs education (Score:2)
The great 1960's and 70's engineering generations?
The DoD Ada people?
You now have generation wintel.
Dumbed-down, corporatized C coders.
One useless OS on a one or 4 core chip.
When game makers and gpu makers want to keep you in the past due to
the lack of developer skill you know you have problems.
The rest of the world may not have the best hardware, but they do try and educate the
next generation
Re: (Score:2)
Maybe I'm not that dumbed down, but we are out there and I don't think I'm particularly special...
Hi, European dev guy here.... (Score:2)
We hear over and over again "it's not going to go mainstream", "it's hard to get right", "threads are a recipe for disaster", "synchronisation is just too much work to debug", "it's a niche area", etc. etc.
Even occasionally "It'll never take off".
Well, guys, it has. A lot of peopl
WTF question is this???? (Score:4, Insightful)
- When Intel was still a barely known brand, other companies were already selling heavy-iron machines with multiple CPUs for use in heavy server-side environments (they didn't run Windows, though). Multi-cores are just a variant of the multiple-CPU concept.
The spread of Web applications just made highly multi-threaded server-side apps even more widespread; they tend naturally to have multiple concurrent users (<rant>though some web-app developers seem to be sadly unaware of the innate multi-threadedness of their applications
As for thick client apps, for anything but the simplest programs one always needs at least 2 threads, one for GUI painting and another for processing (either that, or some fancy juggling a la Windows 3.1).
So, does this article mean that Japan, China, India and Russia had no multi-CPU machines until now?
An emerging class of problems (Score:2)
SDR, software defined radio. One of these can easily saturate a gigabit ethernet link with data, and we'll want the computer to automatically sort signal from noise, and determine if we're interested in t
Read TFA (Score:2)
>my Threading Building Blocks (TBB) book, sold more copies in Japan in Japanese
>in a few weeks than in the first few months worldwide in the English version
>(in fact - it sold out in Japan - surprising the publisher!)
>contributors and users of the TBB open source project are worldwi
Threads v. Processes (Score:2)
The other day I realized that I don't really know why threads are supposed to be better than processes, other than because "multithreading" sounds cooler and it's not old and boring and well-understood like "multiprocessing". I'm asking this sincerely: why do people only talk about multithreading whenever parallel programming comes up lately? It seems like IPC is marginally easier with threads, but design is much trickier, so what's the big win? Is it a CPU optimization thing?
New or old programmers, still a HARD problem (Score:3, Interesting)
As many have said, large scale parallel systems are not new. Just because we need a solution to the problem doesn't mean that it will appear any time soon. Some problems are very difficult and involve not only new technologies and programing models but major re-educational efforts. There are many topics in physics and mathematics that only a small number of people have the intellectual skill and predilection to handle. Of all the college graduates, what percent complete calculus, let alone take advanced calculus? Pretty small number.
My prediction is that the broad base of programmers will have the tools and be able to do some basic parallelism. A small number of programmers will do the heavy-duty parallel programming, and this will be focused on very high-value problems.
BTW, this Intel guy, while addressing a real issue, seemed to be doing marketing for his toolkit/approach. Sounds like a guy trying to secure his budget and grow it for next year.
Maybe because we've seen the hype cycle before... (Score:4, Insightful)
And then we entered the die-shrink and clock-speed era: clock speeds doubled every 14 months or so for ten years, and we went smoothly from 60 MHz to 2 GHz. Much of the enthusiasm for parallel programming died away; why sweat blood making a parallel computer when you can wait a few years and get most of the same performance?
Clock speeds hit a wall again about five years ago. If the rate of increase stays small for another five years, the current cycle drought will have outlasted the 1980's slowdown. I have a great deal of sympathy for parallel enthusiasm (I hacked on a cluster of 256 Z80's in the early 80's), but I think it won't really take off until we really have no other choice, because parallelism is hard.
Re: (Score:2)
Re: (Score:2)
Re:You can still make large apps without concurren (Score:5, Funny)
Re: (Score:2)
Instead of addressing the root problems with concurrency, we are just going to see super-high-level languages that have no bearing on or relationship to the actual hardware of the underlying machine.
In my opinion and experience, higher level tools are useful for multiproc machines, if not completely necessary. For example, if you break up a matrix multiplication into nested loops in C, you've lost some essential high-level information about the problem. The compiler can try and guess whether it can be parallelized, and it's never quite perfect. It makes more sense to convey the original problem to the compiler, using higher level constructs. I've used Fortran 90 with a good compiler to do just this.
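By way of illustration, not the poster's Fortran 90 code: the same idea in C with an OpenMP pragma, where the annotation hands the compiler the high-level fact (independent iterations) that is hard to re-infer from bare loops. Build with: cc -fopenmp mm.c

#include <stdio.h>

#define N 512

static double a[N][N], b[N][N], c[N][N];

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            a[i][j] = 1.0;
            b[i][j] = 2.0;
        }

    /* The pragma asserts the outer loop's iterations are independent. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }

    printf("c[0][0] = %f\n", c[0][0]);     /* 2*N = 1024 */
    return 0;
}

Without -fopenmp the pragma is ignored and the program is still plain serial C, which is part of the appeal.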
Re: (Score:3, Insightful)