How We'll Program 1000 Cores - and Get Linus Ranting, Again
vikingpower writes For developers, 2015 got kick-started mentally by a Linus Torvalds rant about parallel computing being a bunch of crock. Although Linus' rants are deservedly famous for their political incorrectness and (often) for their insight, it may be that Linus has overlooked Gustafson's Law. Back in 2012, the High Scalability blog already ran a post pointing towards new ways to think about parallel computing, especially the ideas of David Ungar, who thinks in the direction of lock-less computing of intermediate, possibly faulty results that are updated often. At the end of this year, we may be thinking differently about parallel server-side computing than we do today.
Mutex lock (Score:5, Funny)
All others ended up in a mutex lock situation so I had a chance to do the first post
Re:Mutex lock (Score:5, Funny)
Re:Mutex lock (Score:5, Funny)
A lot of US were busy-waiting.
Re:Mutex lock (Score:5, Funny)
Re: (Score:3)
In any case - a multi-core machine can also handle multiple different tasks simultaneously; it's not always necessary to break a single task down into subproblems.
The future for computing will be to have a system that can adapt and avoid single resource contention as much as possible.
Re: (Score:3)
The core is already dozens of times faster than memory and thousands of times faster than storage
When you add more cores, you can also add more memory bandwidth, if you couple them closely to memory controllers. This is how multiprocessor PCs work today. Hell, even some processors with more cores in them have more memory buses; it's not just adding chips that gives you more bandwidth.
Linus Lock (Score:3)
It isn't, though, except for integer operations and tossing things around. Floating point core elements have a ways to go yet to get to single cycle for everything, and so spreading math among cores still saves time. OS folk like Linus may tend to think in terms of byte-to-BusSize manipulation. A lot of us deal with more nuanced data and operations. I *guarantee* you that a multicore processor will chew up properly designed image manipulation tasks a go
Re: (Score:3)
I use SSD, you insensitive clod!
Then, after all blocks on the drive have been written to, you wait for a second while the drive moves data away and clears a sector so there's space to write to.
SSDs have far better average write speeds, but far worse worst case write speeds. Using them for anything timing critical without a battery backed up controller is asking for trouble.
"Use TRIM", I hear from the peanut gallery. Except that there are no RAID controllers (or software RAIDs) that actually support TRIM in practice. Nor does TRIM work fo
Pullin' a Gates? (Score:4, Interesting)
"4 cores should be enough for any workstation"
Perhaps it's an over-simplification, but if it turns out wrong, people will be quoting that for many decades like they do Gates' memory quote.
Re: (Score:2)
Linux Workstation: 16 cores = way faster builds than 4 cores.
CAD workstation: I imagine a lot of geometry processing is parallelized... the less waiting the better (either format conversion or generating demo videos etc. eats up a lot of CPU)
Video workstation: That's just a blatantly obvious use for multiple cores...
Linux HTPC: I wanna transcode stuff fast... more cores
Linux Gaming: These days using at least 4 cores is getting more common...
Things that I've often seen that are *broken* for in
Re:Pullin' a Gates? (Score:5, Interesting)
If you went and read Linus' rant, then you'll find you are actually reinforcing his argument. He says that except for a handful of edge use-cases, there will be no demand for massively parallel in end user usage and that we shouldn't waste time that could be better spent optimizing the low-core processes.
The CAD, video and HTPC use-cases are already solved by the GPU architecture and don't need to be re-solved by inefficient CPU algorithms.
Your Linux workstation would be a good example, but is a very low user count requirement and can be done at the compiler level and not the core OS level anyway.
Your Linux gaming machine shouldn't be doing more than 3/4 cores of CPU and handing the heavy grunt work off to the GPU anyway. No need for a 64 core CPU for that one.
Redesigning what we're already doing successfully with a low number of controller/data-shifting CPU cores managing a large bank of dedicated rendering/physics GPU cores and task-specific ASICs for things like 10Gb networking and 6Gb/s IO interfaces is pretty pointless. That is what Linus is talking about, not that we only need 4 cores and nothing else.
Re: (Score:2)
So, if someone wants to optimize a critical app-specific operation "foo()" in their app and make it go 4 times faster using 4 cores, they are crazy?
Your argument implies that other than these so-called
Re: (Score:2)
Re: (Score:2)
What if the cores don't become much smaller while cores are added to your PC? Your general desktop/workstation can have up to 16 cores, each of which is more powerful than the previous generation core. Should we still do single-threaded programming for any time-critical foo() and run roughly 10 times slower?
There's plenty of code that would benefit from the speedup of multi-core programming, not just some niche code.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
It already is wrong...
Linux Workstation: 16 cores = way faster builds than 4 cores.
Did the 4 core CPU have 1/4th of the transistor count of the 16 core CPU? Then I'd expect it to be much slower, of course. Linus' point was that a 4 core CPU with the same transistor count as a 16 core CPU (the extra transistors spent on more cache, better out-of-order execution logic, more virtual registers, and so on) will be faster on almost every task. So cores beyond 4 (the ballpark number Linus threw out) make sense only if you really cannot spend any more transistors making those 4 cores faster, but still have die s
Re: (Score:3)
Except that Bill Gates never actually said the so-called "quote" that is attributed to him.
Re: (Score:3)
Thanks, interesting document, found here [uwaterloo.ca]. The audio is really bad at the beginning and fluctuates throughout the talk. The interesting bit that you refer to is at 21 minutes from the start.
I'm trying to type in what he said directly from the audio:
The 16-bit design gave us a megabyte of memory. The 8086 has a 20-bit address. It is really a segmented 16-bit data path with segment registers that are really indexes. It is a 1-MB address space. And in this original design I took the upper 384K and tied it to a certain amount to provide for memory video, the ROM and I/O. And that left 640K for general purpose memory. And that leads to today's situation where people talk about the 640K barrier. The limit to how much memory you can put to these machines. I have to say that in 1981 while making those decisions I felt like I was providing enough freedom for 10 years. That is, a move from 64K to 640K felt like something that would last a great deal of time. Well, it didn't. It took only 6 years before people started to see that as a real problem.
Fortunately, there is a reasonable solution. Intel has moved forward with its chip families; the 286 chip introduced in 1984 moves us to a 24-bit address space (mumbles about segmented indirection being not that good). That is sort of an intermediate milestone. In 1986 we moved up to the 386, where we get a full 32-bit offset to these segments that have been designed in this architecture. So what we have is a machine that can address 4GB of RAM. And I have to say with all honesty, I believe that it will take us more than 10 years to use up that address space.
So he never makes that exact quote, however one can understand why people picked it up. Essentially, BG thought in 1981 that 640K would be enough for everybody for a long while. Note that he was reasonably prudent regarding using up the 32-bit address space (that ship
Re: (Score:3, Insightful)
Why not? Currently Firefox has problems rendering (loading) two pages simultaneously, although it should be able to handle tens, using several cores.
Same with Evince (which is crap anyway), it cannot do anything in parallel, should be able to use tens of cores.
Javascript? Although the language is the worst I have seen since APL, a smart compiler could at least in some cases parallelize it (maybe with speculative execution or the like).
And so on.
It will turn out to be as wrong as "640k".
Re: (Score:3)
hmmm... Linus sounds right to me too. He specifically said, or almost, that people wanting to load 10 pages in sandboxed Firefox processes/threads in parallel could find a use for 16 cores ;-)
Re: (Score:2)
But by the time you've finished reading the first paragraph of the first page, the other nine are loaded even if you can't parallelise.
Re: (Score:2)
Nope, I only say that because I already thought the same way before I was aware of his view. It happens all the time.
Re:Pullin' a Gates? (Score:5, Insightful)
Why not? Currently Firefox has problems rendering (loading) two pages simultaneously, although it should be able to handle tens, using several cores.
Same with Evince (which is crap anyway), it cannot do anything in parallel, should be able to use tens of cores.
Javascript? Although the language is the worst I have seen since APL, a smart compiler could at least in some cases parallelize it (maybe with speculative execution or the like).
And so on.
It will turn out to be as wrong as "640k".
Javascript is generally used in an event-driven manner, so it will perform quite well on a single core. Firefox having trouble loading multiple pages simultaneously should still be IO-bound, not CPU-bound, and if the engine has trouble, then it's an SW architecture problem where more cores will not really help.
Linus' point was that taking a 6 core CPU and replacing 2 cores with more cache and more transistors per core should make almost anything on the desktop run faster.
Re: (Score:3)
Linus' point was that taking a 6 core CPU and replacing 2 cores with more cache and more transistors per core should make almost anything on the desktop run faster.
There's an element of truth to this, but on the other hand, cache space is already big enough that it hits the law of diminishing returns. Yes, the biggest performance hits in current computing are cache misses. But cache misses are already unexpected events, and cache misses are of biggest concern to the user when there are lots of them at once -- ie when iterating through a large bit of data. Text searches on large documents in a complex format (eg MS Word). Making a global change to a large file. These a
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
First, that's with a single thread and a single security context. If each one is an isolated sandbox it's not the case (trust me on this: it's my research area and we've done a lot of benchmarking). Second, even if it were true, it would be a lot less power efficient. If you can parallelise your workload, then two 1.5GHz cores will use less power than one 3GHz one. Four 750MHz cores will use less still.
Until a few years ago, most computers had a single core, so there wasn't much point trying to explo
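To put a rough number behind the power claim above (standard CMOS reasoning, my addition, not the poster's): dynamic power is roughly

    P_dyn ≈ α · C · V² · f

and sustaining a higher clock f generally requires a higher supply voltage V. Two cores at f/2 running at a lower voltage V_low deliver comparable throughput on a parallelisable workload for a total of roughly 2 · αC · V_low² · (f/2) = αC · V_low² · f, which is below the single full-speed core's αC · V_high² · f simply because V_low < V_high.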
Re: (Score:2)
It doesn't have to be truly parallel, just separate. There's a difference.
Re: (Score:3, Interesting)
Linus's argument basically boils down to, "Parallel algorithms are sorcery, and the only places they matter are applications that demand performance, which are indeed increasingly using parallelism".
Of course you don't need, say, a 50-threaded version of vi or alsamixer or whatever. But for apps that need performance, increasingly they have to get it from threading. And there's nothing "magical" about parallelism. Perhaps in Linus's dislike for C++ he's missed how trivially easy it's gotten to launch
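For what it's worth, here is how little ceremony that takes in C++11 (a minimal sketch of my own; sum_range and parallel_sum are made-up names, not anything from the post):

    #include <cstddef>
    #include <functional>
    #include <future>
    #include <numeric>
    #include <vector>

    // Hypothetical work function: sum a half-open range of the vector.
    static long sum_range(const std::vector<int>& v, std::size_t lo, std::size_t hi) {
        return std::accumulate(v.begin() + lo, v.begin() + hi, 0L);
    }

    long parallel_sum(const std::vector<int>& v) {
        std::size_t mid = v.size() / 2;
        // One line to push the first half onto another thread...
        auto fut = std::async(std::launch::async, sum_range, std::cref(v),
                              std::size_t{0}, mid);
        // ...while this thread handles the second half.
        long here = sum_range(v, mid, v.size());
        return fut.get() + here;   // get() waits for (joins) the helper
    }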
Re: (Score:2)
Instead of paraphrasing why not just quote him directly? It's not a long article and no one will think 'strawman'.
"Big caches are efficient. Parallel stupid small cores without caches are horrible unless you have a very specific load that is hugely regular (ie graphics)." ...
"the crazies talking about scaling to hundreds of cores are just that - crazy."
In that context, he's right. If you're doing hundreds of dumb cores you should be using gpu already.
Re: (Score:3)
Duh. And that's obviously not what is being discussed here. Step up a level or 20 in the call stack.
"largely" meaning "does a bunch of stuff on its own and only briefly needs to lock common data structures to update based on the results of what it's been doing". That is by far the most common case in the real world. If you have a texture loading thread for a game it only needs to briefly lock the texture structur
Re: (Score:3, Interesting)
It is a niche which will need specific algorithms tuned for the hardware (GPU or other); the pipeline must be kept busy to observe a performance gain. It doesn't scale to general purpose computing.
I feel like this is moving the goal posts. "You will never do massively parallel computing on a CPU because if it's massively parallel it's a GPU not a CPU."
Linus is 100% wrong. What's the "general purpose" computing that we all want? The NCC-1701D's main computer from Star Trek. If I say "Cortana/Siri/Google Now, please rough me out a flyer for our yard sale on Saturday", you're going to be looking at a massively parallel task for the neural networks to not only interpret the voice but then make sense of
Re: (Score:2)
Re:Pullin' a Gates? (Score:4, Informative)
If massive neural nets do reach common use (which isn't that likely; they are somewhat overhyped) then I'd expect to see specific accelerators designed to run them. Probably something like FPGAs: software writes the net, hardware executes it. A general-purpose processor (probably x64 or ARM) does the coordinating, but augmented by specialised or semi-specialised hardware for certain tasks. Very much as we have today with hardware acceleration of 3D graphics or video decoding.
You can see the trend already. 3D acceleration was introduced for graphics, but then repurposed for other things, and followed up with revised graphics architectures designed for non-graphics applications. They are still useless for general-purpose computing, their architecture too limited, but used in conjunction with a general processor they can greatly outperform the processor alone on things like image processing, cryptographic tasks, physics simulation and such. It's now quite common to see this even in consumer applications, with games using physics simulation to provide much more detailed rigid-body simulation than was previously possible - i.e., more bits of shrapnel and chunks of corpse bouncing around when you lob that grenade.
As for neural nets, you probably won't see much need to simulate huge ones. Small ones work surprisingly well, and their applications are really quite limited - they aren't some magic AI bullet that turns into a functional mind if you make them big enough. They excel at classification tasks, so they are very handy in OCR, handwriting recognition, speech recognition and such. Google made one that can recognise cats, and if you can recognise cats then you can recognise other things, so straight away I'm seeing applications in web filter software.
Re: Pullin' a Gates? (Score:2, Insightful)
There has been a pushback against integrating ANNs into mobile platforms. I think low-power real-time classification is simply missing an application in the mass market that can't be solved by offloading to a server. We simply assume that we are continuously connected to a sufficiently large data pipe and the problem goes away. Whether the hardware changes on the server side or not is a question of power savings, but I doubt we will see gains in performance over software implemented on server farms.
Tha
Re:Pullin' a Gates? (Score:4, Insightful)
Actually the quote is just an internet myth; at least, no one has ever found a source for it or anyone who even reports having heard him say it, and Gates denies having said it as well.
Re: (Score:2)
Exactly, he never made the statement he is quoted as saying. There is a massive difference between what he is quoted as saying and what is said in that presentation. He also discusses in other interviews how he wanted the limit to be higher but was restricted by the chip architecture, though he thought it would be good enough for the lifetime of the architecture; he was actually pretty close to being right.
Re: (Score:2)
Microsoft supposedly had a very large influence over which chip went into the IBM PC. Supposedly they are the reason IBM went with a 16-bit chip instead of an 8-bit one, as they talked IBM into changing, and IBM was also considering a 32-bit chip from Motorola.
Linus should try git (Score:4, Funny)
...a tool which he may have heard of. It does connectionless, distributed data management, totally without locks.
Re:Linus should try git (Score:4, Informative)
Also, git is not totally without locks. Try seeing if you can commit at the same time as someone else. It can't be done; the commits are atomic.
Re: (Score:3)
My point is that git knows how to merge. It knows when a merge is required, when it is not, and when it can be done automatically. If you design your data structures properly, the same behaviour can be used in massively parallel systems.
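To make that concrete, here is a toy sketch of a "mergeable" data structure in that spirit (my illustration, not anything from git): each worker owns a slot, so increments never conflict, and combining two copies is a conflict-free element-wise merge.

    #include <algorithm>
    #include <cstddef>
    #include <numeric>
    #include <vector>

    // Grow-only counter: worker i only ever touches slots[i].
    struct GCounter {
        std::vector<long> slots;
        explicit GCounter(std::size_t workers) : slots(workers, 0) {}

        void increment(std::size_t worker) { ++slots[worker]; }

        long value() const {                   // total across all workers
            return std::accumulate(slots.begin(), slots.end(), 0L);
        }

        // Merging two replicas never needs a lock or a manual conflict
        // resolution step: take the element-wise max, like an automatic merge.
        void merge(const GCounter& other) {
            for (std::size_t i = 0; i < slots.size(); ++i)
                slots[i] = std::max(slots[i], other.slots[i]);
        }
    };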
Core of the article (Score:2)
Re:Core of the article (Score:4, Insightful)
The idea isn't that the computer ends up with an incorrect result. The idea is that the computer is designed to be fast at doing things in parallel with the occasional hiccup that will flag an error and re-run in the traditional slow method. How much of a window you can have for "screwing up" will determine how much performance you gain.
This is essentially the idea behind transactional memory: optimize for the common case where threads that would use a lock don't actually access the same byte (or page, or cacheline) of memory. Elide the lock (pretend it isn't there), have the two threads run in parallel and if they do happen to collide, roll back and re-run in the slow way.
We see this concept play out in many parts of hardware and software algorithms, actually. Hell, TCP/IP is built on letting packets flow freely and possibly collide or drop, with the idea that you can resend them. It ends up speeding up the common case: that packets make it to their destination along one path.
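A bare-bones version of that optimistic pattern (my sketch, not the poster's code or any particular hardware transactional memory implementation): do the work assuming nobody interferes, and only fall into a retry loop when a collision is actually detected at commit time.

    #include <atomic>

    std::atomic<long> shared_value{0};

    void optimistic_update(long delta) {
        long seen = shared_value.load(std::memory_order_relaxed);
        for (;;) {
            long wanted = seen + delta;   // speculative work, no lock held
            // Commit only if nobody changed the value since we read it;
            // on a collision, 'seen' is refreshed and we redo the work.
            if (shared_value.compare_exchange_weak(seen, wanted,
                                                   std::memory_order_acq_rel,
                                                   std::memory_order_relaxed))
                return;
        }
    }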
Re: (Score:2)
I'm wondering about what he is thinking for real-world details. For example, a common use case is one thread does searches through a data structure to find an element (as, say, a pointer or an iterator), but before it can dereference it and try to access the memory, some other thread comes along and removes it from the list and frees it. Then your program tries to dereference a pointer or iterator that's no longer valid and it crashes.
The problem isn't that it's no longer in the list. Clearly the other thre
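One common way to defuse that particular hazard (a sketch of mine, not something proposed in the thread) is to hand out reference-counted pointers: a reader that has found an element keeps it alive even if another thread unlinks it from the container before the reader gets around to using it.

    #include <map>
    #include <memory>
    #include <mutex>
    #include <string>

    struct Record { std::string payload; };

    std::mutex table_mutex;
    std::map<int, std::shared_ptr<Record>> table;

    std::shared_ptr<Record> lookup(int key) {
        std::lock_guard<std::mutex> guard(table_mutex);  // brief lock for the search
        auto it = table.find(key);
        return it == table.end() ? nullptr : it->second; // the copy bumps the refcount
    }

    void remove_key(int key) {
        std::lock_guard<std::mutex> guard(table_mutex);
        table.erase(key);  // the Record is freed only after the last reader drops it
    }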
Re: (Score:3)
There are cases where getting exactly the right answer doesn't matter - real-time graphics is a good example. It's amazing the level of error you can have on an object if it's flying quickly past your field of view and lots of things are moving around. In "The Empire Strikes Back" they used a bloody potato and a shoe as asteroids and even Lucas didn't notice.
That said, it's not the general case in computing that one can tolerate random errors. Nor is the concept of tolerating errors anything new. Programme
How parallel does a Word Processor need to be? (Score:4, Interesting)
Or a spreadsheet? (Sure, a small fraction of people will have monster multi-tab sheets, but they're idiots.)
Email programs?
Chat?
Web browsers get a big win from multi-processing, but not parallel algorithms.
Linus is right: most of what we do has limited need for massive parallelization, and the work that does benefit from parallelization has been parallelized.
Re: (Score:2)
Re: (Score:2)
Emacs is a bad OS that can only use one core. If you use Erc (an IRC client inside of Emacs), you will notice the real pain of Emacs. While Erc tries to reconnect to a server, you can do absolutely nothing, not even change to another buffer
Isn't there a version of select() inside emacs? In other words, some kind of non-blocking connect?
Re: (Score:3)
Or a spreadsheet? (Sure, a small fraction of people will have monster multi-tab sheets, but they're idiots.)
Email programs?
Chat?
Web browsers get a big win from multi-processing, but not parallel algorithms.
Linus is right: most of what we do has limited need for massive parallelization, and the work that does benefit from parallelization has been parallelized.
This is kind of silly. Rendering, indexing and searching get pretty easy boosts from parallelization. That applies to all three cases you've listed above. Web browsers especially love tiled parallel rendering (very rarely these days does your web browser output get rendered into one giant buffer), and that can apply to spreadsheets too.
A better question is how much parallelization we need for the average user. While the software algorithms should nicely scale to any reasonable processor/thread count, on the
Re: (Score:2)
indexing and searching get pretty easy boosts from parallelization.
How much indexing and searching does Joe User do? And what percent is already done on a high-core-count server where parallel algorithms have already been implemented in the programs running on that kit?
Web browsers especially love tiled parallel rendering
Presuming that just a single tab on a single page is open, how CPU bound are web browsers running on modern 3GHz kit? Or are they really IO (disk and network) bound?
Re: (Score:2)
The key question is: what common cases can one list (in order of frequency times severity) where users' computers have lagged by a perceptible amount in a way that reduced the user experience, or forced the user to forgo features that would otherwise have been desirable? Then you need to look at the cause.
In the overwhelming majority of cases, you'll find that "more parallelism with more cores" would be a solution. So why not just bloody do it?
Not everybody suffers performan
Re: (Score:2)
If I change the font size, it recalculates the pages and page breaks for the whole book. One CPU running at 100% for a very long time. For a five hundred page book, no problem. For a ten thousand page book, big problem. I'd love it if the re-pagi
Re: (Score:2)
First thought: Why the hell aren't those two (total of) 10,000 page "books" split into their constituent 50 "actual" books?
That's the kind of parallelization and work optimization that needs to take place before algorithm changes.
Re: (Score:2)
Re: (Score:2)
Yes, and it re-renders all the pages as bitmaps at 400% zoom, scales them back down to get proper anti-aliased results, then compresses them with JPEG and stores them into main memory... ...or how about just recalculating the page that you need to display?
Parallel processing is not gonna solve stupidity.
Bad summary, shocking (Score:5, Interesting)
Linus doesn't so much say that parallelism is useless; he's saying that more cache and bigger, more efficient cores are much better. Therefore, increasing the number of cores at the cost of single-core efficiency is just stupid for general purpose computing. Better to just stick more cache on the die instead of adding a core. Or that is how I read what he says.
I'd say the number of cores should scale with IO bandwidth. You need enough cores to make parallel compilation CPU-bound. Are 4 cores enough for that? Well, I don't know, but if the cores are efficient (highly parallel out-of-order execution) and have large caches, I'd wager IO lags far behind today. Is IO catching up? When will it catch up, if it is? No idea. Maybe someone here does?
Re: (Score:2)
Some I/O won't catch up that easily; you can't speed up a keyboard much, and even though we have SSDs we have a limit there too.
But if you break up the I/O as well into sectors, so that I/O contention in one area doesn't impact the I/O in another, by using a NUMA architecture for I/O as well as RAM, then it's theoretically possible to redistribute some processing.
It won't be a perfect solution, but it will be less sensitive.
No locks (Score:3)
Ungar's idea (http://highscalability.com/blog/2012/3/6/ask-for-forgiveness-programming-or-how-well-program-1000-cor.html) is a good one, but it's also not new. My Master's is in CS/high performance computing, and I wrote about it back around the turn of the millennium. It's often much better to have asymptotically or probabilistically correct code rather than perfectly correct code when perfectly correct code requires barriers or other synchronizing mechanisms, which are the bane of all things parallel.
In a lot of solvers that iterate over a massive array, only small changes are made at one time. So what if you execute out of turn and update your temperature field before a -.001C change comes in from a neighboring node? You're going to be close anyway. The next few iterations will smooth out those errors, and you'll be able to get far more work done in a far more scalable fashion than if you maintain rigor where it is not exactly needed.
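A stripped-down illustration of that style (my sketch, not Ungar's code): threads relax their own slice of a temperature field with no barriers at all, reading neighbour cells that may be an iteration stale; later sweeps smooth the error out. Relaxed atomics are used only to keep the unsynchronized reads and writes well-defined.

    #include <atomic>
    #include <cstddef>
    #include <thread>
    #include <vector>

    void relax_slice(std::vector<std::atomic<double>>& t,
                     std::size_t lo, std::size_t hi, int sweeps) {
        for (int s = 0; s < sweeps; ++s) {
            for (std::size_t i = lo; i < hi; ++i) {
                // Neighbours may hold slightly stale values; that's acceptable here.
                double left  = t[i - 1].load(std::memory_order_relaxed);
                double right = t[i + 1].load(std::memory_order_relaxed);
                t[i].store(0.5 * (left + right), std::memory_order_relaxed);
            }
        }
    }

    void relax(std::vector<std::atomic<double>>& t, int sweeps, unsigned nthreads) {
        std::vector<std::thread> workers;
        std::size_t chunk = (t.size() - 2) / nthreads;   // interior cells only
        for (unsigned w = 0; w < nthreads; ++w) {
            std::size_t lo = 1 + w * chunk;
            std::size_t hi = (w + 1 == nthreads) ? t.size() - 1 : lo + chunk;
            workers.emplace_back(relax_slice, std::ref(t), lo, hi, sweeps);
        }
        for (auto& th : workers) th.join();   // one join at the end, no per-sweep barrier
    }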
Mmm... Cores... (Score:2)
'make -j64 bzImage' (Score:2)
Re: (Score:2)
Wrong question. C compilation has linear speedup as each file can be compiled without knowing the others. The question is how he links his kernel, and the answer is on a single core, as there is no other sane way to do it. Fortunately, this problem is almost linear in the input size (assuming good hash-tables), or we would not have any software the size of the kernel.
Re: (Score:2)
C compilation has linear speedup as each file can be compiled without knowing the others
As long as I/O bandwidth is infinite.
Re: (Score:2)
Or you do it on separate machines. But yes. Ideally, it has linear speed-up, if I/O is not a bottleneck. In practice, things are not as nice, although with 4...8 cores and an SSD to feed them you do not notice much.
Re: (Score:2)
No, as long as I/O bandwidth is not the limiting factor. The sort of thing you're compiling can have radically different CPU vs. I/O requirements. Some simple but verbose C code with little optimization might be almost entirely IO limited while some heavy templated C++ and full optimization might be almost entirely CPU limited.
The thing is, there's no way to know what is going to cause a particular person to think "I wish my computer was performing faster". It all depends on the individual and what they use
Re: (Score:2)
Re: (Score:2)
weird (Score:2)
The central claim of Linus seems to be that there are many people out there who claim an efficiency increase from parallelism. While I agree that many people claim (IMHO correctly) an increase in performance (reduction of execution time), within the constraints given by a specific technology level, from symmetric multiprocessing, I have not heard many people claim that efficiency (in terms of power, chip area, component count) is improved by symmetric, general parallelization; and nobody with a good un
Re: (Score:3)
The central claim of Linus seems to be that there are many people out there who claim an efficiency increase from parallelism.
They do, and to an extent they are correct.
On CPUs that have high single-thread performance, there is a lot of silicon devoted to that. There's the large, power-hungry, expensive out-of-order unit, with its large hidden register files and reorder buffers.
There are the huge, expensive multipliers which need to complete in a single cycle at the top clock speed, and so on.
If you dispense with
Shi's Law, Gustafson's Law, Amdahl's Law (Score:3, Insightful)
http://developers.slashdot.org... [slashdot.org]
http://spartan.cis.temple.edu/... [temple.edu]
http://slashdot.org/comments.p... [slashdot.org]
"Researchers in the parallel processing community have been using Amdahl's Law and Gustafson's Law to obtain estimated speedups as measures of parallel program potential. In 1967, Amdahl's Law was used as an argument against massively parallel processing. Since 1988 Gustafson's Law has been used to justify massively parallel processing (MPP). Interestingly, a careful analysis reveals that these two laws are in fact identical. The well publicized arguments were resulted from misunderstandings of the nature of both laws.
This paper establishes the mathematical equivalence between Amdahl's Law and Gustafson's Law. We also focus on an often neglected prerequisite to applying the Amdahl's Law: the serial and parallel programs must compute the same total number of steps for the same input. There is a class of commonly used algorithms for which this prerequisite is hard to satisfy. For these algorithms, the law can be abused. A simple rule is provided to identify these algorithms.
We conclude that the use of the "serial percentage" concept in parallel performance evaluation is misleading. It has caused nearly three decades of confusion in the parallel processing community. This confusion disappears when processing times are used in the formulations. Therefore, we suggest that time-based formulations would be the most appropriate for parallel performance evaluation."
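For reference, the textbook forms of the two laws (my addition, not part of the quoted abstract), with s the serial fraction of the single-processor run time and s' the serial fraction of the N-processor run time:

    S_Amdahl(N)    = 1 / (s + (1 - s)/N)        (fixed problem size)
    S_Gustafson(N) = s' + (1 - s') · N          (fixed run time, scaled problem)

Measuring the serial fraction consistently in time makes the equivalence visible: substituting s = s' / (s' + (1 - s')·N) into Amdahl's formula yields exactly s' + (1 - s')·N.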
Poor slashdot... (Score:3, Insightful)
Few are actually people with a real engineering background anymore.
What Linus means is:
- Moore's law is ending (go read about mask costs and feature sizes)
- If you can't geometrically scale transistor counts, you will be transistor count bound (Duh)
- therefore you have to choose what to use the transistors for
- anyone with a little experience with how machines actually perform (as one would have to admit Linus does) will know that keeping execution units running is hard.
- since memory bandwidth has nowhere near scaled with CPU appetite for instructions and data, cache is already a bottleneck
Therefore, do instruction and register scheduling well, have the biggest on-die cache you can, and enough CPUs to deal with common threaded workflows. And this, in his opinion, is about 4 CPUs in common cases. I think we may find that his opinion is informed by looking at real data of CPU usage on common workloads, seeing as how performance benchmarks might be something he is interested in. In other words, based on some (perhaps ad hoc) statistics.
Re: (Score:2)
Good summary, and I completely agree with Linus. The limit may go a bit higher, up to say 8 cores, but not many more. And there is the little problem that for about 2 decades, chips have been interconnect-limited, which is a far harder limit to solve than the transistor one, so the problem is actually worse.
All that wishful thinking going on here is just ignorant of the technological facts. The time when your code could be arbitrarily stupid, because CPUs got faster in no time, is over. There may also be ot
From personal experience... (Score:2)
But the fact that work is distributed to several cores is just secondary for that kind of work. It is also easy to make most work-intensive code use multipl
Linus is right (Score:4, Insightful)
Nothing significant will change this year or in the next 10 years in parallel computing. The subject is very hard, and that may very well be a fundamental limit, not one requiring some kind of special "magic" idea. The other problem is that most programmers have severe trouble handling even classical, fully-locked code in cases where the way to parallelize is rather clear. These "magic" new ways will turn out just like the hundreds of other "magic" ideas to finally get parallel computing to take off: as duds that either do not work at all, or that almost nobody can write code for.
Really, stop grasping for straws. There is nothing to be gained in that direction, except for a few special problems where the problem can be partitioned exceptionally well. CPUs have reached a limit in speed, and this is a limit that will be with us for a very long time, and possibly permanently. There is nothing wrong with that, technology has countless other hard limits, some of them centuries old. Life goes on.
Ripe for Revolution (Score:3)
Nothing significant will change this year or in the next 10 years in parallel computing.
You might be right but I'm far less certain of it. The problem we have is that further shrinking of silicon makes it easier to add more cores than to make a single core faster so there is a strong push towards parallelism on the hardware side. At the same time the languages we have are not at all designed to cope with parallel programming.
The result is that we are using our computing resources less and less efficiently. I'm a physicist on an LHC experiment at CERN and we are acutely aware of how ineffic
Two points on Linus' post (Score:2)
1.) Linus' wording is pretty moderate.
2.) He's right. Again.
Lots of moving parts (Score:5, Informative)
There are lots of moving parts here. Just adding cores doesn't work unless you can balance it out with sufficient cache and main memory bandwidth to go along with the cores. Otherwise the cores just aren't useful for anything but the simplest of algorithms.
The second big problem is locking. Locks which worked just fine under high concurrent loads on single-socket systems will fail completely on multi-socket systems just from the cache coherency bus bandwidth the collisions cause. For example, on an 8-thread (4 core) single-chip Intel chip having all 8 threads contending on a single spin lock does not add a whole lot of overhead to the serialization mechanic. A 10ns code sequence might serialize to 20ns. But try to do the same thing on a 48-core opteron system and suddenly serialization becomes 1000x less efficient. A 10ns code sequence can serialize to 10us or worse. That is how bad it can get.
Even shared locks using simple increment/decrement atomic ops can implode on a system with a lot of cores. Exclusive locks? Forget it.
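For readers who haven't stared at one recently, this is roughly the primitive whose contention behaviour is being described (an illustrative sketch, not DragonFly's implementation):

    #include <atomic>

    class SpinLock {
        std::atomic_flag flag = ATOMIC_FLAG_INIT;
    public:
        void lock() {
            // Every failed attempt pulls the lock's cache line to this CPU in
            // exclusive state; with dozens of cores across sockets, that
            // cache-line ping-pong over the interconnect is what turns a 10ns
            // critical section into microseconds.
            while (flag.test_and_set(std::memory_order_acquire)) {
                // busy-wait; real code would at least pause or back off here
            }
        }
        void unlock() { flag.clear(std::memory_order_release); }
    };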
The only real solution is to redesign algorithms, particularly the handling of shared resources in the kernel, to avoid lock contention as much as possible (even entirely). Which is what we did with our networking stack on DragonFly and numerous other software caches.
Some things we just can't segregate, such as the name cache. Shared locks only modestly improve performance but it's still a whole lot better than what you get with an exclusive lock.
The namecache is important because something like a bulk build, where we have 48 cores all running gcc at the same time, winds up sharing an enormous number of resources. Not just the shell invocations (where the VM pages are shared massively and there are 300 /bin/sh processes running or sitting due to all the Makefile recursion), but also the namecache positive AND negative hits due to the #include path searches.
Other things, particularly with shared resources, can be solved by making the indexing structures per-cpu but all pointing to the same shared data resource. In DragonFly doing that for seemingly simple things like an interface's assigned IP/MASKs can improve performance by leaps and bounds. For route tables and ARP tables, going per-cpu is almost mandatory if one wants to be able to handle millions of packets per second.
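A rough user-space sketch of that per-CPU indexing idea (mine, not DragonFly's actual code): each CPU searches only its own index, so the hot lookup path never touches another CPU's cache lines, while every index points at the same shared, read-mostly records.

    #include <cstdint>
    #include <memory>
    #include <unordered_map>
    #include <vector>

    struct Route { std::uint32_t prefix; int ifindex; };  // shared, read-mostly data

    class PerCpuRouteIndex {
        // One private index per CPU; a CPU touching only its own needs no lock.
        std::vector<std::unordered_map<std::uint32_t,
                                       std::shared_ptr<const Route>>> index_;
    public:
        explicit PerCpuRouteIndex(unsigned ncpus) : index_(ncpus) {}

        std::shared_ptr<const Route> lookup(unsigned cpu, std::uint32_t dst) const {
            auto it = index_[cpu].find(dst);
            return it == index_[cpu].end() ? nullptr : it->second;
        }

        // Invoked on each CPU (e.g. via a per-CPU work queue) when the shared
        // table changes, so updates stay rare and lookups stay contention-free.
        void install(unsigned cpu, std::shared_ptr<const Route> r) {
            index_[cpu][r->prefix] = std::move(r);
        }
    };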
Even something like the fork/exec/exit path requires an almost lockless implementation to perform well on concurrent execs (e.g. such as /bin/sh in a large parallel make). Before I rewrote those algorithms our 48-core opteron was limited to around 6000 execs per second. After rewriting it's more like 40,000+ execs per second.
So when one starts working with a lot of cores for general purpose computing, pretty much the ENTIRE operating system core has to be reworked: what worked well with only 12 cores will fall on its face with more.
-Matt
Re: (Score:2)
Re: (Score:3)
Re:i'm so tired of political correctness (Score:5, Insightful)
Re: (Score:3)
+1 this would make the best gravestone ever.
Re: (Score:2)
Fuck you. You can't tell me what I can think or say.
So, what you're saying is... his right to tell you things is trumped by your wish to not hear things? Freedom of speech does not mean what you think it means...
Re: (Score:2)
Torvalds is half right (Score:5, Insightful)
The problem is that Linus is discussing two different things at once and so it sounds like he's making a more inflammatory point than he is.
The issue is not whether parallelism is uniformly better for all tasks. The question is, is parallelism better for some tasks. And as Torvalds points out, those tasks do exist (Graphics being an obvious one).
The nature of the workload required for most workstations is non-uniform processing of large quantities of discrete, irregular tasks. For this, parallelism (as Torvalds correctly notes) is likely not the most efficient approach. To pretend that in some magical future our processing needs can be homogenized into tasks for which parallel computing is superior is to make a faith-based prediction on how our use of computers will evolve. I would say that the evidence is quite the opposite: that tasks will become more discrete and unique.
Some fields, though (finance, science, statistics, weather, medicine, etc.), are rife with computing tasks which ARE well suited to parallel computing. But how much of that work happens on workstations? Not much, most likely. So Linus' point is valid.
But I have to take issue with Linus' tone, in which he downplays "graphics" as being a rather unimportant subset of computing tasks. It's not "graphics". It's "GRAPHICS". That's not a small outlier of a task. Wait until we're all wearing ninth-generation Oculus headsets... the trajectory of parallel processing requirements for graphics is already becoming clear -- and it's stratospheric. The issue is this: our desktop processing requirements are actually slowing and, as Linus points out, are probably ill-suited for increased parallelism. But our graphics requirements may be nearly infinite.
Unlike other fields of computing, we know where graphics is going 20 years from now: It's going to the "holodeck".
Keep working on parallel computing guys. Yes, we need it.
Re: (Score:2)
Re:Torvalds is half right (Score:4, Informative)
Re: (Score:3)
In essence, it's already done that way. A System-on-a-chip (SoC) typically has a couple of general-purpose cores, along with sound and video processors. In a full-sized PC, the graphics processing is usually taken to another chip -- in fact another circuit board entirely. Because most of the work the graphics processor (=GPU) does is largely independent of the main processor (=CPU) (the CPU pushes in the data, says "do X with it", the GPU then churns away through the data) it doesn't need to be closely link
Re: (Score:3)
Why not design multi-purpose chips that have some cores optimized for some tasks, and other cores optimized for others
We do have those. Any CPU with an iGPU is such a chip. We've had such CPUs for years and years now. Have you missed out on the last decade of CPU design?
Re: Torvalds is half right (Score:5, Informative)
Re: (Score:2)
Linus sounds like a programmer from 40 years ago
Not necessarily a bad thing to sound like, IMO; 40 years ago you had to think and actually be insightful about what you were undertaking, because the tools and resources were so limited. And, as somebody else has already mentioned, Linus isn't against graphics and multi-core, he is against the stupid fad that blindly demands more cores at the expense of producing better cores (as well as the idiocy of wrapping everything in a graphical front-end, when that actually ends up getting in the way of doing the jo
Re: (Score:2)
Only if you have zero clue about what he is talking about. Note: It is not possible to deduce validity from the way something sounds. That requires actual insight.
Re:Programs people want to use... (Score:4, Insightful)
Indeed. There's tons of CPU-intensive tasks that need to be done in a modern computer game, but they're typically done as:
Rather than...
I really hope with how easy it's gotten in C++11 that more people will make better use of threads. In the first example code, not only do you relegate all of your tasks to the same core, thus hitting performance, but if any one task hangs, all of them hang. It's a terrible approach, but it's the most common. The only case where threads aren't good is where you're doing heavy concurrent read/writes to the same cached data, but in real world apps there's almost always a level where you can launch the thread where this isn't the case, if it's even an issue to begin with in your particular application. The presumption that concurrent access to cached memory will usually or always be a problem (which seems to be Linus's presumption) requires that A) your threads are not doing the majority of their work on thread-local memory, AND B) that the shared data area being read from / written to concurrently is small enough to be cached, AND C) you can't just migrate your threads up in scope N levels to work around any such issue.
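As a concrete example of migrating the threading up a level, a chunked parallel for in plain C++11 might look like this (my sketch; the names are made up and it assumes the per-element work is independent):

    #include <algorithm>
    #include <cstddef>
    #include <future>
    #include <thread>
    #include <vector>

    template <typename T, typename Fn>
    void parallel_for_each(std::vector<T>& items, Fn fn) {
        unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());
        std::size_t chunk = (items.size() + nthreads - 1) / nthreads;
        std::vector<std::future<void>> futures;
        for (std::size_t begin = 0; begin < items.size(); begin += chunk) {
            std::size_t end = std::min(items.size(), begin + chunk);
            // Each worker gets a contiguous chunk, so threads mostly touch
            // their own data rather than contending on shared cache lines.
            futures.push_back(std::async(std::launch::async, [&items, fn, begin, end] {
                for (std::size_t i = begin; i < end; ++i)
                    fn(items[i]);
            }));
        }
        for (auto& f : futures) f.get();   // joins the workers, propagates exceptions
    }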
Re: (Score:3)
BZZT, fail.
1) You didn't define launch_thread.
2) my_struct_array was said, and I quote, "a local-context data structure", so congrats, your data is going to go out of scope on you.
3) The concept of having to write that is absurd because "for (auto&i : container)" is a "do whatever you want, any number of steps, no matching function signature required, inline, on any container whatsoever" built into C++11, *and* it's something that anyone who knows C++11 will know, rather than being something you brewed yoursel
Re: (Score:2)
Only if you have a single I/O device and channel.
NUMA architectures can also apply to disks and other I/O devices.
Of course - it comes with a new set of problems, but there's no golden solution.
Re: (Score:2)
Something I wish I could have in a workstation again is a full-fledged crossbar switch like the Octane and Octane 2 had.
Re: (Score:2)
And most usage on a computer is actually concurrency.
Massive parallelism is a special case, and even then you suffer from concurrency.
Re: (Score:3)
I remember an issue I had a few months ago... we were doing some image processing using the HTML canvas element in a web app... Then we wanted a nightly job to use the same code, so we whipped up a node.js script. Once it was done, to make sure it worked the same way, we compared the results...
They were different. Spent 2 days trying to debug it (they were using the same code for the most part, wtf?).
At the time, I didn't know about canvas fingerprinting (http://en.wikipedia.org/wiki/Canvas_fingerprinting [wikipedia.org]). Most of th
Re: (Score:3)