Multithreading - What's it Mean to Developers? 357
sysadmn writes "Yet another reason not to count Sun out: Chip Multithreading. CMT, as Sun calls it, is the use of hardware to assist in the execution of multiple simultaneous tasks - even on a single processor. This excellent tutorial on Sun's Developer site explains the technology, and why throughput has become more important than absolute speed in the enterprise.
From the intro: Chip multi-threading (CMT) brings to hardware the concept of multi-threading, similar to software multi-threading. ... A CMT-enabled processor, similar to software multi-threading, executes many software threads simultaneously within a processor on cores. So in a system with CMT processors, software threads can be executed simultaneously within one processor or across many processors. Executing software threads simultaneously within a single processor increases a processor's efficiency as wait latencies are minimized. "
-1, Redundant: Hyperthreading. (Score:2, Insightful)
Not exactly the same. (Score:5, Informative)
Simultaneous multi-threading [15],[16],[17] uses hardware threads layered on top of a core to execute instructions from multiple threads. The hardware threads consist of all the different registers to keep track of a thread execution state. These hardware threads are also called logical processors. The logical processors can process instructions from multiple software thread streams simultaneously on a core, as compared to a CMP processor with hardware threads where instructions from only one thread are processed on a core.
SMT processors have an L1 cache per logical processor, while the L2 and L3 caches are usually shared. The L2 cache is usually on the processor, with the L3 off the processor. SMT processors usually have logic for ILP as well as TLP. The core is not only usually multi-issue for a single thread, but can simultaneously process multiple streams of instructions from multiple software threads.
1.4 Chip Multi-Threading
Chip multi-threading encompasses the techniques of CMP, CMP with hardware threads, and SMT to improve the instructions processed per cycle. To increase the number of instructions processed per cycle, CMT uses TLP [8] (as in Figure 6) as well as ILP (see Figure 5). ILP exploits parallelism within a single thread using compiler and processor technology to simultaneously execute independent instructions from a single thread. There is a limit to the ILP [1],[12],[18] that can be found and executed within a single thread. TLP can be used to improve on ILP by executing parallel tasks from multiple threads simultaneously [18],[19].
Re: (Score:3, Informative)
Re:-1, Redundant: Hyperthreading. (Score:5, Informative)
Tiny amount of background:
The hardest part when trying to run things in parallel is figuring out what you can run in parallel. Example: two operations (pseudocode): c=a+b and d=c+e. These two cannot be run in parallel, since you need the result of a+b before you can start c+e.
With modern operating systems there are many programs running at one time, and they may contain separate threads. One assumption of threading is that threads can run asynchronously to one another - you will not get a situation like the one above (okay, okay, I'm simplifying!).
With Hyperthreading, Intel gets the CPU to pretend to the OS that there are actually two of them. They duplicate the fetch and decode units, but only use one execute unit - which probably has several FPUs and Integer units. They rely on an FPU or an Integer unit being available to be able to get a performance benefit.
So Intel (up until now) has duplicated the fetch and decode stages, but still uses the same execute unit.
Sun's approach is to replicate the whole pipeline - fetch, decode, execute. Intel can't really scale Hyperthreading beyond two "processors", whereas Sun is aiming to execute 8, 16 or even more threads at one time.
Because of Intel's architecture they can't really scale Hyperthreading in this way - for lots of reasons. I'm sure other people can add them.
This really won't be of huge benefit to your Doom3 FPS, but for business apps (think J2EE) or message queues or science applications it will allow compute servers to scale better at heavy loads (i.e. when lots of threads are doing something that isn't IO bound, at the same time).
Games in general would *LOVE* this if done right. (Score:4, Interesting)
Now if you're working with a multithreaded CPU, one processor can be handling your CPU-bound graphics work (much of this is handed off to the video card anyhow), another can be doing sound/surround mixing, etc.
In an FPS with complicated AI, you could theoretically hand that off to CPU #2 while #1 is handling different things. Your graphics engine might not have ugly-mofo-alien #235 onscreen to render, but meanwhile he's watching you and looking for a boulder that will offer him good cover to snipe you from, instead of just sitting like a drone waiting for a computer-accurate headshot.
Now let's say that PCs are going multi-CPU. Maybe you don't need a single superpowerful processor, just a videocard and a few lower-powered processors. Processor #1 is handing off the environmental data, #2 is prepping it for rendering and shovelling your GPU full of vertices, #3 is playing pinpoint surround for that cricket chirping behind the rock on your far left, and #4 is doing AI for ugly alien mofo #287.
When I think about how games are advancing a lot can come down to interprocess communications and/or bandwidth limitations. The GPU still handles much of the video stuff so your CPU isn't really a bottleneck there in many cases, but as internet connections speed up then you're going to have MMORPGs, FPS's, and more chock full of "actors" that make up sight, sound, physics, and AI that could very well benefit from more CPU's rather than extra ticks on your overclocked single processor.
After all, eye-candy is only a part of realism. True realism is also very much about a multitude of things happening at once.
Re:-1, Redundant: Hyperthreading. (Score:4, Informative)
FYI, Hyperthreading is used on Intel because of the number of instructions in flight. During a context switch the processor takes the interrupt, saves state to the stack, and clears out the register file, rename table, and ROB, losing all the work that has not yet been written back to the register file. On an AMD processor this is not a huge deal, but on the P4 it is a problem: the frequent context switches that occur on modern systems cause the Intel design to lose the advantage of having many instructions in flight. AMD could realize performance gains too, just not as large, and at the cost of clock speed.
As for CMT: no, it is essentially Hyperthreading, but it could be a better, more costly, more effective design than Intel's simple one. Duplicating the whole pipeline gives you a multi-core chip, which is what Sun is doing with Niagara.
Re:-1, Redundant: Hyperthreading. (Score:4, Informative)
Not at all, because you can add d+e. For example:
Re:-1, Redundant: Hyperthreading. (Score:3, Interesting)
The moderators are on crack today. Intel's hyperthreading is more of a marketing gimmick (which you fell for). It provides, what, a few percent improvement in performance?
The fact is that Intel's Pentiums spend most of their time _not_doing_anything_at_all_. They just sit there waiting on data.
Sun's Niagara will be able to queue 32 threads simultaneously, with 8 of those threads computing (8 cores). My guess is that Sun's analysis showed that, on average, three threads are waiting on memory while one
Re:-1, Redundant: Hyperthreading. (Score:3, Interesting)
On AMD's side, the decoder has four outputs and, IIRC, AMD's average is 3 out of 4 - so again 75% of maximum.
By adding SMT, Intel gave the P4 the potential to keep all instruction ports busy and AMD plans to do the same next year... a single-core A64
Complementary concepts (Score:2, Insightful)
Since the Pentium 4 according to Intel [intel.com], but it's not a good question as that's Intel's trademarked term for their two-thread implementation of simultaneous multithreading [wikipedia.org]:
By contrast, Niagara is implementing Chip-level multiprocessing [wikipedia.org]:
it means a lot (Score:4, Informative)
Multi-threading comes with synchronization, semaphores, mutexes, etc.; once you know how to deal with them, it's easy.
Re:it means a lot (Score:2)
that's all well and good from a developer standpoint. but for the end user, the problem is going to be software availability.
witness AltiVec: Apple's vector processing [apple.com] promised to offer all sorts of wild and crazy performance gains, but the prospect of massive refactoring of existing codebases prevented it from being widely adopted. the result is that even though your spiffy new G5 has AltiVec under the hood, aside from Photoshop, there isn't really any soft
Re:it means a lot (Score:3, Informative)
Re:it means a lot (Score:3, Informative)
Re:it means a lot (Score:2, Informative)
Re:it means a lot (Score:2)
Nope. Niagara is SPARC and will run Solaris. Just like any other Sun server.
Re:it means a lot (Score:4, Informative)
(to dumb it down: no new opcodes, existing software will benefit if it can, break if it was poorly written to begin with.)
Re:it means a lot (Score:5, Insightful)
I know how to deal with them. It may seem easy at first, but it's actually very hard. Your program can run for days before a thread synchronization bug surfaces and it finally deadlocks. And since it's timing dependent, you can't reproduce it.
In principle there are rules to follow to avoid deadlocks and race conditions, but since they need to be manually enforced, there's always potential for error. At least with memory access bugs the hardware often shows you a segfault; with synchronization problems you usually don't even get that.
I've learned over the years that preemptive multithreading should be used only as a last resort, and even then, it's best to put exactly one synchronization point in the entire app. Self-contained tasks should be dispatched from that point and deliver their results back with little or no interaction with the other threads.
The worst thing you can do is randomly sprinkle a bunch of semaphores, mutexes, etc. all over your app.
Re:it means a lot (Score:2, Interesting)
I've been programming multithreaded code for a while, too, and giant locking (which is what you describe) is not very efficient much of the time for what I've done in the past. Linux and Solaris had this type of architecture for the kernel at one time and they've long since evolved away from that.
In short, how you use threads really depends on what you are trying to do. Hammeri
Re:it means a lot (Score:5, Interesting)
I've learned over the years that preemptive multithreading should be used only as a last resort, and even then, it's best to put exactly one synchronization point in the entire app. Self-contained tasks should be dispatched from that point and deliver their results back with little or no interaction with the other threads.
Exactly, and that's where design patterns come into play... many of these problems have been formally described in patterns you can follow to avoid them; for thread synchronization, you can use the Half-Sync/Half-Async pattern for example, and you can make a task an Active Object so it can deliver its own results...
Multi-threaded programming is hard, very hard; but you're not alone in thinking it's hard, and many researchers have formally described a bunch of rules you can follow... if you follow these rules, you can often eliminate most of the more complicated problems.
Re:it means a lot (Score:2)
There are several patterns that are very useful for safe multithreaded programming. Properly applied, they can greatly reduce the risks of multithreaded programming while reaping some of the benefits.
Multi-threaded programming in complex applica
Re:it means a lot (Score:2)
I disagree. Multithreading is very important for virtually every major business (think j2ee/server) app around, even GUI apps shouldn't be doing much work in the GUI thread.
However, you are right that you need to be very careful. I would recommend trying to cut your program into well-defined modules (OO programming, coming back again), and then attempt to make each of them as atomic as possible. Also, be careful of callbacks. It's best to only make callbacks with threads that you know for sure cannot be ho
Re:it means a lot (Score:5, Insightful)
Furthermore, we need to get rid of lazy programming. I'm tired of watching people write slow, lazy, inefficient (in terms of both memory space AND speed) code, and justify its existence with "it'll run fast on the new über-hyper-monkey-quadruple-bucky processors." Too many times, the problem is that you've got slow code running in every thread. If the code weren't so damned lazy, programmers would care more about nifty new hardware. We're not even coming close to using our current hardware to capacity. I've got a 1.2GHz processor with 1024MB of RAM, and my box chugs opening an M$ Word doc?! WTF?!
<soapbox>
Most programming in the world is very similar to the universal statu$ symbol in the U.S.A. - a big gas-guzzling SUV. It's not like Jane the Soccer Mom really needs 300hp to haul her kids and groceries around town. Similarly, we have lots of lazy code out there that doesn't do much of anything but consume resources and pollute the environment. A nifty new processor feature won't be noticed in the computing world because it won't get used anyway, just like Jane the Soccer Mom wouldn't notice 100 more horsepower. </soapbox>
Re:it means a lot (Score:3, Insightful)
Pure bullshit.
Re:it means a lot (Score:4, Interesting)
Multithreading? (Score:4, Funny)
..but wouldn't it be even better if it was hyper-multi-threading?
Re:Multithreading? (Score:2)
Hyper-multi-threading (Score:2)
Of course - all things are better when they're hyper*. Of course they tend to jump from A to B so quickly everything becomes blurry. Besides, jumping into hyper-multi-threading [isn't] like dusting crops, boy!
* See Compu-Global-Hyper-Mega-Net.
stackless.. (Score:5, Interesting)
the whole state pickling concept is pretty cool, and kind of throws threads all over..
Nothing new. (Score:3, Interesting)
From the lack of non-Sun-supplied buzz regarding this technology, it would appear that many people aren't finding it very exciting.
Re:Nothing new. (Score:3, Interesting)
Re:Nothing new. (Score:4, Interesting)
What's not exciting about a 32-way single board computer? You don't have to program for it any differently than a 32-way SMP mainframe. Solaris does the rest for you.
Like hell it is (Score:5, Informative)
More like none of Sun's competitors have anything which comes remotely close.
Notice how nearly a year after Sun announced this, intel finally admitted that clock frequency (i.e. gigahertz) isn't everything and that they'd be bringing out dual core processors?
Niagara has 8 cores each capable of 0-clock cycle latency switching between 4 different thread contexts.
Who else has working hardware and an OS to go that can do this?
Bigger version of an existing idea. (Score:3, Informative)
Re:really....? (Score:3, Informative)
I wasn't even assuming they have that much. The minimum you need to make this trick work is two independent contexts. That means two copies of all kernel-visible control and data registers. You would probably not need to save internal microstate unless you need it to restart a long-running instruction.
Anything else on top of that is optimization.
Bruce
Re:really....? (Score:4, Interesting)
Now, what I'd expected some sophisticated designs - the P4 among them - to do was to turn the extra parallel execution units into independent ones, so you could issue 2 or 3 instructions simultaneously and forgo all the branch prediction, etc.
Turns out that the P4 20 stage pipeline needed help. SMT/Hyperthreading was it.
Re:Bigger version of an existing idea. (Score:3, Informative)
Um. The whole point of blades is that they don't have the expensive interconnect. So, here's an architecture with the interconnect in the chip, which would make it cheaper. But it's still not clear to me that a big system built around one die that uses transistors really efficiently is going to be less expensive than eight smaller systems that don't use their transistors as efficiently.
Bruce
Re:Nothing new? Quite the opposite (Score:2)
It makes good sense to fix the bottleneck, because that's where the problem lies. Improving other parts which don't have problems, according to Amdahl, is A Bad Idea (:-))
Re:Nothing new. (Score:4, Informative)
Here is a very informative article [theinquirer.net] on the Niagara design.
For the lazy, some main points from the article:
- The Pentium 4 is a single core dual threaded CMT implementation. The Niagara has 8 cores and each core is capable of executing 4 threads.
- Depending on the model of the application that is executing, a programmer can choose to either utilize it as a single process with multiple threads each mapped on to a hardware thread or as multiple processes mapped to hardware threads. Apart from this, individual cores can also be assigned to an individual process, adding one more level of flexibility.
- Sharing data between threads on the same core is an L1 read and is extremely fast. Sharing data among threads on separate cores is an L2 read (since L2 is shared among cores)
- The new chip provides a lot of flexibility in terms of how the programmer wants to allocate hardware threads across software processes or threads. But it looks like programming on it will be difficult unless the operating system provides very good support for it.
Same thing SMP and such has meant (Score:5, Insightful)
This is nothing new. The decreasing returns and impending limits of single-threaded processing have been looming for a long time now.
Re:Same thing SMP and such has meant (Score:3, Insightful)
Not really. If you've been using SMP servers, what's different about SMP on a chip? Even if you only have a few dozen Apache processes running, Solaris will schedule them onto Niagara just like if you had lots of separate CPUs.
I don't think this is as big a change as people think. The main advantage will be a super-efficient CPU (50 to 60 watts, IIRC) but with the performance of many regular CPUs (hundreds of watts).
Re:Same thing SMP and such has meant (Score:2)
A crucial difference between processes and threads is that threads share (concurrently) the same data in the same address space. So having many processes is not at all like having multiple threads.
Re:Same thing SMP and such has meant (Score:3, Insightful)
With threads you have to synchronize access to common data that resides in the same memory address space. With processes you don't have to do this, as they get their own copy of the data at fork.
Re:Same thing SMP and such has meant (Score:3, Insightful)
I also imagine that if you can try to line up thread boundaries with object boundaries, the task of avoiding race conditions becomes almost trivial.
But then, I haven't done much serious multithreaded programming, so maybe I am missing the point. Someone set me strai
I am not really that impressed. (Score:2, Troll)
Re:I am not really that impressed. (Score:2)
Perhaps I'm misunderstanding you, but yes, I believe Irix supports this for its ccNUMA machines, where the 'distance' between CPUs (and associated memory) can vary quite a bit. If you've got a single system image running on 10 machines with 2 CPUs apiece, you really don't want it to treat every CPU as adjacent to every memory area.
INKEY and TSRs (Score:3, Funny)
marketing handwave (Score:2, Insightful)
Thruput ... (Score:2, Funny)
So it seems they invented a way to linearly scale performance. WOW! But maybe I misunderstood and the thing i
Efficiency and latency are mutual tradeoffs (Score:4, Interesting)
Re:Efficiency and latency are mutual tradeoffs (Score:2, Insightful)
Sun are putting in hardware to ensure that context switches are fast (possibly even one or two cycles); hopefull
Re:Efficiency and latency are mutual tradeoffs (Score:3, Informative)
Skimming the article, it doesn't even seem this processor bothers with out-of-order execution or register renaming; if it stalls, it just starts issuing from a different thread.
Re:Efficiency and latency are mutual tradeoffs (Score:3, Informative)
What DOES it mean to me? (Score:5, Insightful)
It worries me how many people just say "it means faster programs and doesn't take much more work". That mindset leads to lazy programmers who A - Can't optimize to save their jobs; and B - Don't actually understand what multithreading really does.
If you consider it easy, you've either just thrown great big global locks on most of your code, in which case your code doesn't actually parallelize well; or you've written what I refer to in my first sentence - Bugs that take an immense effort just to reproduce, nevermind track down and fix.
Re:What DOES it mean to me? (Score:2)
Bleah. FUD. (Score:3, Informative)
Bleah. FYI, I'm pretty sure GCC will reject this. Even the newest versions.
I just tested it with GCC 2.95.3, 3.2.1, 3.3, and 3.4.2, and it works fine. Of course, GCC is just ignoring the #pragma. I didn't know about OpenMP before this, but it does look like a good way to "optimize later" and have your code still compile with gcc. And you don't have to write and maintain two different versions separated by #ifdef, #else, #endif.
In the loosely coupled world? little (Score:2)
But that's beside the point - in the new world of very many cheap rackmount servers clustered together, loose coupling has taken over. Maybe if the world had turned out differently and was dominated by big servers, threading would have caught on.
Hyperthreading (Score:2, Interesting)
I was skeptical at first, and read some of those articles showing that some applications could actually run slower. But then I tried it for myself, and I have to admit I've been impressed. My main box is a dual-Xeon, each with Hyperthreading turned on. It appears to Linux as if I have four independent CPUs. A few numer
way to get it wrong (Score:5, Insightful)
As many others have already pointed out, Intel has had Hyperthreading available in Pentium 4 and Xeon CPUs for a couple of years now, which does exactly what the article is talking about.
As many others know, you know exactly nothing about what you are talking about. HT basically has two sets of registers, so that during a cache miss which would cause a bubble, the chip switches to the other set so it doesn't sit idle. Sun's chip, on the other hand, actually has multiple cores physically doing work at the same time. In fact, were it not for Intel's hideously flawed NetBurst architecture, the hideous hack that is HyperThreading would not provide any performance increase at all (in fact it doesn't so much provide an increase as negate a decrease...). For evidence, consider how many Pentium Ms have HT on them... Now I may not be fully correct, but I didn't volunteer a comment; I only posted to prevent the misinformation of others. You'll find more on ArsTechnica [arstechnica.com]. I'd link to the article but I can't find anything on their redesigned site.
Re:way to get it wrong (Score:3, Interesting)
Dude, you don't know anything either. P4's hyperthreading is a two-thread implementation of Simultaneous multithreading [wikipedia.org]. Niagara is an 8-way multiprocessor on a chip, and each processor has four-way simultaneous multithreading, exactly like the P4, just with more threads.
Regarding the amount of concurrent threads, it's basically equivalent to a 16-way Xeon server with hyperthreading enabled, but with much faster inter-process
Re:Hyperthreading (Score:3, Interesting)
Re:Hyperthreading (Score:5, Interesting)
"Intel has had Hyperthreading available in Pentium 4 and Xeon CPUs for a couple of years now, which does exactly what the article is talking about"
You are wrong. Period. Sun's CMT is several independent CPU cores on the same die with a huge bandwidth interconnect on-die. Intel's Hyperthreading is a gimmicky technology that has a very small real-world impact on performance.
And your personal "benchmarks" cite no numbers. I be trolled!
In most cases (Score:5, Informative)
The real issue is how large each thread can be (in the matter of memory) before it has to access data that is external to the thread. It may mean a lot for gamers running close-to-reality games, and also for those doing massive calculations.
The important thing is that developers have to be aware of the possibilities and limitations of this technology. Otherwise it would be like throwing a V8 into a Model T Ford. It is possible, but you would never be able to utilize the full power.
Another thing is that today's programming languages are limited. C (and C++) are advanced macro assemblers (not really bad, but they require a lot of the programmer). Java has thread support, but it's still the programmer (in most cases) who has to decide. Java is not very efficient either, which of course depends on the platform it's running on in combination with general optimizations. C# is Microsoft's bastard of Java and C++, with the same drawbacks as Java.
There are other languages, but most of them are either too obscure (like Erlang [erlang.org] or Prolog [gnu.org]) or too unknown.
The point is that a compiler should be able to break out separate threads and/or processes whenever possible to improve performance. It is of course necessary for the programmer to hint to the compiler where it may do this and where it shouldn't, but otherwise to keep the programmer blissfully unaware of the details. The details may depend on the actual system where the application is running: if the system is busy serving a bunch of users, then splitting the application into a bunch of threads is not really what you want, but if you are running alone (or almost alone) then the application should be permitted to allocate more resources. The key is that the allocation has to be dynamic.
Anybody knowing of any better languages?
CSP and libthread (Score:5, Informative)
Also see Brian W. Kernighan's "A Descent into Limbo" [vitanuova.com] and Dennis M. Ritchie's "The Limbo Programming Language" [vitanuova.com].
And of course Hoare's classic: Communicating Sequential Processes [usingcsp.com].
Now you can enjoy the power and beauty of the CSP model in Linux and other Unixes thanks to plan9port [swtch.com] including libthread and Inferno [vitanuova.com]; yes, it's all Open Source.
Old idea, but there are many ways to implement (Score:3, Informative)
Actually, the "best" way to implement the design is to split the thread state from the processing elements, then use locking on the elements. If two threads use independent processor elements, they should be simultaneously executable.
By having many instances of the more common processing elements, you would have many of the benefits of "multi-core" (in that you'd have parallel execution in the general case) but the design would be much simpler because you're working at the element level, not the core level.
Yes, none of this is really any different from hyperthreading, multi-core, or any other parallel schemes. All parallel schemes work in essentially the same way, because they all need to preserve states and lock resources.
Personally, I think REAL Parallel Processing CPUs that can handle multiple threads efficiently are already well-enough understood, they just have to become reasonably mainstream.
For myself, I am much more interested in AMD's HyperTransport bus technology, which looks like it could supplant most of the other bus designs out there.
My J2EE Application wil FLY (Score:4, Insightful)
For me, these new chips simply mean increased performance for deployed apps, without any modification to the app code.
Beauty!
whore for +1 informative (Score:2)
http://en.wikipedia.org/wiki/Simultaneous_multith
This is just Multi-core processing... (Score:5, Interesting)
One way to look at this is Sun maximizing their existing engineering efforts. However, by marketing it as some revolutionary feature advance, they're implying that they've done something new and exciting, as opposed to something that IBM is already doing and AMD and Intel are working on.
Beyond that, Sun and Fujitsu have a co-manufacturing and R&D deal now, confirming something those in the enterprise space have been saying for a long time - Fujitsu was making better Sun servers than Sun.
Plus Sun killed plans for the UltraSparc V, leaving only the Niagara. They have the Opteron line pushing up from below, and rapidly evaporating sales at the high end. They're resorting to marketing gibberish to add new features to the product line, while simultaneously offloading R&D and manufacturing to a partner.
Remind me again why Sun is in the hardware business?
Thanks,
Matt
Re:This is just Multi-core processing... (Score:5, Informative)
Despite the fact that Sun markets the UltraSparc IV as a single processor, software licensors like BEA and Oracle require that you license their software PER CORE. This means that a "4 processor" UltraSparc IV requires 8 processor licenses for Oracle or Weblogic.
Sun never tells you this, and consequently a lot of people suddenly get tagged with additional licenses if they get audited. BEYOND that, Sun tells people that they can "double their performance" by replacing all of their UltraSparc IIIs with UltraSparc IVs, not explaining that they are doubling their performance because they're doubling the number of processors, AND that doing that upgrade can put them on the hook for literally hundreds of thousands of dollars in software cost.
We've seen a number of companies get bitten by that, and it is downright disingenuous of Sun.
Thanks,
Matt
Re:This is just Multi-core processing... (Score:3, Insightful)
Oracle's been talking about reworking their licensing for a long time, and I agree licensing by core is sub-optimal. However, Oracle is being forthright that they charge by core, while Sun is _hiding_ the fact the USIV _is_ a multi-core processor.
Sure, Oracle are the ones charging per processor core, but Sun is the company that is selling this upgrade as a painless, cost-effective way to upgrade their infrastructure. I firmly believe they are being negligent in not warning customers that this is a multi
Re:This is just Multi-core processing... (Score:2)
Re:This is just Multi-core processing... (Score:5, Informative)
This is so because it functions as both a chip multi-processor and as a multi-threaded core (although I think I'd consider their multi-threaded cores to be fine-grained multi-threading rather than SMT, but that's a different story altogether). While IBM's POWER5 offers these same advantages (dual core, 2-way SMT cores), that is 4 threads per processor and not overly impressive.
The Niagara chip, in comparison to IBM's (and upcoming Intel dual-core/SMT designs), is based on the assumption that at higher clock speeds the CPU is rarely fully utilized (while the P4 can retire up to 3 instructions per cycle, many apps, particularly data-intensive apps, have an IPC of less than 1). The chip contains 8 cores with 4 threads executing on each core. This means 32 threads can run concurrently. Sure, no single thread will run as fast as it would on a NetBurst, Athlon 64, or POWER chip, but the combined throughput is enormous. Assuming each runs at ~1/4 the speed of its counterpart, that still gives us the throughput of 8 full-speed threads on a single chip. This is enormous, and will have a major impact on database design (I'm currently doing research on SMT's effect on database algorithms) and the payoffs can be great (as can standard prefetching).
I wouldn't recommend writing off CMT as a marketing buzzword, etc. The era of throughput computing is upon us; let's just hope Oracle and the other per-processor vendors change their licensing to something that correlates with TPC performance or some other metric that still has meaning. Otherwise companies are better off with a couple of massively parallel single-core chips that cost a whole lot more and draw a whole lot more power for the performance they produce.
Phil
Re:This is just Multi-core processing... (Score:3, Informative)
Sun's new chips (Score:3, Interesting)
And we're not talking about "HyperThreading" where one of the CPUs is virtual. It's a real execution unit.
And Intel and AMD are talking about dual-cores?
This should help save space and energy (both in the power needed to run the box, and in running the cooling system).
I buy the throughput / speed argument, but... (Score:4, Informative)
We have been at the "throughput is good enough" point for several years. In truth, this is old news really. I've got IRIX servers doing lots of things plenty fast, clipping along at a brisk 400MHz. There is not much you can't do with that, particularly when running a nice NUMA box.
I assume the same holds true for SUN gear. (I think their NUMA performance is a bit lower than the SGI, but I also don't think it matters for a lot of enterprise stuff.)
One application I have running, NUMA style, is MCAD. It's cool in that I have one copy of the software serving about 25 users, running on a nice NUMA server that never breaks. Admin is almost zero, except for the little things that happen from time to time --mostly user related.
However, I'm going to have to migrate this to a win32 platform. (And yes, it's gonna suck.) Why? The peak CPU power available to me is not enough for very large datasets and I cannot easily make the data portable for roaming users. (If there were more MCAD on Linux, I could do this, alas...)
Love it or hate it, the hot running, inefficient Intel / AMD cpu delivers more peak compute than any high I/O UNIX platform does. And it's cheap.
Sun is stating the obvious with the whole I/O thing, IMHO. In doing so, they avoid a core problem; namely, peak compute is not an option under commercial UNIX that needs to be. (And where it is, there are no applications, or the cost is just too high...)
This is where Linux is really important. It runs on the fast CPU's, but also is plenty UNIXey to allow smart admins to capture the benefits multi-user computing can provide.
Linux rocks, so does Solaris, IRIX, etc... The difference is that I can get IRIX & solaris applications.
WISH THAT WOULD CHANGE FASTER THAN IT CURRENTLY IS.
what MT means to developers (Score:4, Interesting)
What I've always done is code extremely defensively:
1. make the various threads data-independent enough to be free-running and only co-ordinate at the start and finish of a thread's activity. If necessary, re-architect everything in sight to make this possible.
2. when interaction is required, get a nice big coarse-grained lock and do everything that needs to be done and get it over with. profile it; there's a good chance it'll be over with quickly enough that it won't erase gains from parallelism or at least you can see what's taking so long and move it outside the lock.
3. do TONS of load testing with lots of big files and random data. Thread-related bugs can often hide in your code for years. Unlike a divide by zero or a null pointer reference, a thread bug won't necessarily raise any kind of hardware fault or exception. You have to go hunt for the bugs; they won't just pop up and say "hi, here I am."
4. If you have multiple people of various technical abilities working on the code, you should add a grep/sed script to your makefile to check for accidental introduction of mt-unsafe library calls (strtok, ctime, etc). Flag new monitors and locks for review. Warn about dumb things like using static or global variables.
5. Last trick is to use a layer to allow your program to be compiled for fork/wait, pthread_create/pthread_join, or just plain old co-routine execution (esp if there is a socket you can set to non-blocking). In addition to being able to test your code for correctness in various situations, you also have a baseline to see if the multithreading is an actual improvement.
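In code, points 1, 2, and 5 look roughly like this (a minimal Python sketch; the function and its arguments are made up for the example, and the "baseline" switch stands in for the compile-time layer described in point 5):

```python
# Free-running workers that only coordinate through one coarse lock,
# plus a switch that runs the same work sequentially as a baseline.
import threading

def process_chunks(chunks, use_threads=True):
    results = []
    lock = threading.Lock()                    # one coarse-grained lock (point 2)

    def worker(chunk):
        partial = [x * x for x in chunk]       # data-independent work (point 1)
        with lock:                             # grab the big lock briefly,
            results.extend(partial)            # do everything, get it over with

    if use_threads:
        threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
        for t in threads:
            t.start()
        for t in threads:                      # coordinate only at the finish
            t.join()
    else:                                      # sequential baseline (point 5)
        for c in chunks:
            worker(c)
    return sorted(results)

print(process_chunks([[1, 2], [3, 4]]))         # [1, 4, 9, 16]
print(process_chunks([[1, 2], [3, 4]], False))  # same answer either way
```

Running both paths and comparing the answers is exactly the correctness check point 5 is after.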
With the obvious exceptions for embarrassingly parallel algorithms, I've found that humdrum client/server or middleware stuff:
(a) gets only marginal gains from multithreading
(b) you have to work for it--profiling and tuning are still required to get top-notch performance
(c) efficient scaling beyond a handful of threads is the exception, not the rule. If you have more threads than CPUs, it's a simple fact that some of them are going to be waiting, and then your scaling is done.
developers (Score:2)
http://www.dcs.ed.ac.uk/home/stg/pub/P/par_alg.html [ed.ac.uk]
http://www.informit.com/articles/article.asp?p=366887&rl=1 [informit.com]
http://static.cray-cyber.org/Documentation/Vector_P.pdf [cray-cyber.org]
FTA/RAIABSF instead of FT/RAID? (Score:2)
"...the execution of multiple simultaneous tasks - even on a single processor."
"Chip multi-threading (CMT) brings to hardware the concept of multi-threading, similar to software multi-threading..."
"A CMT-enabled processor, similar to software multi-threading..."
"...CMT processors, software threads can be executed simultaneously..."
"...executes many software threads
Re:FTA/RAIABSF instead of FT/RAID? (Score:2)
That's where they are hoodwinking you. A single processor machine hasn't any ability to process multiple instructions in the same clock cycle. Physically impossible as there is only one path or pipeline through the actual 'core' of the processor. Only one instruction can be physically processed at any given point in time. It's a physical limitation of a single CP
Re:FTA/RAIABSF instead of FT/RAID? (Score:3, Insightful)
Ah, but when you have one physical 'chip' that actually consists of four processor cores, you *can* do four simultaneous tasks on one processor.
The advantage over good old fashioned SMP? Well, probably the interconnect is way faster, and if the cores all share some cache or something, sibling threads should see some benefit.
Hype-threading, fer sure (Score:2)
HyperRAM Technology! (Score:3, Interesting)
So maybe it would be more efficient for people to stop screwing around with new processor design ideas for a while and put a little effort into doubling the speed of memory access (and I don't mean by using level-whatever caches). Selling motherboards with a faster memory bus would be easy, just give it a cool-sounding name, kind of like Sega's "Blast Processing". Let's call it "HyperRAM Technology!"
Paper on multithreading (Score:3, Interesting)
http://www.annexia.org/tmp/multithreading.ps [annexia.org]
Rich.
Re:i dont use multithreading (Score:5, Insightful)
Well, if your data conversions are independent, multithreading might be of benefit to you if you have a hyperthreading processor.
And are you sure you are maxing the processor? Surely you have to wait for disk or network, at least some of the time. If more than 10% or so (number pulled from ass but based on empirical observations) of your time is spent waiting for latent devices, you can benefit from multithreading even on a plain vanilla single-CPU system with no hyperthreading.
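You can see the effect with a toy sketch like the one below (Python, with time.sleep standing in for a disk or network wait; all names are invented for the example). Even on one CPU, the waits overlap instead of summing:

```python
# Four tasks that each block for 50 ms. Run in threads, the blocked
# time overlaps, so total wall time is well under the 200 ms sum.
import threading
import time

def blocking_task(delay):
    time.sleep(delay)          # stands in for waiting on a latent device

def run_threaded(delays):
    start = time.monotonic()
    threads = [threading.Thread(target=blocking_task, args=(d,)) for d in delays]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.monotonic() - start

delays = [0.05] * 4
elapsed = run_threaded(delays)
print(elapsed < sum(delays))   # the waits overlap rather than summing
```

The same reasoning is why a CPU-bound conversion with no waits sees little or no gain from extra threads on a single CPU.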
Re:i dont use multithreading (Score:3, Insightful)
Well, if your data conversions are independent, multithreading might be of benefit to you if you have a hyperthreading processor.
Unless the two execution states overflow your L1 cache, in which case a HT CPU could run slower.
Re:i dont use multithreading (Score:2)
I was under the impression that most HT cores had double the L1 cache to ameliorate this exact problem, but perhaps I haven't done my research.
And if they DON'T, they sure as hell SHOULD.
Also, I've always thought it would be cool to allow a process/thread to allocate a certain portion of the cache for its exclusive use. I understand that this would make the caching logic insanely complicated, however.
Re:i dont use multithreading (Score:3, Informative)
Re:i dont use multithreading (Score:2)
That's precisely my point. One of those "other tasks" is another thread running your conversion. If you are maxing a CPU for long periods of time, you should be running on a dedicated box.
Wasn't the purpose of a DMA channel to move data from one location to another while the CPU is performing other tasks?
Uhh, yes... But cle
Re:i dont use multithreading (Score:3, Informative)
You're using the wrong word in there. Where you use the word "thread", you should be using the word "process" in UNIX parlance. What you are describing is "multi-tasking" in roughly a generic sense. It wasn't invented with the i386; try sometime in the 1960s (I'd have to crack open an OS book to be sure of the date).
Threads are different from processes.
Fundamentally, the standard definition of a thread is: "
Re:i dont use multithreading (Score:2, Interesting)
Also, be careful that you take the working set into consideration. Suppose you had one processor with 1M L2 cache but your problem needed 1.5M data to work on. It runs at
Re:How is this different (Score:2, Informative)
Re:Bad bad English headline (Score:2)
Wrong. (Score:2)
Can mean any of the following:
"What is"
"What does"
"What has"
So the title of this post can validly be read
"What is it Mean to Developers?"
So the answer can validly be stated as
"Yeah, it's real mean to developers".
Go on, look it up [techwr-l.com].
Re:Wrong. (Score:2)
Re:Wrong. (Score:2)
You might interpret "aloominum" as the correct pronunciation of "aluminium". Regardless of how you might then pronounce "condominium" or "plutonium", your interpretation is still valid.
As is mine.
Re:Wrong. (Score:3, Insightful)
When a person says something, the intended meaning is not ambiguous (unless you are a poet), although the words used to describe that meaning may be.
In this case it was intended to mean "What does it mean" and absolutely nothing else, your grammatical writhings notwithstanding.
Re:Count Them Out? (Score:2)
90% of the posters think Sun is going the way of the dodo.
Of course most of that 90% have never worked in an enterprise environment where they could see some of the advantages of Solaris, AIX, HP-UX, etc.
Re:Hyperthreading (Score:3, Insightful)
Go figure.
Re:Hyperthreading (Score:3, Funny)
Re:Hyperthreading (Score:2)