
Multithreading - What's it Mean to Developers? 357

sysadmn writes "Yet another reason not to count Sun out: Chip Multithreading. CMT, as Sun calls it, is the use of hardware to assist in the execution of multiple simultaneous tasks - even on a single processor. This excellent tutorial on Sun's Developer site explains the technology, and why throughput has become more important than absolute speed in the enterprise. From the intro: Chip multi-threading (CMT) brings to hardware the concept of multi-threading, similar to software multi-threading. ... A CMT-enabled processor, similar to software multi-threading, executes many software threads simultaneously within a processor on cores. So in a system with CMT processors, software threads can be executed simultaneously within one processor or across many processors. Executing software threads simultaneously within a single processor increases a processor's efficiency as wait latencies are minimized. "
  • by Anonymous Coward
    How long has hyperthreading been available on Intel CPUs?
    • by 1010011010 ( 53039 ) on Monday March 14, 2005 @02:00PM (#11934288) Homepage
      1.3 Simultaneous Multi-Threading

      Simultaneous multi-threading [15],[16],[17] uses hardware threads layered on top of a core to execute instructions from multiple threads. The hardware threads consist of all the different registers to keep track of a thread execution state. These hardware threads are also called logical processors. The logical processors can process instructions from multiple software thread streams simultaneously on a core, as compared to a CMP processor with hardware threads where instructions from only one thread are processed on a core.

      SMT processors have an L1 cache per logical processor, while the L2 and L3 cache is usually shared. The L2 cache is usually on the processor with the L3 off the processor. SMT processors usually have logic for ILP as well as TLP. The core is not only usually multi-issue for a single thread, but can simultaneously process multiple streams of instructions from multiple software threads.

      1.4 Chip Multi-Threading

      Chip multi-threading encompasses the techniques of CMP, CMP with hardware threads, and SMT to improve the instructions processed per cycle. To increase the number of instructions processed per cycle, CMT uses TLP [8] (as in Figure 6) as well as ILP (see Figure 5). ILP exploits parallelism within a single thread using compiler and processor technology to simultaneously execute independent instructions from a single thread. There is a limit to the ILP [1],[12],[18] that can be found and executed within a single thread. TLP can be used to improve on ILP by executing parallel tasks from multiple threads simultaneously [18],[19].

    • Re: (Score:3, Informative)

      Comment removed based on user account deletion
    • by johnhennessy ( 94737 ) on Monday March 14, 2005 @02:10PM (#11934406)
      There are some significant differences between hyperthreading and Sun's approach.

      Tiny amount of background:

      Hardest part when trying to run things in parallel is figuring out what you can run in parallel. Example: two operations (pseudocode): c=a+b and d=c+e. These two cannot be run in parallel, since you need the result of a+b before you can start d=c+e.

      With modern operating systems there are many programs running at one time, and they may contain separate threads. One assumption of threading is that threads can run asynchronously to one another - you will not get a situation like that above (okay, okay, I'm simplifying!).

      With Hyperthreading, Intel gets the CPU to pretend to the OS that there are actually two of them. They duplicate the fetch and decode units, but only use one execute unit - which probably has several FPUs and Integer units. They rely on an FPU or an Integer unit being available to be able to get a performance benefit.

      So Intel (up til now) have duplicated the fetch and decode, but still had the same execute unit.

      Sun's approach is to replicate the whole pipeline - fetch, decode, execute. Intel can't really scale hyperthreading beyond two "processors", whereas Sun is aiming to execute 8, 16 or even more at one time.

      Because of Intel's architecture they can't really scale hyperthreading in this way - for lots of reasons. I'm sure other people can add them.

      This really won't be of huge benefit to your Doom3 FPS, but for business apps (think J2EE) or message queues or science applications it will allow compute servers to scale better at heavy loads (i.e. when lots of threads are doing something that isn't IO bound, at the same time).

      • by phorm ( 591458 ) on Monday March 14, 2005 @03:20PM (#11935238) Journal
        Actually, when you think about it, an improved threading model would strongly benefit well-programmed games. Why? Because there are a lot of semi-related processes occurring. Sound, graphics, physics, etc. etc... they're all part of the game but work in very different ways.

        Now if you're working with a multithreaded CPU, one processor can be handling your CPU-bound graphics work (much of this is handed off to the video card anyhow), another can be doing sound/surround mixing, etc.

        In an FPS with complicated AI, you could theoretically hand that off to CPU #2 while #1 is handling different things. Your graphics engine might not have ugly-mofo-alien #235 onscreen to render, but meanwhile he's watching you and looking for a boulder that will offer him good cover to snipe you from, instead of just sitting like a drone waiting for a computer-accurate headshot.

        Now let's say that PCs are going multi-CPU. Maybe you don't need a single superpowerful processor, just a videocard and a few less powerful processors. Processor #1 is handing off the environmental data, #2 is prepping it for rendering and shovelling your GPU full of vertices, #3 is playing pinpoint surround for that cricket chirping behind the rock on your far left, and #4 is doing AI for ugly alien mofo #287.

        When I think about how games are advancing a lot can come down to interprocess communications and/or bandwidth limitations. The GPU still handles much of the video stuff so your CPU isn't really a bottleneck there in many cases, but as internet connections speed up then you're going to have MMORPGs, FPS's, and more chock full of "actors" that make up sight, sound, physics, and AI that could very well benefit from more CPU's rather than extra ticks on your overclocked single processor.

        After all, eye-candy is only a part of realism. True realism is also very much about a multitude of things happening at once.
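
        A minimal sketch of that one-thread-per-subsystem idea (the subsystem names and jobs here are made up for illustration, not from any real engine): each subsystem pulls work from its own queue, so render, audio, and AI can proceed on separate cores at once.

        ```python
        import queue
        import threading

        # Each subsystem runs in its own thread, draining its own job queue.
        def run_subsystem(name, jobs, results):
            while True:
                job = jobs.get()
                if job is None:          # shutdown sentinel
                    break
                results.put((name, job))  # stand-in for real subsystem work

        results = queue.Queue()
        subsystems = {}
        for name in ("render", "audio", "ai"):
            jobs = queue.Queue()
            t = threading.Thread(target=run_subsystem, args=(name, jobs, results))
            t.start()
            subsystems[name] = (t, jobs)

        # One frame's worth of work, dispatched to all subsystems at once.
        subsystems["render"][1].put("draw scene")
        subsystems["audio"][1].put("mix cricket chirp")
        subsystems["ai"][1].put("plan cover for alien #287")

        done = [results.get() for _ in range(3)]

        # Shut everything down cleanly.
        for _, jobs in subsystems.values():
            jobs.put(None)
        for t, _ in subsystems.values():
            t.join()
        ```

        On a CMT processor the scheduler can map each of these software threads onto its own hardware thread; on a single-core machine they just interleave.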
      • by Anonymous Coward on Monday March 14, 2005 @04:09PM (#11935841)
        Hyperthreading DOES NOT HAVE ADDITIONAL FETCH and DECODE units; it just permits 2 different threads to occupy the reorder buffer, reducing the penalties of a context switch. So instead of a context switch, the CPU fools the OS into thinking it can issue two threads of instructions simultaneously. Fetch is designed to switch between instruction memory locations on a turn system, so it starts work on one thread and then in the next cycle begins work on the other. It keeps 2 separate rename tables, one for each thread, and keeps track of which thread a given instruction belongs to. So essentially execute is the same; even the reorder buffer is almost the same, except it tracks which thread an op is running on. The tricky part is getting the front end to toggle correctly between the 2 regfiles and the 2 rename tables. Fetching from different threads of control is also tricky; I think some sort of queue is used.

        Fyi, hyperthreading is used on Intel because of the number of instructions in flight. During a context switch the processor interrupts, saves to the stack, and clears out the regfile, rename table, and ROB, losing all the work accomplished that is not written back to the regfile. On an AMD processor this is not a huge deal, but on the P4 this is a problem because the frequent context switches that occur on modern systems cause the Intel design to lose the advantage of having many instructions in flight. AMD could realize performance gains too, just not as much and at the cost of clockspeed.

        As for CMT, no, it is essentially hyperthreading, but could be a better, more costly, more effective design than Intel's simple one. Duplication of a pipeline is a multicore chip, which Sun is doing with Niagara.
      • by lachlan76 ( 770870 ) on Tuesday March 15, 2005 @04:44AM (#11941454)
        Hardest part when trying to run things in parallel is figuring out what you can run in parallel. Example: two operations (pseudocode): c=a+b and d=c+e. These two cannot be run in parallel, since you need the result of a+b before you can start d=c+e.

        Not at all, because you can add d+e. For example:
        No multithreading:
        add a,b
        add d,a
        add d,e

        With multithreading:
        First thread:
        add a,b
        ;Make sure other thread is finished
        add a,d

        Second thread:
        add d,e
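
        The same split can be sketched in Python (plain ints standing in for the registers, an Event standing in for the "make sure other thread is finished" step):

        ```python
        import threading

        a, b, d, e = 1, 2, 3, 4
        partial = {}
        other_done = threading.Event()

        def second_thread():
            # d+e has no dependency on a+b, so it can run in parallel
            partial["de"] = d + e
            other_done.set()

        t = threading.Thread(target=second_thread)
        t.start()

        ab = a + b            # runs concurrently with the other thread
        other_done.wait()     # make sure other thread is finished
        total = ab + partial["de"]
        t.join()
        ```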
    • by Anonymous Coward

      The moderators are on crack today. Intel's hyperthreading is more of a marketing gimmick (which you fell for). It provides, what, a few percent improvement in performance?

      The fact is that Intel's Pentiums spend most of their time _not_doing_anything_at_all_. They just sit there waiting on data.

      Sun's Niagara will be able to queue 32 threads simultaneously, with 8 of those threads computing (8 cores). My guess is that Sun's analysis showed that, on average, three threads are waiting on memory while one
      • Actually, Intel's research (before HT became reality) said that on average, the instruction decoder was issuing just under 2.5 instructions per tick out of a maximum of 3... so instruction decoder throughput in single-threaded mode is about 75% of maximum.

        On AMD's side, the decoder has quadruple outputs and IIRC, AMD's average is 3 out of 4 so again 75% from maximum.

        By adding SMT, Intel gave the P4 the potential to keep all instruction ports busy and AMD plans to do the same next year... a single-core A64
    • Since the Pentium 4 according to Intel [intel.com], but it's not a good question as that's Intel's trademarked term for their two-thread implementation of simultaneous multithreading [wikipedia.org]:

      Simultaneous multithreading allows multiple threads to execute different instructions in the same clock cycle, using the execution units that the first thread left spare.

      By contrast, Niagara is implementing Chip-level multiprocessing [wikipedia.org]:

      CMP is SMP implemented on a single VLSI integrated circuit. Multiple processor cores (multicore) typically

  • it means a lot (Score:4, Informative)

    by Anonymous Coward on Monday March 14, 2005 @01:43PM (#11934042)
    I am a developer, mainly in C, and I did a lot of programming on QNX4 with multi-threading (even if QNX4's implementation is not *really* threads); now I am doing it in Precise/MQX.
    Multi-threading comes with synchronization, semaphores, mutexes, etc.; once you know how to deal with them, it's easy.
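
    The basic pattern with a mutex looks like this (a Python sketch; the poster's C/QNX code would use the analogous primitives, e.g. pthread mutexes):

    ```python
    import threading

    counter = 0
    lock = threading.Lock()      # the mutex protecting `counter`

    def worker():
        global counter
        for _ in range(100_000):
            with lock:           # acquire/release around the shared update
                counter += 1

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    ```

    Without the lock, the read-modify-write on `counter` can interleave between threads and lose updates.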
    • once you know how to deal with them, it's easy

      that's all well and good from a developer standpoint. but for the end user, the problem is going to be software availability.

      witness altivec: apple's vector processing [apple.com] promised to offer all sorts of wild and crazy performance gains, but the prospect of massive refactoring of existing codebases prevented it from being widely adopted. the result is that even though your spiffy new g5 has altivec under the hood, aside from photoshop, there isn't really any software that takes advantage of it

      • Re:it means a lot (Score:3, Informative)

        by BigSven ( 57510 )
        The CVS version of GIMP has Altivec support. That makes it two applications already ;)
      • Re:it means a lot (Score:3, Informative)

        by moonbender ( 547943 )
        I'm not an Apple geek, but from what I read here, OS X itself makes use of AltiVec everywhere it makes sense. That's one application everyone will run 100% of the time. Also, Apple's libraries, which many applications use, are optimised for AltiVec. From the sounds of it, AltiVec is used more than its x86 counterparts.
        • Re:it means a lot (Score:2, Informative)

          by Anonymous Coward
          Recent versions of the GNU Compiler Collection, IBM Visual Age Compiler and other compilers provide intrinsics to access AltiVec instructions directly from C and C++ programs.
      • the problem is going to be software availability.

        Nope. Niagara is SPARC and will run Solaris. Just like any other Sun server.
        • Re:it means a lot (Score:4, Informative)

          by MORTAR_COMBAT! ( 589963 ) on Monday March 14, 2005 @03:22PM (#11935262)
          exactly, and unlike Altivec, there are no "special instructions" to get benefits from Niagara -- just synchronization, deadlock, and such parallel processing issues which most enterprise software is already aware of.

          (to dumb it down: no new opcodes, existing software will benefit if it can, break if it was poorly written to begin with.)
    • Re:it means a lot (Score:5, Insightful)

      by Waffle Iron ( 339739 ) on Monday March 14, 2005 @02:06PM (#11934356)
      Multi-threading comes with synchronization, semaphore, mutex, etc, once you know how to deal with them, it's easy.

      I know how to deal with them. It may seem easy at first, but it's actually very hard. Your program can run for days before a thread synchronization bug surfaces and it finally deadlocks. And since it's timing dependent, you can't reproduce it.

      In principle there are rules to follow to avoid deadlocks and race conditions, but since they need to be manually enforced, there's always potential for error. At least with memory access bugs the hardware often shows you a segfault; with synchronization problems you usually don't even get that.

      I've learned over the years that preemptive multithreading should be used only as a last resort, and even then, it's best to put exactly one synchronization point in the entire app. Self-contained tasks should be dispatched from that point and deliver their results back with little or no interaction with the other threads.

      The worst thing you can do is randomly sprinkle a bunch of semaphores, mutexes, etc. all over your app.
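
      The "exactly one synchronization point" style the parent describes can be sketched as a single work queue feeding self-contained workers (a hypothetical illustration of the idea, not the poster's actual design):

      ```python
      import queue
      import threading

      work = queue.Queue()       # the single synchronization point for input
      results = queue.Queue()    # ...and for output

      def worker():
          while True:
              task = work.get()
              if task is None:   # shutdown sentinel
                  break
              # Self-contained task: touches no shared state, just returns a result.
              results.put(task * task)

      threads = [threading.Thread(target=worker) for _ in range(4)]
      for t in threads:
          t.start()
      for n in range(10):
          work.put(n)
      for _ in threads:
          work.put(None)
      for t in threads:
          t.join()

      total = sum(results.get() for _ in range(10))
      ```

      Because all inter-thread traffic goes through the two queues, there is no lock ordering to get wrong and no chance of the scattered-semaphore deadlocks described above.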

      • Re:it means a lot (Score:2, Interesting)

        by fitten ( 521191 )
        That's fine for producer/consumer type problems, but there are other types of problems that don't lend themselves to that model.

        I've been programming multithreaded code for a while, too, and giant locking (which is what you describe) is not very efficient much of the time for what I've done in the past. Linux and Solaris had this type of architecture for the kernel at one time and they've long since evolved away from that.

        In short, how you use threads really depends on what you are trying to do. Hammeri
      • Re:it means a lot (Score:5, Interesting)

        by leonmergen ( 807379 ) * <lmergenNO@SPAMgmail.com> on Monday March 14, 2005 @02:17PM (#11934507) Homepage

        I've learned over the years that preemptive multithreading should be used only as a last resort, and even then, it's best to put exactly one synchronization point in the entire app. Self-contained tasks should be dispatched from that point and deliver their results back with little or no interaction with the other threads.

        Exactly, and that's where design patterns come into play... many of these problems have been formally described in patterns you can follow to avoid them; for thread synchronization, you can use the Half-Sync/Half-Async pattern, for example, and you can make a task an Active Object so it can deliver its own results...

        Multi-threaded programming is hard, very hard; but you're not the only one who thinks it's hard, and many researchers have formally described a bunch of rules you can follow... if you follow these rules, you often enough eliminate most of the more complicated problems.
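
        A minimal Active Object sketch (hypothetical class and method names, just to show the shape of the pattern): callers enqueue method requests and get futures back, while one private thread executes everything, so the object's state needs no external locking.

        ```python
        import queue
        import threading
        from concurrent.futures import Future

        class ActiveCounter:
            """All mutation happens on one private thread; callers get Futures."""
            def __init__(self):
                self._value = 0
                self._requests = queue.Queue()
                self._thread = threading.Thread(target=self._run, daemon=True)
                self._thread.start()

            def _run(self):
                # The scheduler loop: executes queued operations one at a time.
                while True:
                    fn, fut = self._requests.get()
                    if fn is None:        # shutdown sentinel
                        break
                    fut.set_result(fn())

            def increment(self):
                fut = Future()
                def op():
                    self._value += 1
                    return self._value
                self._requests.put((op, fut))
                return fut                # caller decides when to wait

            def stop(self):
                self._requests.put((None, Future()))
                self._thread.join()

        counter = ActiveCounter()
        futures = [counter.increment() for _ in range(5)]
        final = futures[-1].result()      # the object "delivers its own results"
        counter.stop()
        ```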

        • Following design patterns like they're some kind of formal rules to always obey is a sure way to disaster. Design patterns are a kind of generalized solution to a particular problem, and they are intended as a starting point for application to your specific problem.

          There are several patterns that are very useful for safe multithreaded programming. Properly applied, they can greatly reduce the risks of multithreaded programming while reaping some of the benefits.

          Multi-threaded programming in complex applica


      • I disagree. Multithreading is very important for virtually every major business (think j2ee/server) app around, even GUI apps shouldn't be doing much work in the GUI thread.

        However, you are right that you need to be very careful. I would recommend trying to cut your program into well defined modules (OO programming, coming back again), and then attempt to make each of them as atomic as possible. Also, be careful of callbacks. It's best to only make callbacks with threads that you know for sure cannot be ho
      • Re:it means a lot (Score:5, Insightful)

        by guitaristx ( 791223 ) on Monday March 14, 2005 @02:34PM (#11934685) Journal
        As far as threading is concerned, one of the few languages I've dealt with that makes mutexes, semaphores, etc. easy to deal with is Java. Most other languages bury the stuff too deep into the proprietary APIs to make them useful. Consider multithreading in win32 [microsoft.com]. We need better programming languages before we can ever start reaping the benefits of good multithreading hardware.

        Furthermore, we need to get rid of lazy programming. I'm tired of watching people write slow, lazy, inefficient (in terms of both memory space AND speed) code, and justify its existence with "it'll run fast on the new über-hyper-monkey-quadruple-bucky processors." Too many times, the problem is that you've got slow code running in every thread. If the code wasn't so damned lazy, programmers would care more about nifty new hardware. We're not even coming close to using our current hardware to capacity. I've got a 1.2GHz processor with 1024MB of RAM, and my box chugs opening an M$ Word doc?! WTF?!

        <soapbox>
        Most programming in the world is very similar to the universal statu$ symbol in the U.S.A. - a big gas-guzzling SUV. It's not like Jane the Soccer Mom really needs 300hp to haul her kids and groceries around town. Similarly, we have lots of lazy code out there that doesn't do much of anything but consume resources and pollute the environment. A nifty new processor feature won't be noticed in the computing world because it won't get used anyway, just like Jane the Soccer Mom wouldn't notice 100 more horsepower. </soapbox>
        • Re:it means a lot (Score:3, Insightful)

          by Homology ( 639438 )
          As far as threading is concerned, one of the few languages I've dealt with that makes mutexes, semaphores, etc. easy to deal with is Java. Most other languages bury the stuff too deep into the proprietary APIs to make them useful. Consider multithreading in win32 [microsoft.com]. We need better programming languages before we can ever start reaping the benefits of good multithreading hardware.

          Pure bullshit.

        • Re:it means a lot (Score:4, Interesting)

          by fupeg ( 653970 ) on Monday March 14, 2005 @04:14PM (#11935898)
          As far as threading is concerned, one of the few languages I've dealt with that makes mutexes, semaphores, etc. easy to deal with is Java
          Umm, ok. Java has always made synchronization easy to use. It's never been particularly straightforward, because of Java's interpretive nature and all the wonderful JIT liberties allowed for JVMs. Just look at all the confusion around double-checked locking [javaworld.com]. JDK 1.5 is the first version of Java to formally expose semaphores. Now they are "easy" to use, just like synchronization. The verdict is still out on how easy they are to understand.
          Furthermore, we need to get rid of lazy programming.
          Oh brother, here we go again. Let me guess, you could probably write a multi-threaded database server that supported fully ATOMIC operations and transactionality, would only need 4K of memory, and would be blazingly fast on a 486SX machine, right? Over-optimization pundits are the worst, even worse than design pattern pundits. This has been discussed [slashdot.org] many times before. Fast, buggy code has zero value.
  • by PopeAlien ( 164869 ) on Monday March 14, 2005 @01:44PM (#11934049) Homepage Journal
    I don't mean to look a gift horse in the mouth..

    ..but wouldn't it be even better if it was hyper-multi-threading?
    • active-hyper-multi-threading-gold, baby.

    • ..but wouldn't it be even better if it was hyper-multi-threading?

      Of course - all things are better when they're hyper*. Of course they tend to jump from A to B so quickly everything becomes blurry. Besides, jumping into hyper-multi-threading [isn't] like dusting crops, boy!

      * See Compu-Global-Hyper-Mega-Net.
  • stackless.. (Score:5, Interesting)

    by joeldg ( 518249 ) on Monday March 14, 2005 @01:44PM (#11934053) Homepage
    this makes me wonder what the effect would be on something like stackless python [stackless.com]?
    the whole state pickling concept is pretty cool, and kind of throws threads all over..
  • Nothing new. (Score:3, Interesting)

    by bigtallmofo ( 695287 ) on Monday March 14, 2005 @01:46PM (#11934083)
    This is Sun's Niagara Design [com.com]. The more I learn about it, the more I think that it's nothing that exciting.

    From the lack of non-Sun-supplied buzz regarding this technology, it would appear that many people aren't finding it very exciting.
    • Re:Nothing new. (Score:3, Interesting)

      by zenslug ( 542549 ) *
      The tech is actually pretty good, although it really depends on your application. If you want to run something single-threaded, then the Niagara chip is not going to impress you at all. The speed of the chip is not where its power is. Understand that the name is rather appropriate (i.e. like a river/waterfall): it is not very fast comparatively, but it can handle large volumes very well. Think massively multithreaded uses.
    • Re:Nothing new. (Score:4, Interesting)

      by SunFan ( 845761 ) on Monday March 14, 2005 @02:20PM (#11934543)

      What's not exciting about a 32-way single board computer? You don't have to program for it any differently than a 32-way SMP mainframe. Solaris does the rest for you.
    • Like hell it is (Score:5, Informative)

      by turgid ( 580780 ) on Monday March 14, 2005 @02:35PM (#11934700) Journal
      From the lack of non-Sun-supplied buzz regarding this technology, it would appear that many people aren't finding it very exciting.

      More like none of Sun's competitors have anything which comes remotely close.

      Notice how nearly a year after Sun announced this, Intel finally admitted that clock frequency (i.e. gigahertz) isn't everything and that they'd be bringing out dual core processors?

      Niagara has 8 cores each capable of 0-clock cycle latency switching between 4 different thread contexts.

      Who else has working hardware and an OS to go that can do this?

      • My Pentium 4 processor has 2 threads. Linux treats them as 2 processors, and makes full use of them. Yes, it's cool to have 8 cores and 4 threads per core. But this is all about price/performance. An 8-core chip that shares the cache, VM infrastructure, and memory interface between all cores is going to work best for CPU-intensive tasks that are not also I/O or memory-intensive and can be partitioned into multiple threads easily. Not photorealistic rendering, for example, that requires too much data. And i
    • Because the current big bottleneck is memory latency, either vendors will add more cores and use the memory bandwidth, or they'll scale more and more poorly.

      It makes good sense to fix the bottleneck, because that's where the problem lies. Improving other parts which don't have problems, according to Amdahl, is A Bad Idea (:-))
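
      Amdahl's observation fits in a couple of lines (generic illustrative numbers, not the poster's): if only a fraction p of the runtime benefits from a speedup s, the overall speedup is 1 / ((1 - p) + p / s).

      ```python
      def amdahl_speedup(p, s):
          """Overall speedup when fraction p of the work is sped up by factor s."""
          return 1.0 / ((1.0 - p) + p / s)

      # If memory stalls are 60% of the time and extra cores hide them 8x over,
      # the overall win is capped well below 8x by the untouched 40%.
      hide_stalls = amdahl_speedup(0.6, 8.0)

      # Speeding up a part that is only 5% of the time barely moves the needle,
      # which is why improving the non-bottleneck parts is A Bad Idea.
      tweak_fast_part = amdahl_speedup(0.05, 8.0)
      ```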

    • Re:Nothing new. (Score:4, Informative)

      by g0_p ( 613849 ) on Monday March 14, 2005 @02:56PM (#11934913)
      Though in theory the Niagara design is another CMT implementation, it's the implementation that is the crux here. CMT has been worked on in academia for 6-8 years, I think.

      Here is a very informative article [theinquirer.net] on the Niagara design.

      For the lazy some main points from the article.
      - The Pentium 4 is a single core dual threaded CMT implementation. The Niagara has 8 cores and each core is capable of executing 4 threads.
      - Depending on the model of the application that is executing, a programmer can choose to either utilize it as a single process with multiple threads each mapped on to a hardware thread or as multiple processes mapped to hardware threads. Apart from this, individual cores can also be assigned to an individual process, adding one more level of flexibility.
      - Sharing data between threads on the same core is an L1 read and is extremely fast. Sharing data among threads on separate cores is an L2 read (since L2 is shared among cores)
      - The new chip provides a lot of flexibility in terms of how the programmer wants to allocate hardware threads across software processes or threads. But it looks like programming on it will be difficult unless the operating system provides very good support for it.
  • by Soong ( 7225 ) on Monday March 14, 2005 @01:48PM (#11934109) Homepage Journal
    It means we're going to have to learn to program in parallel. We're going to have to parallelize our data processing and we're going to have to learn synchronization and locking methods.

    This is nothing new. The decreasing returns and impending limits of single-threaded processing have been looming for a long time now.
    • It means we're going to have to lean to program in parallel.

      Not really. If you've been using SMP servers, what's different about SMP on a chip? Even if you only have a few dozen Apache processes running, Solaris will schedule them onto Niagara just like if you had lots of separate CPUs.

      I don't think this is as big a change as people think. The main advantage will be a super-efficient CPU (50 to 60 watts, IIRC) but with the performance of many regular CPUs (hundreds of watts).
      • Not really. If you've been using SMP servers, what's different about SMP on a chip? Even if you only have a few dozen Apache processes running, Solaris will schedule them onto Niagara just like if you had lots of separate CPUs.

        A crucial difference between processes and threads is that threads share (concurrently) the same data in the same address space. So having many processes is not anything like having multiple threads.

    • I imagine that multithreading is a situation where OOP finally begins to really shine, as the amount of code factoring involved would make it much easier to keep track of when and where you need to be frotzing with synchronization and locking.

      I also imagine that if you can try to line up thread boundaries with object boundaries, the task of avoiding race conditions becomes almost trivial.

      But then, I haven't done much serious multithreaded programming, so maybe I am missing the point. Someone set me strai
  • This is kind of a trivial optimization! Basically, you extend your pthreads library so all the threads within a single shared memory application schedule themselves on cores on the same chip. Big deal! Now if it could figure out how to schedule processes on "adjacent" cpus to optimize their common memory accesses, I'd be more impressed.
    • Perhaps I'm misunderstanding you, but yes, I believe Irix supports this for its ccNUMA machines, where the 'distance' between CPUs (and associated memory) can vary quite a bit. If you've got a single system image running on 10 machines with 2 CPUs apiece, you really don't want it to treat every CPU as adjacent to every memory area.

  • by Anonymous Coward on Monday March 14, 2005 @01:51PM (#11934177)
    Can I still use INKEY in my basic programs? Will multi-threading make it more efficient? Can I actually run a second program on my DOS PC without having to force it as a TSR?
  • marketing handwave (Score:2, Insightful)

    by klossner ( 733867 )
    "throughput has become more important than absolute speed in the enterprise"
    I've been seeing this quote in press releases for three decades. It has always meant "we can't compete on performance so we're going to explain why performance isn't important anymore." The few times my management bought that story, they came to regret it.
  • Thruput ... (Score:2, Funny)

    by foobsr ( 693224 )
    Throughput computing maximizes the throughput per processor and per system. So a processor with multiple cores will be able to increase the throughput by the number of cores per processor. This increase in performance comes at a lower cost, fewer systems, reduced power consumption, and lower maintenance and administration, with increase in reliability due to fewer systems. (from TFA, emphasis mine)

    So it seems they invented a way to linearly scale performance. WOW! But maybe I misunderstood and the thing i
  • by squarooticus ( 5092 ) on Monday March 14, 2005 @01:58PM (#11934267) Homepage
    Not sure I buy that this "increases a processor's efficiency as wait latencies are minimized". It seems to me that decreasing latency reduces efficiency, because you spend a greater percentage of your cycles changing state (overhead) instead of doing useful work. This is why realtime OSes aren't the norm: they reduce latencies to critical maximums, but at the cost of overall throughput.
    • A processor's wait latency is the time it spends doing absolutely nothing while it waits for an external device to catch up. If your RAM latency is around 100 cycles, and context switching costs you 100 cycles, you're right in saying that efficiency goes down. On the other hand, if each context switch costs you 10 cycles, you can context switch nine times before you've started to lose efficiency.

      Sun are putting in hardware to ensure that context switches are fast (possibly even one or two cycles); hopefull
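
      The arithmetic above, made explicit (same assumed round numbers): with a 100-cycle memory stall and a 10-cycle switch, up to nine switches fit inside one stall before switching costs more than waiting did.

      ```python
      stall_cycles = 100   # cycles the CPU would otherwise spend waiting on RAM
      switch_cycles = 10   # assumed cost of one hardware context switch

      # Switches that fit inside one stall with the switch cost itself paid for.
      break_even_switches = stall_cycles // switch_cycles - 1

      # If the switch cost as much as the stall, switching would gain nothing.
      no_gain = (stall_cycles // stall_cycles - 1) == 0
      ```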

    • Actually, that's the whole point of this technology: there is no expensive context switch between threads. The processor goes along, issuing instructions from several threads, and when it gets a cache miss for one of the threads, it just keeps chugging along, issuing instructions from the other threads.

      Skimming the article, it doesn't even seem this processor bothers with out-of-order execution or register renaming; if it stalls, it just starts issuing from a different thread.

  • by pla ( 258480 ) on Monday March 14, 2005 @02:00PM (#11934281) Journal
    It means "Difficult to reproduce bugs".

    It worries me how many people just say "it means faster programs and doesn't take much more work". That mindset leads to lazy programmers who A - Can't optimize to save their jobs; and B - Don't actually understand what multithreading really does.

    If you consider it easy, you've either just thrown great big global locks on most of your code, in which case your code doesn't actually parallelize well; or you've written what I refer to in my first sentence - Bugs that take an immense effort just to reproduce, nevermind track down and fix.
    • If the tasks are unrelated and share no data, then you can get them to run in parallel with only a small decrease in reliability/increase in cost. This is a typical case for a web application serving multiple independent clients (you are reading your mail, and I am reading mine).
  • First of all, threading has always been a great way for programmers to get in over their heads, create very tough bugs, and generally waste development time.

    But that's beside the point - in the new world of very many cheap rackmount servers clustered together, loose coupling has taken over. Maybe if the world had turned out differently and was dominated by big servers, threading would have caught on.

  • Hyperthreading (Score:2, Interesting)

    As many others have already pointed out, Intel has had Hyperthreading available in Pentium 4 and Xeon CPUs for a couple of years now, which does exactly what the article is talking about.

    I was skeptical at first, and read some of those articles showing that some applications could actually run slower. But then I tried it for myself, and I have to admit I've been impressed. My main box is a dual-Xeon, each with Hyperthreading turned on. It appears to Linux as if I have four independent CPUs. A few numer
    • by CaptainPinko ( 753849 ) on Monday March 14, 2005 @02:20PM (#11934538)

      As many others have already pointed out, Intel has had Hyperthreading available in Pentium 4 and Xeon CPUs for a couple of years now, which does exactly what the article is talking about.

      As many others know, you know exactly nothing about what you are talking about. HT basically has two sets of registers so that during a cache miss, which would cause a bubble, the chip switches to the other set so it doesn't sit idle. Sun's chips, on the other hand, actually have multiple cores physically doing work at the same time. In fact, were it not for Intel's hideously flawed NetBurst architecture, the hideous hack that is HyperThreading would not provide any performance increase at all (in fact it doesn't so much provide an increase as negate a decrease...). Now I may not be fully correct, but I didn't volunteer a comment; I only posted to prevent the misinformation of others. You'll find more on ArsTechnica [arstechnica.com]. I'd link to the article but I can't find anything on their redesigned site.

      • by at_18 ( 224304 )
        As many others know, you know exactly nothing about what you are talking about.

        Dude, you don't know anything either. P4's hyperthreading is a two-threads implementation of Simultaneous multithreading [wikipedia.org]. Niagara is an 8-way multiprocessor on a chip, and each processor has four-way simultaneous multithreading, exactly like the P4, just with more threads.

        Regarding the amount of concurrent threads, it's basically equivalent to a 16-way Xeon server with hyperthreading enabled, but with much faster inter-process
    • Re:Hyperthreading (Score:3, Interesting)

      by PitaBred ( 632671 )
      Try make -j5 or -j6. Tends to have better results than the -j4 on my dual Xeon rig. And yes, I have benchmarked it.
    • Re:Hyperthreading (Score:5, Interesting)

      by SunFan ( 845761 ) on Monday March 14, 2005 @02:27PM (#11934616)

      "Intel has had Hyperthreading available in Pentium 4 and Xeon CPUs for a couple of years now, which does exactly what the article is talking about"

      You are wrong. Period. Sun's CMT is several independent CPU cores on the same die with a huge bandwidth interconnect on-die. Intel's Hyperthreading is a gimmicky technology that has a very small real-world impact on performance.

      And your personal "benchmarks" cite no numbers. I be trolled!

  • In most cases (Score:5, Informative)

    by Z00L00K ( 682162 ) on Monday March 14, 2005 @02:06PM (#11934355) Homepage Journal
    multithreading hardware will not mean much, but in some cases it may mean a lot for performance. (By most cases I mean users running Word/Excel/PowerPoint and the like.)

    The real issue is how large each thread can be (in the matter of memory) before it has to access data that is external to the thread. It may mean a lot for gamers running close to reality games and also for those that are doing massive calculations.

    The important thing is that developers have to be aware of the possibilities and limitations of this technology. Otherwise it would be like throwing a V8 into a Model T Ford. It is possible, but you would never be able to utilize the full power.

    Another thing is that today's programming languages are limited. C (and C++) are advanced macro assemblers (not really bad, but they demand a lot of the programmer). Java has thread support, but it's still the programmer (in most cases) who has to decide. Java is not very efficient either, which of course depends on which platform it's running on in combination with general optimizations. C# is Microsoft's bastard of Java and C++, with the same drawbacks as Java.

    There are other languages, but most of them are either too obscure (like Erlang [erlang.org] or Prolog [gnu.org]) or too little known.

    The point is that a compiler should be able to break out separate threads and/or processes whenever possible to improve performance. It is of course necessary for the programmer to hint to the compiler where it may do this and where it shouldn't, but in any case to keep the programmer blissfully unaware of the details. The details may depend on the actual system where the application is running, i.e. if the system is busy serving a bunch of users, then splitting the application into a bunch of threads is not really what you want, but if you are running alone (or almost alone) then the application should be permitted to allocate more resources. The key is that the allocation has to be dynamic.

    Anybody knowing of any better languages?

  • CSP and libthread (Score:5, Informative)

    by CondeZer0 ( 158969 ) on Monday March 14, 2005 @02:08PM (#11934393) Homepage
    This is what it means for me: http://www.cs.bell-labs.com/who/rsc/thread/ [bell-labs.com]

    Also see Brian W. Kernighan's "A Descent into Limbo" [vitanuova.com] and Dennis M. Ritchie's "The Limbo Programming Language" [vitanuova.com].

    And of course Hoare's classic: Communicating Sequential Processes [usingcsp.com].

    Now you can enjoy the power and beauty of the CSP model in Linux and other Unixes thanks to plan9port [swtch.com] including libthread and Inferno [vitanuova.com]; yes, it's all Open Source.
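
For the curious, the flavor of the CSP model can be approximated in plain Python, with a thread-safe queue standing in for a Limbo channel (a rough sketch of the style, not how libthread is actually implemented):

```python
import threading
import queue

# CSP-style: threads share no mutable state and communicate only
# over channels (here, a queue.Queue standing in for a channel).
def producer(ch):
    for i in range(5):
        ch.put(i * i)
    ch.put(None)  # sentinel: no more values

def consumer(ch, results):
    while True:
        v = ch.get()
        if v is None:
            break
        results.append(v)

ch = queue.Queue()
out = []
t1 = threading.Thread(target=producer, args=(ch,))
t2 = threading.Thread(target=consumer, args=(ch, out))
t1.start(); t2.start()
t1.join(); t2.join()
print(out)  # [0, 1, 4, 9, 16]
```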
  • I designed pretty much the same concept in the OpenCore "FLOOP" project I'm working on. You just have an array of registers, such that each thread's state is maintained and is directly referable.


    Actually, the "best" way to implement the design is to split the thread state from the processing elements, then use locking on the elements. If two threads use independent processor elements, they should be simultaneously executable.


    By having many instances of the more common processing elements, you would have many of the benefits of "multi-core" (in that you'd have parallel execution in the general case) but the design would be much simpler because you're working at the element level, not the core level.


    Yes, none of this is really any different from hyperthreading, multi-core, or any other parallel schemes. All parallel schemes work in essentially the same way, because they all need to preserve states and lock resources.


    Personally, I think REAL Parallel Processing CPUs that can handle multiple threads efficiently are already well-enough understood, they just have to become reasonably mainstream.


    For myself, I am much more interested in AMD's HyperTransport bus technology, which looks like it could supplant most of the other bus designs out there.

  • by PHAEDRU5 ( 213667 ) <instascreed.gmail@com> on Monday March 14, 2005 @02:14PM (#11934451) Homepage
    Since I mostly work on J2EE stuff, I let the container take care of the threading for me. The one exception is J2EE Connector Architecture (JCA) bits that use the work manager. Even there, however, most of my work is simply putting a thin JCA layer in place between the outside world and the J2EE stack.

    For me, these new chips simply mean increased performance for deployed apps, without any modification to the app code.

    Beauty!
  • Intel was the first to implement this, true, but Sun has a history of great SMP support in both software and hardware; multiprocessing/multithreading could become a widely accepted trend in the industry, and Sun could take the lead. For more information on SMT/CMT, read up:


    http://en.wikipedia.org/wiki/Simultaneous_multithreading [wikipedia.org]

  • by mzito ( 5482 ) on Monday March 14, 2005 @02:17PM (#11934504) Homepage
    CMT is nothing more than multi-core processors. Sun is using the marketing idea of CMT to hide the fact that the UltraSparc IV is nothing more than two UltraSparc III cores on one chip.

    One way to look at this is Sun maximizing their existing engineering efforts. However, by marketing it as some revolutionary feature advance, they're implying that they've done something new and exciting, as opposed to something that IBM is already doing and AMD and Intel are working on.

    Beyond that, Sun and Fujitsu have a co-manufacturing and R&D deal now, confirming something those in the enterprise space have been saying for a long time - Fujitsu was making better Sun servers than Sun.

    Plus Sun killed plans for the UltraSparc V, leaving only the Niagara. They have the Opteron line pushing up from below, and rapidly evaporating sales at the high end. They're resorting to marketing gibberish to add new features to the product line, while simultaneously offloading R&D and manufacturing to a partner.

    Remind me again why Sun is in the hardware business?

    Thanks,
    Matt
    • by mzito ( 5482 ) on Monday March 14, 2005 @02:29PM (#11934637) Homepage
      And actually, this makes me so grumpy that I forgot the whole other piece.

      Despite the fact that Sun markets the UltraSparc IV as a single processor, software licensors like BEA and Oracle require that you license their software PER CORE. This means that a "4 processor" UltraSparc IV requires 8 processor licenses for Oracle or Weblogic.

      Sun never tells you this, and consequently a lot of people suddenly get tagged with additional licenses if they get audited. BEYOND that, Sun tells people that they can "double their performance" by replacing all of their UltraSparc IIIs with UltraSparc IVs, not explaining that they are doubling their performance because they're doubling the number of processors, AND that doing that upgrade can put them on the hook for literally hundreds of thousands of dollars in software cost.

      We've seen a number of companies get bitten by that, and it is downright disingenuous of Sun.

      Thanks,
      Matt
    • I have to agree. These manufacturers are bastardizing terms that we have had a hard enough time establishing and making mean something. This is not multithreading. It is not suited to the term. It is multicored processing. If we allow people like this to continually mix up the terminology like this, how are we going to talk to each other intelligently when it comes down to coder A explaining to coder B what they did and how they accomplished it? We won't, without arguing out the base terms for the conv
    • by philipgar ( 595691 ) <pcg2 AT lehigh DOT edu> on Monday March 14, 2005 @02:46PM (#11934801) Homepage
      You miss one of the major points in the article, and that is that CMT is not really about the Ultra IV being a fully CMT processor. This is about the Niagara chip. The Niagara chip is truly a CMT processor.

      The reason this is so is because it functions as both a chip multi-processor and as a multi-threaded core (although I think I'd consider their multi-threaded cores to be fine-grained multi-threading rather than SMT, but that's a different story altogether). While IBM's POWER5 offers these same advantages (dual core, 2-way SMT cores), this is 4 threads per processor and not overly impressive.

      The Niagara chip, in comparison to IBM (and upcoming Intel dual-core/SMT designs), is based on the assumption that at higher clock speeds the CPU is rarely fully utilized (while the P4 can retire up to 3 instructions per cycle, many apps, particularly data-intensive apps, have an IPC of less than 1). The chip contains 8 cores with 4 threads being executed on each core. This means 32 threads can run concurrently. Sure, no single thread will run as fast as it would on a NetBurst, Athlon 64, or POWER chip, but the combined throughput is enormous. Assuming each runs at ~1/4 the speed of its counterpart, that still gives us the equivalent of 8 full-speed threads on a single chip. This is enormous, and will have a major impact on database design (I'm currently doing research on SMT's effect on database algorithms) and the payoffs can be great (as can standard prefetching).

      I wouldn't recommend writing off CMT as a marketing buzzword etc. The era of throughput computing is upon us; let's just hope Oracle and the other per-processor vendors change their licensing to something that correlates with TPC performance or some other metric that still has meaning, otherwise companies are better off with a couple of massively parallel single-core chips that cost a whole lot more and draw a whole lot more power for the performance they produce.

      Phil
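
The arithmetic in the parent comment, spelled out (the 1/4-speed figure is the poster's assumption, not a measured number):

```python
# Niagara figures from the comment above: 8 cores, 4 threads per core.
cores = 8
threads_per_core = 4
concurrent_threads = cores * threads_per_core
print(concurrent_threads)  # 32 threads in flight on one chip

# Assume each thread runs at ~1/4 the speed of a conventional core.
per_thread_speed = 0.25
aggregate = concurrent_threads * per_thread_speed
print(aggregate)  # 8.0 -> roughly 8x the throughput of one fast core
```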
    • Sun's CMT is modeled after Tera's MTA architecture [cray.com] (now named Cray again), which trades memory latency for throughput. Basically, in MTA (massively threaded architecture) each of 128 processor threads issues a few memory fetch instructions and waits for the memory to arrive (dozens to hundreds of cycles). This happens for every thread, so the effect is that memory fetches and execution time are overlapped... iow time=max(execution,fetch) vs time=execution+fetch of normal processors. This also makes having
  • Sun's new chips (Score:3, Interesting)

    by Anonymous Coward on Monday March 14, 2005 @02:24PM (#11934582)
    Sun's upcoming "Niagara" chips are supposed to have eight cores, each core being able to execute four threads. So that allows up to 32 threads executing at once -- on one physical chip.

    And we're not talking about "HyperThreading" where one of the CPUs is virtual. It's a real execution unit.

    And Intel and AMD are talking about dual-cores?

    This should help save space and energy (both in the power needed to run the box, and in running the cooling system).
  • there are still some applications where raw CPU speed matters.

    We have been at the "throughput is good enough" point for several years. In truth, this is old news really. I've got IRIX servers doing lots of things plenty fast, clipping along at a brisk 400MHz. There is not much you can't do with that, particularly when running a nice NUMA box.

    I assume the same holds true for SUN gear. (I think their NUMA performance is a bit lower than the SGI, but I also don't think it matters for a lot of enterprise stuff.)

    One application I have running, NUMA style, is MCAD. It's cool in that I have one copy of the software serving about 25 users, running on a nice NUMA server that never breaks. Admin is almost zero, except for the little things that happen from time to time --mostly user related.

    However, I'm going to have to migrate this to a win32 platform. (And yes, it's gonna suck.) Why? The peak CPU power available to me is not enough for very large datasets and I cannot easily make the data portable for roaming users. (If there were more MCAD on Linux, I could do this, alas...)

    Love it or hate it, the hot running, inefficient Intel / AMD cpu delivers more peak compute than any high I/O UNIX platform does. And it's cheap.

    Sun is stating the obvious with the whole I/O thing, IMHO. In doing so, they avoid a core problem; namely, peak compute is not an option under commercial UNIX that needs to be. (And where it is, there are no applications, or the cost is just too high...)

    This is where Linux is really important. It runs on the fast CPU's, but also is plenty UNIXey to allow smart admins to capture the benefits multi-user computing can provide.

    Linux rocks, so does Solaris, IRIX, etc... The difference is that I can get IRIX & solaris applications.

    WISH THAT WOULD CHANGE FASTER THAN IT CURRENTLY IS.

  • by fred fleenblat ( 463628 ) on Monday March 14, 2005 @02:35PM (#11934696) Homepage
    While conceptually unrelated, I put threads into the same mental category as untyped pointers. They are extremely powerful, but a complete PITA to debug if anything goes wrong, even more so if you are maintaining someone else's void* or pthread_create filled application.

    What I've always done is code extremely defensively:
    1. make the various threads data-independent enough to be free-running and only co-ordinate at the start and finish of a thread's activity. If necessary, re-architect everything in sight to make this possible.
    2. when interaction is required, get a nice big coarse-grained lock and do everything that needs to be done and get it over with. profile it; there's a good chance it'll be over with quickly enough that it won't erase gains from parallelism or at least you can see what's taking so long and move it outside the lock.
    3. do TONS of load testing with lots of big files and random data. thread-related bugs can often hide for years in your code. Unlike divide by zero or null pointer references, a thread bug won't necessarily give any kind of hardware fault or exception. You have to go hunt for the bugs, they won't just pop up and say hi here i am.
    4. If you have multiple people of various technical abilities working on the code, you should add a grep/sed script to your makefile to check for accidental introduction of mt-unsafe library calls (strtok, ctime, etc). Flag new monitors and locks for review. Warn about dumb things like using static or global variables.
    5. Last trick is to use a layer to allow your program to be compiled for fork/wait, pthread_create/pthread_join, or just plain old co-routine execution (esp if there is a socket you can set to non-blocking). In addition to being able to test your code for correctness in various situations, you also have a baseline to see if the multithreading is an actual improvement.

    With the obvious exceptions for embarrassingly parallel algorithms, I've found that humdrum client/server or middleware stuff:
    (a) gets only marginal gains from multithreading
    (b) you have to work for it--profiling and tuning are still required to get top-notch performance
    (c) efficient scaling beyond a handful of threads is the exception, not the rule. If you have more threads than CPUs, it's a simple fact that some of them are going to be waiting, and then your scaling is done.
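
Item 4 of the checklist above can be automated in a few lines; this is a hypothetical sketch of such a check (the list of MT-unsafe calls is far from complete, and real builds would wire this into the makefile):

```python
import re

# Flag calls to libc functions that are not thread-safe (strtok keeps
# internal static state; ctime/localtime return a shared static buffer).
UNSAFE = re.compile(r'\b(strtok|ctime|localtime|gmtime|asctime|rand)\s*\(')

def scan(path, text):
    """Return (file, line number, offending line) for each unsafe call."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        if UNSAFE.search(line):
            hits.append((path, lineno, line.strip()))
    return hits

# Demo on a hypothetical two-line C source file:
print(scan("demo.c", 'tok = strtok(buf, ",");\nputs("ok");'))
# [('demo.c', 1, 'tok = strtok(buf, ",");')]
```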
  • Methods for actually taking advantage of this (and other parallelization) in your code:

    http://www.dcs.ed.ac.uk/home/stg/pub/P/par_alg.html [ed.ac.uk]

    http://www.informit.com/articles/article.asp?p=366887&rl=1 [informit.com]

    http://static.cray-cyber.org/Documentation/Vector_P.pdf [cray-cyber.org]

  • On the plus side, whatever you might think of CMT technology, the description given demonstrates the opportunity CMT brings for redundancy:

    "...the execution of multiple simultaneous tasks - even on a single processor."

    "Chip multi-threading (CMT) brings to hardware the concept of multi-threading, similar to software multi-threading..."

    "A CMT-enabled processor, similar to software multi-threading..."

    "...CMT processors, software threads can be executed simultaneously..."

    "...executes many software threads
    • "I could be wrong, but I get the idea that CMT allows you to perform multiple simultaneous software threads, even within a single processor!"

      That's where they are hoodwinking you. A single processor machine hasn't any ability to process multiple instructions in the same clock cycle. Physically impossible as there is only one path or pipeline through the actual 'core' of the processor. Only one instruction can be physically processed at any given point in time. It's a physical limitation of a single CP
      • Ah, but when you have one physical 'chip' that actually consists of four processor cores, you *can* do four simultaneous tasks on one processor.

        The advantage over good old fashioned SMP? Well, probably the interconnect is way faster, and if the cores all share some cache or something, sibling threads should see some benefit.

  • It would be nice to have more than hype. IIRC the Intel hyperthreading documents were mostly hype, plus a few very unimpressive benchmarks. When benchmarks by the original company are borderline, a little bell should go off. So now Sun has something similar. We're supposed to buy their new proprietary hardware and rewrite our programs and introduce concurrency bugs? And for what, a few percent improvement? Hmmmm.... Pass..
  • HyperRAM Technology! (Score:3, Interesting)

    by MrNybbles ( 618800 ) on Monday March 14, 2005 @03:26PM (#11935303) Journal
    I think the most interesting part of the article was when it said "Processor speed has increased many times -- it doubles every two years, while memory is still very slow, doubling every six years."

    So maybe it would be more efficient for people to stop screwing around with new processor design ideas for a while and put a little effort into doubling the speed of memory access (and I don't mean by using level-whatever caches). Selling motherboards with a faster memory bus would be easy, just give it a cool sounding name kind of like Sega's "Blast Processing". Let's call it "HyperRAM Technology!"
  • by Richard W.M. Jones ( 591125 ) <{rich} {at} {annexia.org}> on Monday March 14, 2005 @03:42PM (#11935482) Homepage
    It's not a particularly new idea. I wrote a pretty detailed paper at university about multithreading. You can read it here:

    http://www.annexia.org/tmp/multithreading.ps [annexia.org]

    Rich.
