Intel Programming IT Technology

Intel Updates Compilers For Multicore CPUs 208

Threaded writes with news from Ars that Intel has announced major updates to its C++ and Fortran tools. The new compilers are Intel's first that are capable of doing thread-level optimization and auto-vectorization simultaneously in a single pass. "On the data parallelism side, the Intel C++ Compiler and Fortran Professional Editions both sport improved auto-vectorization features that can target Intel's new SSE4 extensions. For thread-level parallelism, the compilers support the use of Intel's Thread Building Blocks for automatic thread-level optimization that takes place simultaneously with auto-vectorization... Intel is encouraging the widespread use of its Intel Threading Tools as an interface to its multicore processors. As the company raises the core count with each generation of new products, it will get harder and harder for programmers to manage the complexity associated with all of that available parallelism. So the Thread Building Blocks are Intel's attempt to insert a stable layer of abstraction between the programmer and the processor so that code scales less painfully with the number of cores."
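The "stable layer of abstraction" idea can be sketched in plain C++. This is not Intel's Thread Building Blocks API (whose real entry point is tbb::parallel_for); it is a hypothetical, minimal version of the same range-splitting pattern, written with standard threads:

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Minimal sketch of TBB-style data parallelism: split [0, n) into one
// chunk per hardware thread, run the body on each chunk, then join. The
// real library splits recursively and load-balances with work stealing.
template <typename Body>
void parallel_for_range(std::size_t n, Body body) {
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (n + workers - 1) / workers;
    std::vector<std::thread> pool;
    for (std::size_t lo = 0; lo < n; lo += chunk) {
        std::size_t hi = std::min(n, lo + chunk);
        pool.emplace_back([=] { body(lo, hi); });  // disjoint ranges: no races
    }
    for (auto& t : pool) t.join();
}
```

A caller passes the range size and a lambda over [lo, hi); the core count and the scheduling details stay behind the abstraction, which is the point of the design the summary describes.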
This discussion has been archived. No new comments can be posted.

  • by u-bend ( 1095729 ) on Tuesday June 05, 2007 @02:49PM (#19402129) Homepage Journal
    ...briefly translate this article into cretin for me, so that I can understand a bit more of why it's so cool?
    • Re:Anyone want to... (Score:5, Informative)

      by Trigun ( 685027 ) <evil&evilempire,ath,cx> on Tuesday June 05, 2007 @02:54PM (#19402197)
      The compiler worries about the cores so you don't have to. Is that too cretin?
      • by u-bend ( 1095729 )
        haha! Maybe a little too cretin. I might be able to handle information that's a *tiny* bit more technical.
        :)
        Soooo, at the risk of sounding really stupid, wasn't this sort of thing happening with previous compilers?
    • Re:Anyone want to... (Score:5, Informative)

      by BecomingLumberg ( 949374 ) on Tuesday June 05, 2007 @02:55PM (#19402221)
      >>>So the Thread Building Blocks are Intel's attempt to insert a stable layer of abstraction between the programmer and the processor so that code scales less painfully with the number of cores.

They found a way to make the computer determine how to use its many CPU cores automagically when you compile a program. It is cool, since it is really hard to figure out how to share a given workload 16 even ways.
      • Sounds like snake oil to me.

I can't speak for Fortran but what standard C++ mechanisms are there for threading? If they added stuff to the CLR, shouldn't it have gone through the organizations that maintain them? Weird compiler extensions are bad for cross-compatibility. (Which I guess is the point since Intel compilers -> Intel CPUs -> No other CPU manufacturers).

Besides, threading is still an OS-specific venture. Do these optimizations just work by looking for calls to fork() or the Windows alternative?
        • Re:Anyone want to... (Score:5, Informative)

          by James_Intel ( 1082551 ) on Tuesday June 05, 2007 @07:08PM (#19405243)
Automagical - we try. Vectorization, parallelization - I dare say the Intel compilers are at least as good at it as any compiler ever has been. Bold statement - yeah. I believe it is true.

A more interesting question is "Is that good enough?" For vectorization, the answer is 'usually' - so some additional work/headaches happen when it isn't enough. For parallelization - the answer is at best 'sometimes.' So I'll get flamed two ways: (1) by people very happy with it - who say that I've understated how good it is - and it is all they need, (2) by people with programs which don't get magical auto-parallelism to solve their needs. There are more people in #2 than #1 - but this ain't a one-size-fits-all world. Not a bad deal if it solves your problems - otherwise - you've got work to do... but that ain't the compiler's fault... parallelism requires work for most of us.

          About languages...
Virtually every Fortran, C and C++ compiler these days supports OpenMP, which is not part of the official language standards - but is there to use. It is loop oriented, and is very Fortran-like and fits into C well enough... but is definitely not C++-like.
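For reference, the loop-oriented style looks like this in C++ (a minimal sketch; built with -fopenmp in GCC or the equivalent Intel switch, and if the flag is absent the pragma is simply ignored and the loop runs serially with the same result):

```cpp
// OpenMP-style parallelism: the pragma asks the compiler to split the
// loop iterations across threads and combine the per-thread partial
// sums (the reduction clause). The loop body itself is untouched.
double sum_of_squares(int n) {
    double total = 0.0;
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < n; ++i) {
        total += static_cast<double>(i) * i;
    }
    return total;
}
```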

Fortran and C/C++ don't support threading in the language; you need to write your code to be thread-safe, and you need to use a threading package like Windows threads or POSIX threads (pthreads). Boost threads offer a portable interface covering the key threading needs - essentially wrappers for pthreads and Windows threads, etc. - and the standards are likely to add a portable interface officially in the future. That's one thing Java did from the start.
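A bare pthreads version of "start a worker, wait for it to finish" shows the boilerplate those wrappers hide (a minimal sketch; POSIX only, so Windows code would use CreateThread instead):

```cpp
#include <pthread.h>

// The pthreads entry point takes and returns void*, so data is passed
// through casts - exactly the kind of ceremony portable wrappers exist
// to hide.
static void* worker(void* arg) {
    int* value = static_cast<int*>(arg);
    *value *= 2;                      // the "work"
    return arg;
}

// Returns the doubled input, or -1 if the thread could not be started.
int run_worker(int input) {
    pthread_t tid;
    int value = input;
    if (pthread_create(&tid, nullptr, worker, &value) != 0)
        return -1;
    pthread_join(tid, nullptr);       // wait before touching 'value' again
    return value;
}
```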

          Intel compilers -> Intel CPUs -> all compatible processors
The Intel compilers and libraries aim to beat other compilers and libraries regardless of the processor they run on. No one will get it right all the time - so this is not a dare to find single examples of little code samples to prove me wrong. But if a real program doesn't get the best results from Intel - we want to know. (yeap - I work at Intel - I post for myself)
    • Re: (Score:2, Informative)

Essentially the compiler will automatically optimize thread splitting (timing and number of splits, if I'm reading this correctly), which is a very handy feature, as it will quickly become nearly impossible to manage future processors with 16+ cores. They do seem to hide a lot of the true features underneath market-speak though.
      • Re: (Score:3, Interesting)

The compiler will try like crazy to do that - and sometimes it does a great job. Most of the time - you'll have work to do (it won't do it for you). What we've found though - is that anything a programmer can do to express tasks that are splittable - makes the automation more and more possible. OpenMP (11 years old now) has carried that into the multicore world from the world of supercomputing - for loops. Don't have loops? Well, that would be a tough one.
Threading Building Blocks is a good option
    • by Mockylock ( 1087585 ) on Tuesday June 05, 2007 @02:58PM (#19402271) Homepage
      The parallelism of the Compiler Fortran and Professional Edition of the uranium core both sport improved auto-vectorizationalism of the fortran and format that can target Intel's new SSE4 extensionalism. For thread-level parallelismisitic quantum theory, the compilers support the use of Intel's Threadtastic Building Block nationalism for objectionism for automatic thread-level optimizationalism that takes place simultaneously with auto-vectorization of parellel universes... Intel is encouraging the widespread use of its Intel Threading quantum physics parallel vectorizationistic Tools as an interface on the enterprise bridge to its Spock multicore processors. As the parallel company raises the vectorized core count with each multitudinal generation of new vector parallel products, it will get harder and harder for programmers to manage the complexity associated with all of that available parallelismistic forces.

      See, it's not that hard to understand.
    • Re:Anyone want to... (Score:5, Interesting)

      by LWATCDR ( 28044 ) on Tuesday June 05, 2007 @03:09PM (#19402463) Homepage Journal
SSE4 is the latest and greatest vector instruction set from Intel: MMX->SSE->SSE2->SSE3->SSE4. These instructions speed up things like transcoding video and audio. They are also good for anything that does a lot of floating-point. The downside is very few systems have CPUs that support SSE4, and selecting it may hurt systems that don't have SSE4, or the program might not run at all depending on how the compiler is written. My bet is it will degrade gracefully. Overall SSE4 is most useful for people who are writing custom software right now, and it will become commonplace in off-the-shelf software once AMD supports it and systems that support it are more common.
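Graceful degradation is typically done with a runtime check rather than a single fixed code path. A minimal sketch using GCC/Clang's __builtin_cpu_supports (a compiler builtin, not an Intel library call; the scalar kernel stands in for where a real SSE4 kernel would go):

```cpp
#include <cstddef>

// Plain scalar kernel - the safe path that runs anywhere.
static float dot_scalar(const float* a, const float* b, std::size_t n) {
    float s = 0.0f;
    for (std::size_t i = 0; i < n; ++i) s += a[i] * b[i];
    return s;
}

// Runtime dispatch: probe the CPU and pick a code path, so the same
// binary runs on SSE4 and pre-SSE4 machines alike.
float dot(const float* a, const float* b, std::size_t n) {
#if defined(__x86_64__) || defined(__i386__)
    if (__builtin_cpu_supports("sse4.1")) {
        // A hand-vectorized SSE4.1 kernel would be dispatched here; the
        // scalar kernel stands in for it in this sketch.
        return dot_scalar(a, b, n);
    }
#endif
    return dot_scalar(a, b, n);       // fallback for older or non-x86 CPUs
}
```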
      The Threading Building Blocks are yet another attempt to make writing multithreaded code easier. Frankly I don't find pthreads hard but maybe I am just odd.
Threading is very important because we are not going to see an endless increase in clock speed anymore. Intel, AMD, and IBM are all pushing multiple cores. While adding an extra core or three really does help modern systems at least a little, since we are often running multiple tasks, current software will not scale as well when the cores start growing in a Moore-like fashion. Right now we are at four cores; if Moore's law holds, in two years we might see eight, then 16, then 32... As you can see it gets out of hand pretty quickly. Your average desktop will not use four cores very well, much less eight, until software is written to take advantage of more cores.
      Yes I know that Moore said 18 months but I was going for a nice round numbers.

    • I'll give it a try..

      Ugh ugh uhh. *Bang* Huh ugh huh mmmm uh buh ugh Fortran.
  • by Necroman ( 61604 ) on Tuesday June 05, 2007 @02:56PM (#19402251)
    We see Intel mainly as a CPU/chipset maker, but don't pay much attention to their software side. I believe they are one of the largest software development companies in the world. Between drivers, compilers, and all the other goodies to support all their hardware, they spend a lot of time doing software development.

And as much as they develop compilers to optimize code for Intel CPUs, the code will most of the time see a speed increase on AMD CPUs as well. Who else do you want developing a compiler but the people who made the hardware it's running on?
    • Re: (Score:3, Insightful)

      by Tribbin ( 565963 )
      "Who else do you want developing a compiler but the people who made the hardware it's running on."

      You mean like nvidia making nvidia drivers for linux?
    • by dmoore ( 2449 ) on Tuesday June 05, 2007 @03:03PM (#19402371)
I have not tried their compiler, but for the Intel Performance Primitives (IPP), a library of useful MMX/SSE-optimized functions written by Intel, they explicitly fall back to slow versions of the code if the library detects an AMD processor, even if the AMD processor has MMX/SSE/SSE2. This kind of behavior is one reason that you may not want to trust Intel for your compiler needs if you are planning on doing development for more than just Intel-branded CPUs.
      • Re: (Score:3, Funny)

        by Elbereth ( 58257 )
From the viewpoint of Intel, this is actually good practice. They don't know what features AMD actually supports (through possibly intentional ignorance), and they don't want to cause someone's system to lock up. While I'd rather see my AMD CPU be supported by Intel's compiler, I can understand why they might be reticent to support certain features, even though the CPU reports support for that feature.

Anyways, it's not like MMX/SSE are really used for much of anything but benchmarks and voice synthesis
        • Try using a VIC 20 or TI 99/4a for a few hours, then tell me how important it is to have your competitor design a compiler that optimizes for your CPU.
          Noob. Try punchcards. At the very least, the Commodore PET. The VIC-20 was an awesomely powerful piece of hardware compared to that.
          • Noob. Try punchcards. At the very least, the Commodore PET. The VIC-20 was an awesomely powerful piece of hardware compared to that.

            Ha! After writing software for the VIC-20 I wound up on a PET one summer and kept getting these 'out of memory' errors. I had no idea why that might be happening (had to ask the prof). :)

            And today 'hello world' can't even fit in 20K.
        • by Wesley Felter ( 138342 ) <wesley@felter.org> on Tuesday June 05, 2007 @04:00PM (#19403255) Homepage
          So if an Intel processor reports SSEn support you assume that it works, but if an AMD processor reports the same feature, you assume that it doesn't work? Great idea.

          This matters because the whole purpose of IPP is to take advantage of newer instructions. If you say "new instructions don't matter because no one uses them" it becomes a self-fulfilling prophecy. Optimized libraries could break out of that cycle, but only if they aren't used as competitive weapons.
        • Re: (Score:3, Insightful)

          by Bert64 ( 520050 )
          On the contrary, they should check for the presence of the appropriate feature, and then use it...
They should also let you build binaries without those fallback code paths, as a lot of code will never run on older machines (e.g. x86 Macs, which all have at least SSE3).
If someone's system locks up because AMD claimed to support a feature which they don't actually support, that's AMD's fault, and Intel could claim the moral high ground instead of the other way round.
          • by ravyne ( 858869 )
I agree with your point that the optimizations should be chosen on a feature-by-feature basis; however, it's likely that code optimized for Intel's processor extensions might be sub-optimal on AMD's extensions. All these instruction sets like SSE, MMX, even x87 and x86 are essentially specs; the implementation can and often does differ. Each new core from each vendor will have different latency and throughput characteristics that will have a bearing on what the optimal code for each platform will look like.
        • by jd ( 1658 )
          Maths co-processors... Yeah, I remember those. I wrote a Mandelbrot generator that manipulated the 8087 stack directly, so none of the floating-point values ever had to be transferred to/from main memory. All integers (for loops) were held in 8086 registers for the same reason. It was still slow, but it was a respectable slow.

IIT's maths co-processor could handle matrix arithmetic directly - it didn't have a simple 1D stack, but a 2D array you could process on. It was also roughly 10x faster than the Intel part.

        • by Wolfrider ( 856 )
          Man, my 1st computer was a TI 99/4a... We had teh speech synthesizer, and Parsec, Alpine... And a MONO CASSETTE RECORDER to store Extended Basic progs. (Man it was a b1tch when the little POS froze up because of the sh1tty cartridge implementation, tho.)

          I made a Voltron interactive-text game for it back in the day; easiest way to win quickly was just: " FORM BLAZING SWORD ". :)
      • by James_Intel ( 1082551 ) on Tuesday June 05, 2007 @07:39PM (#19405505)
(Yes - I work for Intel - post for myself - tell it like it is) Cute story, if it were true. However - Intel compilers and libraries are designed to use features - but we don't come out every day with an update. The new compilers support SSE4, but Intel only. AMD support comes after the processors exist that support it. Libraries aren't quite there yet with SSE4 (I guess we hate Intel processors too - flame us). But AMD support for SSE3 is there - now that it is in their processors. It wasn't there when we developed version 9 of the compilers. We do test our compilers/libraries on other implementations - because believe it or not - we care if it works. It doesn't always - and we adjust the compiler/library to make it work. We had a beta a few years ago which blew up on Intel processors and worked on AMD processors (yeap - I said it right - imagine the embarrassment when a customer told us about that combination). Oops. I heard that was because we released support before we tested that it worked on that processor. So we learned not to do that too often. By the time we release a product - it should work on all processors. I would say "does" or "guaranteed to" - but the lawyers would freak - because nothing in life is guaranteed. We are clearly not trying to screw our customers though - you know... the developers who count on our software. It is annoying when people suggest that might be our goal.

        My favorite complaint: Intel checks "CPUID"
        No duh - that's where the feature information is.

        Next favorite: Intel checks for "GenuineIntel".
Another "no duh" - RTFM from Intel or AMD - the feature-flags check has to come AFTER you determine the manufacturer AND family of the processor...
unless you don't care about running on all processors
(spare pointing out to me that you can skip the first two checks - look at the SSE flag - and it is usually right - unless, say, you pick just the right older processor)
        We do the checks the way Intel and AMD manuals say we have to... if that is evil... so be it.
        We even start by testing if the CPUID instruction exists (it didn't before Pentium processors).
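The check order described above can be sketched with GCC/Clang's <cpuid.h> helper (x86 only; illustrative code, not Intel's actual dispatch logic):

```cpp
#include <cstring>
#include <string>

#if defined(__x86_64__) || defined(__i386__)
#include <cpuid.h>
#endif

// CPUID leaf 0 returns the vendor string ("GenuineIntel",
// "AuthenticAMD", ...) packed into EBX, EDX, ECX - in that order. Per
// both vendors' manuals, identify the vendor and family first, and only
// then trust the feature flags reported by leaf 1.
std::string cpu_vendor() {
#if defined(__x86_64__) || defined(__i386__)
    unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return "";                    // CPUID instruction not available
    char name[13] = {0};
    std::memcpy(name + 0, &ebx, 4);   // note the EBX, EDX, ECX order
    std::memcpy(name + 4, &edx, 4);
    std::memcpy(name + 8, &ecx, 4);
    return name;
#else
    return "";                        // not an x86 processor
#endif
}
```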
    • by Chandon Seldon ( 43083 ) on Tuesday June 05, 2007 @03:10PM (#19402481) Homepage

      It's really useful for a CPU company to develop an optimizing compiler for their hardware. It forces them to understand how their CPU features actually speed up software, and it gives them the opportunity to prove that certain hard optimizations actually work. It would probably be best for everyone if the compiler were open source, but if Intel thinks they need to sell it as a commercial product to justify it financially we still get all of the benefit on their future processor designs.

      • I thought every CPU provider does. Or they make CPUs compatible with existing compilers.
All CPU makers make their CPUs compatible with existing compilers - but that completely ignores new instructions like SSE4. For that sort of thing, either the programmer has to take advantage of it with hand-coded assembly, or someone needs to write a compiler optimized for the new instruction set. If the CPU vendor does it themselves rather than waiting for Microsoft and the GNU project to get around to it, they can see results faster and feed information from/to hardware design more quickly and efficiently.

          • Or, they can just contribute to GCC, like I believe Apple did in order to get altivec optimizations in there.
    • by 15Bit ( 940730 )
      You can say the same for most of the other major chip makers - IBM and Sun both do the same, and in years gone by DEC used to make an arse-kicking Fortran compiler for the Alpha. In fact, probably the only major chip producer that doesn't make compilers is AMD.

    • by jimicus ( 737525 )
      Who else do you want developing a compiler but the people who made the hardware it's running on.

      My goodness... you can't mean... that the company which developed the hardware is in a strong position to get a few people from the hardware dev team onto the team developing software for it?! And that these people are well placed to know what's worth optimising, where and how?

      No shit, Sherlock.

      The only amazing thing about this is that it is such a novel insight that it is necessary for you to be modded as such.
      • The only amazing thing about this is that it is such a novel insight that it is necessary for you to be modded as such.

        And yet, historically it has proven to be incorrect. The usual result of getting hardware developers to write compilers is that you get shitty compilers. The amazing reason for this is that people who spend their career writing compilers turn out to be way better at it than people who spend their career developing hardware.

The Intel compiler is a notable exception - but it wasn't that long ago that code correctness was not that high on the Intel compiler's list of qualities. The code was fast, but not always correct.

    • Re: (Score:2, Funny)

      by KingMotley ( 944240 )

      Who else do you want developing a compiler but the people who made the hardware it's running on.
      Who else do you want developing an office suite but the people who made the operating system it's running on.
  • GCC (Score:3, Insightful)

    by Anonymous Coward on Tuesday June 05, 2007 @02:56PM (#19402255)
    Will they add these features to GCC or make docs available so others can?
    • I recently downloaded Intel's compiler to see whether my C++ code would run faster on it--I ended up giving up on it (for now) after spending a day trying to get it to work. I'm sure their compiler has many whizzy features in it, but for me, they don't really matter unless they're in GCC. I hope Intel will realize that it's in their interest to migrate these advances there.
    • No and yes (Score:3, Informative)

      by Sycraft-fu ( 314770 )
No, they won't add them to GCC. Intel's compiler competes with GCC: in every test I've ever seen on Intel chips, it comes out ahead, and I'm sure they've no interest in changing that. However yes, the docs are out there. Intel's processors are extremely well documented and you can get everything you need. The problem isn't that the GCC people are having to guess how the processors work; the problem is that their coders aren't as good as Intel's at optimising their compiler.
      • by AuMatar ( 183847 )
In addition- GCC is meant to be a cross-compiler. It works for Alpha, SPARC, x86, and a dozen other architectures. To get there without writing N different compilers, they need to use techniques that are architecture neutral. That means purposely passing up a lot of optimizations that are incompatible with their intermediate representations. So ICC will always be faster, since it only targets x86.
      • Intel is not trying to kill GCC or anything. They try very hard to make ICC compatible with gcc and g++ (ABI and command line interfaces)... so that you can just set CC=icc in your makefile and be on your way.

        It was a big source of pride for them that they got the linux kernel to build in icc without patching. *eye roll*

        But they don't expect every linux user to buy ICC or anything. They position it for use for performance reasons.
      • Re:No and yes (Score:5, Informative)

        by smallfries ( 601545 ) on Tuesday June 05, 2007 @06:31PM (#19404957) Homepage
Well, no actually you can't. If you've ever spent any time going through the 1000-page Intel Optimisation Guide for the x86 then you would know that they don't spell out all of the trade-offs explicitly. They describe enough to point you in the right direction, but they keep a lot back. Partially because the behaviour of these chips in certain usage patterns isn't even defined by the design - it's a side-effect of several other parts of the chip design interacting. So the best that you can do is suck it and see - and in general it changes not between major ISA revisions but between individual models.

Now, if you're Intel then you have the time and the money to work out exactly how to exploit these tradeoffs to schedule threads effectively. But you don't want to give that away for free. From the (very scanty) marketing bullshit that was linked to, it would appear that they've produced an Intel-specific threading library (probably with a POSIX interface). Separate to this is a profiling tool and a multi-threaded debugger (the latter of which is non-trivial). While any debugger will let you skip across threads, allowing you to do it in a deterministic manner to look for race hazards is much harder.

The analysis tools sound nice, but the bolt-on library is nothing special. It's purely to win a few synthetic benchmarks and gain some marketshare for ICC and therefore more "Made for Intel" applications in the market. I'm cynical about the library because what is broken about the threading model in C/C++ would take more than a library to fix. It would require redesigning the language from the ground up and choosing a different set of control constructs.

So finally, when you claim that it's because Intel has "better" coders, you don't know what you're talking about. I know a few guys who code GCC for a living, and they are grade A coders. It is because Intel has moved the goalposts. It's not so much that GCC targets multiple architectures, it's that they are trying to stick to (relatively) standard C, whereas Intel is willing to redefine where the semantic gap sits if they can squeeze out a little more performance. Their attitude is screw portable code - talking across different compiler vendors here, rather than chip vendors. If what they need to squeeze into their compiler is no longer "C" strictly speaking, then they don't care. The gcc guys do.

Ah yes, and portable code can be a smaller window than you expect. That weighty 1000-page Intel document is sitting comfortably next to the AMD equivalent, which differs in surprising places.
  • by sr. taquito ( 996805 ) on Tuesday June 05, 2007 @02:59PM (#19402289) Homepage
    If compilers keep abstracting away the interface between the programmer and the cpu, programmers will be less likely to write better code or learn new techniques that take advantage of all the power a few extra cores can provide right? That's just my take on it. Then again, I also think learning parallel programming techniques is fun, and a little more academic than most career programmers might like.
    • Re: (Score:2, Insightful)

      by BoChen456 ( 1099463 )

      If compilers keep abstracting away the interface between the programmer and the cpu, programmers will be less likely to write better code or learn new techniques that take advantage of all the power a few extra cores can provide right?

If compilers keep abstracting away the programmer and the cpu, and getting better at optimization, programmers won't need to write better code or learn new techniques to take advantage of all the power a few extra cores can provide.

Instead the programmer can concentrate on writing more understandable code.

      • by suggsjc ( 726146 )
        First, I agree (somewhat).
        I've got a couple of thoughts that I'm not sure how to get out, so just see if can put the pieces together.

        Low-level languages like C are powerful because they can interact (almost) directly with the hardware. Then there are other languages that are built on top of those languages that are designed to hide complexity and allow programmers to code more efficiently at the cost of non-optimized code.
I didn't RTFA, but if the compilers start taking liberties and "hiding the complexity"
        • You're making the tacit assumption, here, that software developers will do a better job optimizing their code. The problem is, this hasn't been true for many many years, now. Heck, even memory management is being taken out of the hands of programmers, and the result is more efficient code (yes, believe it or not, studies show that GCs are generally faster than manually allocating and deallocating memory, as the system can do a better job of judging when and how to do it). I suspect the same will be true
Believe it or not, it's impossible to talk directly to hardware like you could in the 386 days. Assembly instructions are translated to internal RISC micro-ops.

Modern processors have many features for compilers to take advantage of, like pipelines and SSE4, which lessens the need to be low level. Compilers do a damn good job, and even JIT compilers like Java and .NET can take advantage of modern features. I am imagining hardware virtualization will make these even faster.

I suppose there will always be a need for low-level code.
    • Re: (Score:2, Insightful)

      Actually, I've always thought that telling the compiler what you wanted to do, instead of how to do it, would result in the compiler being able to determine the best path to take for a given task.
      Even more so for interpreted/compiled on the fly languages. They can be dynamically compiled to take advantage of whatever hardware is available on each machine, without the developer having to code for it.
    • by Kupek ( 75469 )
Aside from the auto-vectorizing stuff, most of what Intel is advertising does not happen automatically. Instead, they provide abstractions that make it easier to write high-performance multithreaded code. But programmers will still have to do the hard stuff, which is figuring out how best to parallelize their algorithms, distribute their data, and synchronize their threads.
    • All I have are Dr. Pepper, Twizzlers, and cavities you insensitive clod.
In large projects it's nice to have abstraction, as not all programmers are experienced. Also it makes code easier to read. Trying to figure out how another programmer is engineering the project or a specific implementation is hard. I like Java, for example, because combined with javadocs you can easily document code and figure out the hierarchy with the abstraction. But that is just my taste.

      But in career programming its all about time and making deadlines to help make your company more money.

    • If compilers keep abstracting away the interface between the programmer and the cpu, programmers will be less likely to write better code or learn new techniques that take advantage of all the power a few extra cores can provide right?

The people writing the compilers are "programmers" too. If the compiler programmers make the compiler do more effective optimization when turning platform-agnostic source into platform-specific binaries, that just means the burden of worrying about platform optimization is shifted to them.

    • Re: (Score:3, Insightful)

Almost everybody who can write better assembly than GCC is already working on compilers and optimization. Even GCC is better than most programmers' hand-optimized assembly. I've seen many times over the past several years where open source projects have thrown away assembly source because it is faster and more readable in C. (WINE in particular benchmarked their hand-optimized routines and found themselves soundly beat by GCC.)

      These days, a similar thing is happening with vectorization. If programmers try t
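For a concrete sense of what auto-vectorization targets: the textbook case is a straight-line loop over arrays with no aliasing tricks and no data-dependent branches, which compilers turn into SSE/AVX code at high optimization levels without any hand-written assembly (a minimal sketch):

```cpp
#include <cstddef>

// saxpy: y = a*x + y. A fixed-stride loop with independent iterations -
// exactly the shape auto-vectorizers recognize. Marking the input
// pointer const (and keeping x and y non-overlapping) helps the
// compiler prove the iterations don't interfere.
void saxpy(float a, const float* x, float* y, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```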
  • by Doctor Memory ( 6336 ) on Tuesday June 05, 2007 @03:01PM (#19402337)
    I was looking at the Thread Building Blocks paper, and it reads like it was somebody's hastily-scribbled draft:

    "The Intel Threading Tools automatically finds correctness and performance issues" (The tools finds?)
    "Along with sufficient task scheduler and generic parallel patterns" (Who has insufficient task scheduler?)
    "automatic debugger of threaded programs which detects many of thread-correctness issues such as data-races, dead-locks, threads stalls" (Sarcasm fails me...)

    And that's just in the first few paragraphs, I haven't even gotten to the real meat of the article!

    I'm used to informative, well-written and reasonably complete technical documentation from Intel — WTF is this?
    • "The Intel Threading Tools automatically finds correctness and performance issues" (The tools finds?)

      No, the "Intel Threading Tools" is a product, in the singular -- it finds. Maybe Intel threading tools would find, but notice the subtle difference?

      "Along with sufficient task scheduler and generic parallel patterns" (Who has insufficient task scheduler?)

OK, so it's a bit awkward to parse, but isn't it obvious by the grammar that "sufficient" modifies both "task scheduler patterns" and "generic parallel

      • "automatic debugger of threaded programs which detects many of thread-correctness issues such as data-races, dead-locks, threads stalls" (Sarcasm fails me...)

        Oh wait, nevermind. This sentence shows that the author truly can't write clearly.

Couldn't you make that sentence pretty normal sounding just by removing that one errant 'of', i.e.

        "[The software includes an] automatic debugger of threaded programs, which detects many thread-correctness issues such as data-races, dead-locks, threads stalls [...]"

        Altho

    • Re: (Score:3, Informative)

      by presearch ( 214913 ) *
      The Intel Compiler Lab is based in two Russian cities - Moscow and Novosibirsk.
      Probably the source of the less than optimal text.

      How's the documentation on -your- compiler coming along?

      • The Intel Compiler Lab is based in two Russian cities - Moscow and Novosibirsk.
        Probably the source of the less than optimal text.

        The point is, whatever tortured, twisted prose was submitted should have been edited and polished before going out with an Intel logo on it. This was a white paper on the corporate web site, not a post on some random Intel engineer's blog — different standards apply.

        Seriously, check out this opening paragraph from the Intel® 64 and IA-32 Architectures Application Note:
        TLBs, Paging-Structure Caches, and Their Invalidation

        The Intel® 64 and IA-32 architectures may accelerate the address-translation process by
        caching on the processor data from the structures in memory that control that process.
        Because the processor does not ensure that the data that it caches are always consistent
        with the structures in memory, it is important for software developers to understand how
        and when the processor may cache such data. They should also understand what actions
        software can take to remove cached data that may be inconsistent and when it should do
        so. The purpose of this application note is to provide software developers information about
        the relevant processor operation. This application note does not comprehend task switches
        and VMX transitions.

        Notice how they even get the fact that "data" is plural right? That's the

      • by geekoid ( 135745 )
        A large company develops and release an profession compiler, decent documentation is a reasonable expectation. To say snide comments does not help, and shows that you have no real argument.

        Grow up.
        • develops and release an profession compiler

          Too easy, moving on....

          To say snide comments does not help, and shows that you have no real argument.

          Um, I'm not arguing. I'm making an observation. If you disagree with me, then you're making the argument. Which is fine, just so we know where we stand. Nice non-sequitur, though.

          Grow up.

          So expressing dismay that a respected corporation is showing less-than-professional work is a sign of immaturity? Buy a vowel and solve the puzzle, honey, Real World moves [jargon.net] and all...

          • Hi, Doctor Memory.

            In a post below this, I defended your original position to another poster (not that you asked me to), but here I have to say something to you. I think you're incorrectly wailing on geekoid here; I believe he is commenting on presearch's comments, not yours (check the indents). In other words, he was coming to your aid, in a way.

            Take it easy.
            • But, uh....ummmmm.....

              *sigh* I am such a dickhead...

              Geekoid, I apologize. Your comment does make sense, once I pull my head out far enough that I can tell what part of the thread you're on. I owe you a $BEVERAGE. Feel free to flame me back, I deserve it.

              And thanks, stuktongue, I obviously shouldn't be counted on to figure this kind of stuff out on my own...

              (Damnit! I hate being a fucktard!)
              • LOL, dude. Now you don't have to be that hard on yourself. :-)

                It is nice to see someone who's capable of apology, though. You don't see much humility on Slashdot, or the Internet, these days. Like the Rev. Rodney King once said, "Can't we all just get a lawn?" :-)

                I do like your use of "fucktard"... one of my favourite words. Though I prefer to use it on others. :-)

                Take it easy, man.

                P.S.: I visited Intel's web site and struggled through their writeup on the threading tools. As a potential customer and user o
    • WTF is this?


      WTF? No, this is SPARTA!
    Wrong. Parsed the way Intel intends, the grammar not only makes sense but is very meaningful. What you're lacking are some font and other cues about which things are compound nouns, plus some knowledge of what the legitimate threading concepts are.

      "The Intel Threading Tools automatically finds correctness and performance issues" (The tools finds?)
      Try italicizing the product name: The Intel Threading Tools(product name) automatically finds correctness and performance issues
      and it looks much better grama
      • "Please mod the writer down."

        Please do not mod the writer down.

        "All he/she is informing us of is his/her ignorance. There's nothing wrong with being unaware of threading concepts, but please don't suggest there's something wrong with things you just don't understand."

        I think you're missing the point Doctor Memory is making. The sense I'm getting is that he isn't criticizing the technical correctness so much as the quality of the writing. He has come to expect decent writing from Intel (as have I, for that
      • the product name is singular, not plural (despite the last word ending in s)

        Then how do you explain the sentence "Intel® Threading Tools consist of the following:"? "Tools" is plural, and it doesn't matter how many adjectives you throw in front of it. The usual convention to make it singular is to suffix the name with a singular qualifier, like "suite" or "collection". Substituting form for content, as you suggest, isn't going to fix it.

        Look at it as as function: sufficient( task scheduler and generic parallel patterns ).

        It's not a function, it's a poorly-written sentence. And if you knew half as much about threading and synchronization as you pretend to,

  • OK, I'll Byte (Score:3, Interesting)

    by Skjellifetti ( 561341 ) on Tuesday June 05, 2007 @03:06PM (#19402409) Journal
    As the company raises the core count with each generation of new products, it will get harder and harder for programmers to manage the complexity associated with all of that available parallelism.

    As a programmer, I already have abstractions such as Active Objects [wustl.edu]. While this may make it easier for compiler writers or kernel hackers, what benefits does it bring to us ordinary mortals?
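
    For reference, the Active Object idea boils down to something like this: callers enqueue work, one private thread executes it in order. (A minimal C++11 sketch -- class and method names are mine, and it's nowhere near Schmidt's full pattern.)

    ```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Minimal Active Object: concurrency hidden behind an ordinary
// method-call interface. post() enqueues work; one private worker
// thread drains the queue; the destructor drains and joins.
class ActiveObject {
public:
    ActiveObject() : done_(false), worker_([this] { run(); }) {}
    ~ActiveObject() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
    }
    void post(std::function<void()> task) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(task)); }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !q_.empty(); });
                if (q_.empty()) return;  // done_ set and queue drained
                task = std::move(q_.front());
                q_.pop();
            }
            task();
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> q_;
    bool done_;
    std::thread worker_;
};
    ```

    TBB's pitch, as I read it, is that its task scheduler does this thread-and-queue plumbing below the abstraction for you and load-balances tasks across however many cores actually exist -- which is the part an Active Object by itself doesn't give you.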
  • by R2.0 ( 532027 ) on Tuesday June 05, 2007 @03:06PM (#19402411)
    Cue "Fortran is Dead" comments in

    30
    20
    10
  • intel's product page (Score:4, Informative)

    by non ( 130182 ) on Tuesday June 05, 2007 @03:07PM (#19402431) Homepage Journal
    the intel product has somewhat more detail. it can be found here [intel.com].
  • by JustNiz ( 692889 ) on Tuesday June 05, 2007 @04:07PM (#19403387)
    >> As the company raises the core count with each generation of new products, it will get harder and harder for programmers to manage the complexity associated with all of that available parallelism.

    I'm very surprised and disappointed by the pervasiveness of the myth, promoted even among supposedly technically knowledgeable groups, that:
    a) Writing multithreaded code is terribly difficult
    b) You need to implement code to have the same number of threads as your target hardware has cores
    Neither of these is true, at least for the PC architecture.

    The way to develop multithreaded code is to exploit the natural parallelism of the problem itself. If the problem decomposes down most neatly into one, three or 6789 threads, then design and write the implementation that way. Consequently the complexity of the problem does not increase as the number of cores available increases.

    In the PC architecture case, attempting to design your code based on the number of cores in your target hardware just leads to a twisted and therefore bad and also non-portable design.

    I'm surprised how few developers seem to understand that it's OK, normal, and often desirable to have more than one application thread running on the same core. You can't ensure, or even assume, that your multi-threaded app will get one core per thread even when the hardware has enough cores, or that it will run best if it does: core/thread allocation is dynamically scheduled by the OS depending on load. Not to mention that all sorts of other apps, drivers, and operating system tasks are running concurrently too, so depending on each core's load, one app-thread per core may not be the optimal approach anyway.
    • by vidarh ( 309115 )
      The problem is that if you have a problem that decomposes neatly into four parts, and you want to be able to take advantage of new systems with far more cores, the amount of work you may need to do to decompose the problem further may be orders of magnitude more complex than getting it to decompose into the original four parts. The problem isn't when it decomposes naturally into more parts than you have cores, but when it decomposes into fewer parts.

      Developers that fail to handle that will be unable to com

    • by mandolin ( 7248 )
      In the PC architecture case, attempting to design your code based on the number of cores in your target hardware just leads to a twisted and therefore bad and also non-portable design

      Additionally, at least for "embarrassingly parallel" problems, it is easy enough to get the number of online processors at runtime, and (slightly harder) make the program use that information to decide how many worker threads to use.
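
      That runtime query is trivial these days -- e.g., in C++11 (postdating this thread, but the idea is the same; note hardware_concurrency() is allowed to return 0 when it can't tell, hence the fallback):

      ```cpp
#include <thread>

// Size the worker pool from the machine at runtime, not from a
// hard-coded core count baked in at design time.
unsigned worker_count() {
    unsigned n = std::thread::hardware_concurrency();
    return n ? n : 1;  // 0 means "unknown"; fall back to one worker
}
      ```

      Spawn that many workers for the embarrassingly parallel case and the same binary scales from one core to however many the next generation ships with.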

    • Re: (Score:3, Insightful)

      by ratboy666 ( 104074 )
      A couple of points:

      1 - If the communication or thread switching overhead exceeds the thread computation, it is not worth threading.

      2 - It is (unfortunately) easy to build in "lock stepping" into otherwise independent threads. These systems scale from 1..n cores; after n cores no further scaling is seen.

      3 - It *is* difficult to build correct parallel systems. Especially with points 1 and 2 in mind (and, yes, I *have* built parallel high-speed device drivers that are lock-free to avoid switching).

      4 - *Proving
  • by wazzzup ( 172351 ) <astromac.fastmail@fm> on Tuesday June 05, 2007 @04:18PM (#19403551)
    I know OS X is compiled using GCC but I wonder if Apple would see performance gains by using it? If they did, would it somehow introduce problems? Basically, I'm wondering if there would be a downside to using the Intel optimized compilers as opposed to all-purpose GCC compiler.

    As an aside, Linux is obviously compiled using GCC but I wonder if Microsoft compiles Windows using the Intel compilers?
  • Intel clearly thinks it is important to offer compilers specifically for its chips, and I see that as a good thing for users of Intel-based systems who want to get the most out of their hardware. So my question is: why doesn't AMD make compiler software for its chips? They do, after all, have their own set of special extensions, so wouldn't they benefit from creating a compiler armed with their own "inside knowledge"?

    Seems odd to me to make the chips but not the software to allow people to fully uti
