AMD Unveils SSE5 Instruction Set

mestlick writes "Today AMD unveiled its 128-Bit SSE5 Instruction Set. The big news is that it includes three-operand instructions such as floating-point and integer fused multiply-add and permute. AMD posted a press release and a PDF describing the new instructions."
  • Who cares... (Score:1, Insightful)

    by aquaepulse ( 990849 )
    in 2009 I'll be holding out for SSE8 anyway.
    • Just from a brief overview of AMD's releases, there seems to be some voodoo built in for combining iterative operations into a single execution. Of course, most things from AMD have limited meaning until they have chips in developers' hands. But this has the potential to offer more efficient processing.
  • by Harik ( 4023 ) <Harik@chaos.ao.net> on Friday August 31, 2007 @12:39AM (#20421153)
    So, where's the analysis by people who write optimized media encoders/decoders? How useful are these new instructions, or are they just toys? How well do they handle context switching? What's the CX overhead? Is there a penalty for all processes, or only when you're switching to/from an SSE5 process? Will this be safely usable under all operating systems, or will they need a patch?
    • by theGreater ( 596196 ) on Friday August 31, 2007 @12:58AM (#20421261) Homepage
      It ROUNDSS! It ROUNDSS us! It FRCZSS! Nasty AMD added to it.
      • by Pojut ( 1027544 )
        You owe me a cup of coffee and a new keyboard.
      • Re: (Score:3, Funny)

        Nasty AMD added to it.

        The better question is how the fuck did AMD get to write the next iteration of an Intel technology? Shouldn't it be AMD 3DNow!^2? This is like Apple deciding their next HFS filesystem will be versioned NTFS 7.0.

        They can battle back and forth with version numbers and see who is first to get to 11, the version number where, for whatever reason, developers are forced to come up with a new versioning scheme. That will throw a wrench in the works. Take that, Intel!

    • by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Friday August 31, 2007 @01:12AM (#20421335) Homepage

      I don't write those fancy codecs, but I can immediately see where some of these instructions could come in handy - for instance, PCMOV and PTEST (packed cmov/test).

      The new instructions take up an extra opcode byte, but seeing how they reduce the number of instructions you would otherwise execute, I don't see that as a problem. The super-instructions (like FMADDPS - Multiply and Add Packed Single-Precision Floating-Point) do more than just help the instruction decoder, too - the spec mentions "infinitely precise" intermediate voodoo for several of them, which makes it seem like an FMADDPS instead of a MULPS/ADDPS pair will produce a more accurate result.

      There are new 16-bit floating-point instructions too, which I can see as a boon for graphics code that wants the ease of floating point and a little more precision than bytes with values between 0 and 255 would give, without the large memory requirements of 32-bit floating point.
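      A minimal sketch in C of what PCMOV amounts to, if I'm reading the spec right: a pure bitwise select, modeled here one byte at a time (the function name and the scalar framing are mine, not AMD's):

          /* Scalar model of PCMOV's bitwise select: where a mask bit is 1,
             take the bit from src1; where it is 0, take it from src2.
             A sketch of the dataflow, not real SSE5 code. */
          #include <stdint.h>

          uint8_t pcmov_byte(uint8_t src1, uint8_t src2, uint8_t mask) {
              return (uint8_t)((src1 & mask) | (src2 & ~mask));
          }

      The vector version does the same thing across a whole 128-bit register in one instruction, which is what makes it handy for branch-free inner loops.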

      • by Arimus ( 198136 )
        Being thick (and out of coffee), how the hell can anything be infinitely precise? Or at least, if it can be infinitely precise, how do you go about checking it... it might take a while to prove it for all possible numbers (of which there are infinitely many, and for each one you would have to check an infinite number of decimal places).

        One of my pet peeves is statements like "infinitely precise" :)
        • A very quick Google search for "infinite precise" yielded this.

          What I think you meant was, "How can the infinitely precise number be stored and accessed by a computer?" Well, that's not the same thing.
        • by CryoPenguin ( 242131 ) on Friday August 31, 2007 @07:01AM (#20422979)

          Being thick (and out of coffee), how the hell can anything be infinitely precise?

          The result will still eventually be stored back into a floating-point number. What it means for an intermediate computation to be infinitely precise is just that it doesn't discard any information that wouldn't inherently be discarded by rounding the end result.
          When you multiply two finite numbers, the result has only as many bits as the combined inputs. So it's quite possible for a computer to keep all of those bits, then perform the addition at that full precision, and then chop the result back to 32 bits - as opposed to implementing the same operation with current instructions, which would be: multiply, (round), add, (round).
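          A small C demonstration of that single-rounding difference, using C99's fmaf() as a stand-in for a fused multiply-add instruction (the inputs are ones I picked so the product needs more than float's 24 mantissa bits):

              /* Separate vs. fused rounding. Compile with -ffp-contract=off
                 so the compiler doesn't fuse a*a + b on its own. */
              #include <math.h>
              #include <stdio.h>

              int main(void) {
                  float a = 1.0f + 0x1.0p-12f;   /* a*a = 1 + 2^-11 + 2^-24 */
                  float b = -1.0f;
                  float separate = a * a + b;    /* multiply rounds, then add rounds */
                  float fused = fmaf(a, a, b);   /* one rounding of the exact result */
                  printf("separate: %a\nfused:    %a\n", separate, fused);
                  return 0;
              }

          The separate version loses the 2^-24 term when the product is rounded to float; the fused version keeps it.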
        • Re: (Score:3, Informative)

          by gnasher719 ( 869701 )
          >> Being thick (and out of coffee), how the hell can anything be infinitely precise? Or at least, if it can be infinitely precise, how do you go about checking it... it might take a while to prove it for all possible numbers (of which there are infinitely many, and for each one you would have to check an infinite number of decimal places).

          I'll give you an example. Let's say we are working with four decimal digits instead of the 53 binary digits that standard double precision uses. Any op
        • Re: (Score:3, Informative)

          by arodland ( 127775 )
          The important word there is intermediate. You don't get a result of infinite precision, you get a 32-bit result (since the parent mentioned single-precision floating point). But it carries the right number of bits internally, and uses the right algorithms, so that the result is as if the processor did the multiply and add at infinite precision, and then rounded the result to the nearest 32-bit float. Which is better than the result you would get by multiplying two 32-bit floats into a 32-bit float, then add
          • by Arimus ( 198136 )
            It did actually make sense before my post.... it is just not infinitely precise in a pure sense of the word infinite. I was being somewhat hmmm... me - my (oh god, what's the word I'm looking for - kind of style) does not come across well sometimes in textual form.

    • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Friday August 31, 2007 @01:29AM (#20421449) Journal

      Read this interview at Dr. Dobb's [ddj.com]:

      A floating-point matrix multiply using the new SSE5 extensions is 30 percent faster than a similar algorithm

      I believe this helps gaming and other simulations.

      Discrete Cosine Transformations (DCT), which are a basic building block for encoders, get a 20 percent performance improvement

      And then we have the "holy shit" moment:

      For example, the Advanced Encryption Standard (AES) algorithm gets a factor of 5 performance improvement by using the new SSE5 extension

      If I get one of these CPUs, I'll almost certainly be encrypting my hard drives. It was already fast enough, but now...

      As for existing OS support, it looks promising:

      We're also working closely with the tool community to enable developer adoption -- PGI is on board, updates to the GCC compiler will be available this week, and AMD Code Analyst Performance Analyzer, AMD Performance Library, AMD Core Math Library and AMD SimNow (system emulator) are all updated with SSE5 support.

      So, if you're really curious, you can download SimNow and emulate an SSE5 CPU, try to boot your favorite OS... even though they say they're not planning to ship the silicon for another two years. Given that they say the GCC patches will be out in a week, I imagine two years is plenty of time to get everything rock solid on the software end.

      • by gnasher719 ( 869701 ) on Friday August 31, 2007 @08:48AM (#20423817)
        >> And then we have the "holy shit" moment:

        For example, the Advanced Encryption Standard (AES) algorithm gets a factor of 5 performance improvement by using the new SSE5 extension
        If I get one of these CPUs, I'll almost certainly be encrypting my hard drives. It was already fast enough, but now...

        They copied two important features from the PowerPC instruction set: Fused multiply-add (calculate +/- x*y +/- z in one instruction), and the Altivec vector permute instruction, which can among other things rearrange 16 bytes in an arbitrary way. The latter should be really nice for AES, because it does a lot of rearranging 4x4 byte matrices (if I remember correctly).

      • For example, the Advanced Encryption Standard (AES) algorithm gets a factor of 5 performance improvement by using the new SSE5 extension


        Any idea how this stacks up against VIA's PadLock?
      • I've just paged through the spec PDF, and I can't work out for the life of me how these instructions help you implement AES. In normal implementations, AES does sixteen byte-to-word table lookups per round, and these lookups take nearly all the time; they also open up a host of side-channel vulnerabilities. To avoid these lookups you have to have a way of doing the GF(2^8) arithmetic directly, and I can't see any way these instructions will help.

        Anyone got any guesses? Someone who understands Ma
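        For reference, the GF(2^8) primitive in question is multiplication by x (the byte {02}) modulo AES's polynomial x^8 + x^4 + x^3 + x + 1. A branch-free sketch, since the whole point of avoiding table lookups is keeping the timing independent of the data:

            #include <stdint.h>

            /* Multiply by {02} in AES's GF(2^8), reducing by 0x11B.
               No branch on the secret byte: the mask is all-ones or
               all-zeros depending on the top bit. */
            uint8_t xtime(uint8_t a) {
                uint8_t mask = (uint8_t)-(a >> 7);      /* 0xFF if bit 7 set */
                return (uint8_t)((a << 1) ^ (mask & 0x1B));
            }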
    • Context switching doesn't apply. There's no such thing as an SSE5 process. All non-privileged instructions on the CPU are available to the processes that run on it. The OS swaps out the full state of the CPU when switching context, so it swaps those SSE registers out as well. Therefore, the OS must know what registers to swap out, but since these instructions appear to work on the same ol' SSE/SSE2 registers, a relatively recent OS should have no problem supporting applications that use them.
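      For what it's worth, application code usually gates such paths on a CPUID feature bit and falls back otherwise; a sketch with GCC's <cpuid.h>, using the long-established SSE2 flag (leaf 1, EDX bit 26) as a stand-in, since I won't vouch for which bit SSE5 parts will advertise:

          #include <cpuid.h>
          #include <stdio.h>

          int main(void) {
              unsigned eax, ebx, ecx, edx;
              /* Leaf 1: standard feature flags; EDX bit 26 = SSE2. */
              if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (edx & (1u << 26)))
                  puts("SSE2 supported");
              else
                  puts("SSE2 not supported");
              return 0;
          }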
      • by Harik ( 4023 )
        That's what I was asking, thanks. I missed that it hadn't added any new SSE registers. Don't be so quick on the "no such thing as an SSE5 process" though - there IS such a thing as an FPU process, because of an ancient design decision from Intel that had the FPU as a coprocessor. That's stuck with us right up to 64-bit processors - and they still have to emulate it in 32-bit mode.
        • Could you clarify this? The only thing that I'm aware of is that as part of the MMX instruction set, if you use MM registers you need to clear them (EMMS instruction) before you can use the FPU.

    • This isn't adding new registers. It doesn't have the MMX defect. It's just more SSE stuff.
  • APL (Score:4, Funny)

    by Citizen of Earth ( 569446 ) on Friday August 31, 2007 @12:39AM (#20421155)

    instructions such as floating-point and integer fused multiply-add and permute

    So machine languages are APL-compatible these days.

    • Re: (Score:2, Interesting)

      by Ilyon ( 1150115 )
      I would say APL has always been compatible with the various vector/parallel machine languages. With the general but precise nature of APL expression, it should be easy to generically and efficiently parallelize/vectorize any APL interpreter for any machine architecture. Is there much activity in marketing of current APL products? It seems like IBM is doing nothing more than supporting existing customers. Jim Brown and company established SmartArrays, which caters a specific C APL library to specific cus
  • Can one of the cryptographers on slashdot comment on weather this is useful to them or not?

    (yes, I am paranoid... why do you ask? are you with the CIA?)
    • Re: (Score:2, Funny)

      by rts008 ( 812749 )
      "...weather this is useful to them..."

      The weather (www.weather.com) is dependent on where you live and what specific time frame you are inquiring about, subject to the meteorologist's report for that time frame and area.

      ??!!Whether???!! Hmmm... that's a whole different subject, but as I am with the CIA, why do you ask? Are you paranoid or something?

      ***Hmmm...jimXugle (921609)....posted....logging on server....LOGGED!

      What was your question? We are from the government, we can help, honest!
    • Actually it's the FBI you would need to be concerned about, as they gather information about US citizens, whereas the CIA gathers foreign intelligence.
      • Re: (Score:3, Insightful)

        by MrNaz ( 730548 )
        Great! I'm glad those two organizations have such a long and distinguished history of self-restraint when it comes to the borders of their mandated spheres of operation.
    • Re: (Score:3, Interesting)

      by gnasher719 ( 869701 )
      '' Can one of the cryptographers on slashdot comment on weather this is useful to them or not? ''

      One useful addition (copied from Altivec) is the vector permute instruction. What is clever about it in terms of cryptography is that you can translate a vector using a 256 byte translation table _without doing any memory access_ by using the vector permute instruction in a clever way. Now the execution time is completely data-independent, so one important attack vector is closed.
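      A scalar sketch of that trick (the function name and framing are mine): a 256-entry lookup built from sixteen 16-entry selects, the way a vector unit would do it with the whole table preloaded into sixteen 16-byte registers. The loop below only models the dataflow - in real vector code each step is a register-to-register permute plus a mask, with no data-dependent memory access at all:

          #include <stdint.h>

          /* 256-entry lookup from 16-entry "permutes". Every input byte
             walks all 16 chunks, so the work done never depends on the
             value being looked up. */
          uint8_t permute_lookup(const uint8_t table[256], uint8_t x) {
              uint8_t hi = x >> 4, lo = x & 0x0F;
              uint8_t result = 0;
              for (unsigned chunk = 0; chunk < 16; chunk++) {
                  uint8_t v = table[chunk * 16 + lo];              /* the "permute" */
                  uint8_t take = (uint8_t)-(uint8_t)(chunk == hi); /* 0xFF or 0x00 */
                  result |= v & take;
              }
              return result;
          }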
  • Can someone explain how a 64-bit processor can run 128-bit instructions, or what this actually means? Thanks
    • Separate registers:

      http://en.wikipedia.org/wiki/SIMD [wikipedia.org]
    • Re: (Score:3, Informative)

      by NeuralAbyss ( 12335 )
      The 64-bit designation refers to the width of the address bus*. For example, IA-32 processors have been able to handle 64-bit integers for ages... so a 64-bit address-capable processor handling 128-bit numbers is nothing new.

      * Yes, PAE was a slight deviation from a 32-bit address space, but in userspace it's a 32-bit flat memory model.
      • by GroovBird ( 209391 ) * on Friday August 31, 2007 @02:22AM (#20421691) Homepage Journal
        I believe the 64-bit designation refers to the width of the general purpose registers. This usually correlates to the address space used, but has nothing to do with the address bus. The 8086, for example, while being a 16-bit processor had a 20-bit address bus. The 8088 was a 16-bit processor, but only had an 8-bit data bus to save costs. Both were 16-bit processors, because the general purpose registers (AX, BX, CX, DX) were 16-bit.

        In the x64 world, the general purpose registers are 64-bit wide. This also used to influence the width of the 'int' datatype in the C compiler, although I'm not sure that 'int' is a 64-bit integer when compiling x64 code.
        • by Wyzard ( 110714 )

          That means my twelve-year-old HP48 calculator has a 64-bit processor, despite having a 4-bit bus and 20-bit addresses. :-)

        • I believe the 64-bit designation refers to the width of the general purpose registers. This usually correlates to the address space used, but has nothing to do with the address bus. The 8086, for example, while being a 16-bit processor had a 20-bit address bus. The 8088 was a 16-bit processor, but only had an 8-bit data bus to save costs.
          Are you implying that the Sega Genesis was 32-bit long before the 3DO and PlayStation?
          • Well, that certainly is a good question. I'm not trying to imply anything, but the Wikipedia article on the matter clearly states that the Motorola 68000 is a 16-bit architecture even though its general-purpose registers and basic arithmetic functions are 32-bit, simply because it has a 16-bit data bus.

            It also states here [wikipedia.org] that a 16-bit architecture is one with a 16-bit data bus, address bus or register size. Perhaps the Motorola 68000 was never advertised as a 32-bit machine, because that sort of marketing
            • the Wikipedia article on the matter clearly states that the Motorola 68000 is a 16-bit architecture even though its general-purpose registers and basic arithmetic functions are 32-bit, simply because it has a 16-bit data bus.

              It also states here [wikipedia.org] that a 16-bit architecture is one with a 16-bit data bus, address bus or register size.

              Wouldn't that make the Super NES an 8-bit system? Its 65C816 CPU had 16-bit registers and an 8-bit data bus. And was the Nintendo 64 an 8-bit system because it used 8-bit RDRAM at a comparatively high clock rate for the time [wikipedia.org]?

              Perhaps the Motorola 68000 was never advertised as a 32-bit machine, because that sort of marketing ploy was not exercised at the time?

              Believe me, bit counts were the marketing ploy of the time.

            • by be-fan ( 61476 )
              The 68k is for all intents and purposes a 32-bit machine. It had a 32-bit native word and 32-bit addresses.

              Current "64-bit" CPUs have 128 bit memory busses -- that doesn't make them 128-bit.
          • Re: (Score:2, Informative)

            by Jagetwo ( 1133103 )
            Motorola 6800x and 68010 are 16-bit designs; that is, 16-bit processors with a 32-bit register file. Whenever you used 32-bit operands on those CPUs, they were slower, because the CPU was really executing them in 16-bit parts. The bus was also 16 bits wide, but with 24 address lines. It was just a forward-thinking design hiding its 16-bitness.
          • by ravyne ( 858869 )
            As we've read here, bit designations have no real broadly accepted definition; it's more a matter of what marketing slaps on the chip.

            The 68000 is a chip capable of performing 32-bit arithmetic, but only able to load 16 bits at a time; therefore, it was most efficient to rely on 16-bit values when possible (even though the extra 16 bits allowed you to do some neat tricks). Later revisions of the 68000 exposed the entire 32-bit data bus without changing the general architecture of the core. Those are clearly
          • No, you're getting confused by blast processing.
        • This also used to influence the width of the 'int' datatype in the C compiler, although I'm not sure that 'int' is a 64-bit integer when compiling x64 code.
          With GCC on a 64-bit target, an int is 4 bytes (32 bits) and a long is 8 bytes (64 bits).
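          Easy to check - on an LP64 platform (e.g. 64-bit Linux with GCC) this prints 4 and 8; note that 64-bit Windows is LLP64 and keeps long at 4 bytes:

              #include <stdio.h>

              int main(void) {
                  printf("int:  %zu bytes\n", sizeof(int));   /* 4 on LP64 */
                  printf("long: %zu bytes\n", sizeof(long));  /* 8 on LP64 */
                  return 0;
              }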
      • by forkazoo ( 138186 ) <wrosecrans AT gmail DOT com> on Friday August 31, 2007 @04:27AM (#20422277) Homepage

        The 64-bit designation refers to the width of the address bus*. For example, IA-32 processors have been able to handle 64-bit integers for ages... so a 64-bit address-capable processor handling 128-bit numbers is nothing new.


        Technically, the "bit designation" of a platform is defined as the largest number on the spec sheet which marketing is convinced customers will accept as truthful. Seriously, over the years different processors and systems have been "16-bit" or "32-bit" for any number of odd and wacky reasons. For example, the Atari Jaguar was widely touted as a 64-bit platform, and the control processor was a Motorola 68000. The Sega Genesis also had a 68k in it, and was a 16-bit platform. The thing is, Atari's marketing folks decided that since the graphics processor worked in 64-bit chunks, they could sell the system as a 64-bit platform. C'est la vie. It's an issue that doesn't just crop up in video game consoles -- I just find the Jaguar a particularly amusing example.

        But yeah, having a CPU sold as one "bitness" and being able to work with a larger data size than that bitness is not unusual. The physical address bus width is indeed one common designator of bitness, just as you say. Another is the internal single-address width, or the total segmented address width. Also, the size of a GPR is popular. On many platforms, some or all of those are the same number, which simplifies things.

        An Athlon64, for example, has 64-bit GPRs and in theory a 64-bit address space, but it actually only cares about 48 bits of address space, and only 40 of those bits can actually be addressed by current implementations.

        A 32-bit Intel Xeon has 32-bit GPRs, but an 80-bit floating-point unit, the ability to do 128-bit SSE computations, 32-bit individual addresses, and IIRC a 36-bit segmented physical address space. But Intel's marketing knew that customers wouldn't believe it if they called it anything but 32-bit, since it could only address 32 bits in a single chunk. (And they didn't want it to compete with IA64!)
        • Tom, Jerry, and IOP (Score:3, Informative)

          by tepples ( 727027 )

          For example, the Atari Jaguar was widely touted as a 64-bit platform, and the control processor was a Motorola 68000.

          The Jaguar had a 64-bit data bus, a 32-bit CPU "Tom" connected to the GPU, a 32-bit CPU "Jerry" connected to the sound chip, and a 32-bit MC68000 with a 16-bit connection to the data bus, used as an I/O processor (in much the same way that the PS2 uses the PS1 CPU). Some games ran their game logic on "Tom"; others (presumably those developed by programmers hired away from Genesis or Neo-Geo shops) ran it on the IOP. Pretty much only graphics operations ever used the full width of the data bus.

      • '' The 64-bit designation refers to the width of the address bus*. ''

        Please show us any example of a processor with a 64-bit address bus. I don't think there are any in existence.

        What you mean is the width of logical addresses, which is something completely different.
    • There are different types of registers on any modern CPU - general-purpose registers, floating-point registers and SIMD (Single Instruction, Multiple Data) registers, to name a few. On a 64-bit CPU the first two types are 64-bit registers, while the SSE SIMD registers are 128-bit.

      Here [umd.edu] is a brief description of what SIMD is and what it can be used for:

      Single Instruction, Multiple Data (SIMD) processors are also known as short vector processors. They enable a single instruction to process multiple pieces o
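      A minimal illustration with SSE intrinsics (real, long-shipping ones, not SSE5): a 128-bit XMM register holds four 32-bit floats, and a single ADDPS adds all four lanes at once:

          #include <stdio.h>
          #include <xmmintrin.h>

          int main(void) {
              __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);     /* highest lane first */
              __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);
              __m128 c = _mm_add_ps(a, b);                       /* {11, 22, 33, 44} */
              float out[4];
              _mm_storeu_ps(out, c);
              printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
              return 0;
          }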

  • by WoTG ( 610710 ) on Friday August 31, 2007 @01:44AM (#20421533) Homepage Journal
    I'm not really qualified to have an opinion on this, but my guess is that these instructions will prove increasingly useful as AMD integrates the GPU and CPU. To me, it looks like they plan to make accessing what was traditionally part of the GPU a simple process (relative to accessing a GPU directly through their own pseudo-CPU APIs).

    It'll take a couple years for "SSE5" to show up in AMD chips... which happens to coincide nicely with their Fusion (combined CPU+GPU) product line plans.

    Will Intel pick up on these instructions? Maybe not. Does that mean they die? No, the performance benefits for those areas where this will make the most difference will make it worthwhile. At the very least, AMD can sponsor patches to the most popular bits of OSS to earn a few PR points (and benchmark points).
    • Not sure I'm following how these denser/more efficient instructions would result in better access to a GPU. Certainly the applications of such instructions (specifically matrices) are something that GPU's handle well, but how would this improve CPU/GPU collaboration? If anything it just gives the CPU jobs which would have had to be hacked into a GPU...
      • Re: (Score:3, Interesting)

        by WoTG ( 610710 )
        My thought was that the long-term plan is to integrate the GPU anyway (for one product line at least). While the GPU is RIGHT THERE, they will find a way to use as much of it as they can when it's not busy with 3D work... which for the average office environment is 95% of the time.

        Gamers can still buy addon graphics cards, of course.
    • If they take anything close to the same attitude with their GPUs as they just did with their new CPU instruction set, that would mean we'd finally have a reasonably fast GPU with a completely open software stack.

      As it is, ATI/AMD is maybe less proprietary than nVidia, but their Linux support sucks. Intel, however, typically has very good support, even though the drivers are entirely open and apparently not sponsored much by Intel itself.
    • '' I'm not really qualified to have an opinion on this, but my guess is that these instructions will prove increasingly useful as AMD integrates the GPU and CPU. To me, it looks like they plan to make accessing what was traditionally part of the GPU a simple process (relative to accessing a GPU directly through their own pseudo-CPU APIs). ''

      I can't see that at all. Mostly they have been copying stuff that was present on PowerPC CPUs for ages, filled some obvious gaps in the SSE instruction set, and added s
  • But what I've been looking for, and am amazed that I can't seem to find, is a complete collection of all of the instructions for a current (or any recent) AMD processor. Yeah, there are lots of documents that break out a small specialized subset of the instruction set, like this PDF. But without a full instruction-set reference it doesn't do me much good. One would think that important information like this would be easy to lay one's hands on, particularly in the information age and when the information in qu
  • by renoX ( 11677 ) on Friday August 31, 2007 @04:45AM (#20422349)
    For 'serious' scientific computing, people use 64-bit FP numbers, and vectors of 4 elements seem the right size, so SIMD computations of 4*64 = 256 bits seem the 'right size' for these users.

    Sure, multimedia & games use lower-precision FP computations, so 16-bit or 32-bit FP numbers are enough, but it's strange that AMD doesn't try to improve things for the scientific-computing niche.

    Maybe it's because the change would be expensive: to be efficient, the memory bus would have to be widened from today's 128 bits to 256.

    • You're talking about a very tiny niche of users. The money is in selling consumer and gaming products. Adding all those transistors and bus lines to satisfy a small minority of users doesn't make much business sense. AMD wants to grab the overall performance crown back, not be further pigeonholed into niche markets.
    • by LWATCDR ( 28044 )
      That level of performance will probably be restricted to the new GPU-based accelerator cards coming from nVidia and ATI/AMD. You may see it come to mainstream CPUs when Intel and AMD merge the CPU and GPU. Since those will probably be used in notebooks first, you should see them in blades pretty quickly as well. What else would you use a GPU core on a blade for but math?
  • ...and 3DNow! was AMD's. Doesn't seem right for AMD to be introducing an SSE variant.
    • You're thinking of MMX.
    • by ravyne ( 858869 )
      Intel and AMD have a cross-licensing agreement that was reached as part of a settlement (anti-trust against Intel I believe) to promote cross-compatibility. Basically, the instructions are up for grabs even though each company's implementations are kept secret. One will introduce an enhancement, then the other will integrate it into their core when they can.

      MMX/3DNow! are the early SIMD instructions which used FPU resources to reduce cost and maintain drop-in compatibility with Operating systems (The OS s
  • For those who actually understand real molecular nanotechnology, aka "Drexlerian" nanotechnology: one of the real "breakthroughs" comes when you can computationally simulate the function of a 4-to-8-million-atom molecular nanoassembler. Because if you can simulate one and prove that it does not violate any laws of physics, then one of the classical objections to real molecular nanotechnology falls [1]. The argument transitions entirely from "it can't work" (common among people or

    • Wouldn't a theoretical quantum computer be more helpful, since you can evaluate many bit combinations simultaneously?
      • by bradbury ( 33372 )
        Perhaps. While molecular dynamics simulations are inherently "quantum", I have yet to see a paper which proposes how to solve the equations using a quantum computer. Perhaps it's a chicken-and-egg situation: after multi-qubit computers are common, one may see attempts at having them perform molecular dynamics simulations. Until then, the equations for molecular simulations are reasonably well defined (electrostatic interactions between nuclei surrounded by electron clouds in motion). A non-trivial
    • I think you may already have what you need for the simulation of such a device. Folding@home has been pumping out protein-folding simulations for years -- and especially now that we have GPGPU, I would imagine the simulation wouldn't be too difficult.

      As for designing the system that you want to simulate; the thing with microprocessors is that they're very modular. You can create a register, use it 256 or however many times, and there's your cache. Then you build the part that interfaces the rest of the CPU with that
