PowerPC Assembly Language

Josh Aas writes: "I've been looking for a way to learn PowerPC assembly language for a while now. My search for books only led to extremely out-of-date publications, and the whole ordeal was generally frustrating. I was amazed at the lack of documentation. Even Motorola's and IBM's documentation resources (on the web) lacked anything of use to me. However, it turns out that Apple provides a pretty good free tutorial on the subject. It's tailored for coding in Mac OS X, but I imagine it would be just as useful in any PowerPC environment. For some reason it also includes instructions for the Intel architecture. Perhaps this has to do with the fact that Darwin runs on x86 as well."
  • by buserror ( 115301 ) on Monday November 19, 2001 @11:47AM (#2584835)
    I agree that the Moto doc is quite useful; having a reference for all the opcodes proves indispensable as soon as you need to *write* some code.

    If you just want a few lines of assembly, write a small C function that does roughly what you want, disassemble it, and 'adopt' the generated assembly into your own piece (see the sketch at the end of this comment).

    One very useful notation in CodeWarrior is that you can specify the register to use for parameters, which helps a lot when doing those things:
    long blah(long p1 : __r10, long p2 : __r11) : __r4 { ... }

    For patching stuff, I've found CodeWarrior assembly functions extremely useful too, as well as for getting the hex values of opcodes from the disassembled output; it saves a trip to the reference book.
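
    A minimal sketch of the write-and-disassemble workflow (assuming GCC's cc driver on Mac OS X; the exact output depends on your compiler and optimization level):

        /* add3.c -- a tiny C function to crib PowerPC assembly from */
        long add3(long a, long b, long c)
        {
            return a + b + c;
        }

        /* Disassemble it with something like:
         *     cc -O2 -S add3.c      (writes add3.s)
         * On Darwin the arguments arrive in r3, r4, r5 and the result is
         * returned in r3, so the body comes out roughly as:
         *     _add3:
         *         add r3,r3,r4
         *         add r3,r3,r5
         *         blr
         */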
  • Re:x86 instructions (Score:2, Interesting)

    by karlm ( 158591 ) on Monday November 19, 2001 @07:32PM (#2587069) Homepage
    You would have to re-engineer the instruction set in order to add more registers. You would either break backward compatibility or make a totally new set of really long instructions that use the new registers. Neither solution has any advantage if you're still going to call it x86.

    It's just a really kludgy instruction set. Floating point was added after the fact. IIRC, the widening multiply always takes one operand in EAX and dumps its result into EDX:EAX, or some such. This means you end up doing lots of needless loads and stores if, for instance, you're dealing with even a small system of polynomial equations (see the sketch at the end of this comment).


    Modern x86 CPUs do "register aliasing" (renaming) to try to overcome some of this register thrashing and to allow more parallelism, but the renaming circuitry takes up real estate.


    Allowing self-modifying code is also a pain in the ass (the PPC doesn't let you do it transparently; you have to flush the caches explicitly) if you're going to try to run the instruction set non-natively (such as on the P4 core). Self-modifying code will thrash the P4's trace cache. Memory pages have three permissions (read, write, execute) just like Unix files. However, several of the eight combinations can't actually be expressed. This causes some minor security problems. (I forget exactly which modes cannot be set. You can argue, as the x86 architects did, that some of them are useless, but there are non-obvious situations where you might want some of the stranger combinations.)


    People also complain that the x86 has too many addressing modes. IMHO, they should design instruction sets to be simple and allow for simple emulation. They know they'll be emulating it on a more advanced core later down the line, or shipping with an emulator in the BIOS or some such, so they might as well make their lives easier. IBM had the right idea with OS/390. Basically, they designed the hardware and low-level system such that the user never saw any code running on bare metal. The "OS" the user saw was running in something like VMware. Linux for S/390 typically runs in a partition (basically a virtual machine) rather than directly on the bare CPU. This has lots of advantages for security, stability, modularity, isolation, and maintainability, as well as ease of migration to the next generation.



    There are reasons Intel is migrating away from x86 on its high-end CPUs. Don't get me wrong, I'm a big AMD fan, but I think Sledgehammer is going in the wrong direction. I would much prefer that AMD made a PPC, SPARC, ARM, or MIPS instruction set CPU with on-die x86 emulation. We've been doing x86 emulation on a RISC core since the Pentium Pro. It wouldn't take much real estate to expose the RISC or VLIW instruction set as well.
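
    To make the register-pressure argument concrete, here is a hedged sketch in C (poly4 is a made-up example; the remarks about what a compiler is likely to emit are assumptions, not measured output):

        /* poly4.c -- evaluate a degree-4 polynomial by Horner's rule.
         * x86 gives the compiler 8 general-purpose registers (fewer once
         * ESP/EBP are spoken for), so with several coefficients and partial
         * results live at once it will likely spill to the stack and reload;
         * PowerPC's 32 GPRs can usually hold the whole working set.
         */
        long poly4(long x, const long c[5])
        {
            return (((c[4] * x + c[3]) * x + c[2]) * x + c[1]) * x + c[0];
        }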

  • More simply (Score:2, Interesting)

    by karlm ( 158591 ) on Monday November 19, 2001 @09:13PM (#2587369) Homepage
    More simply, instruction sets are designed around a set of assumptions. The x86 instruction set was based on assumptions that were true 20 years ago: memory is expensive and compilers don't optimize well, so it's got lots of nifty little optimized instructions for doing common (and not-so-common) tasks and uses variable-length instructions to make the code more compact.


    RISC instruction sets assume you don't mind slightly larger code segments if it means most of your instructions execute in one clock cycle and you can raise your clock rate (this is why the Pentium family uses RISC-style cores). They also assume you don't want to pay in transistors (and power and heat) for rarely used instructions, and that your compilers are good enough that you don't pay too high a penalty for losing those scan opcodes, having to work with only a couple of addressing modes, and so on (see the sketch at the end of this comment).


    VLIW takes this one step further: you don't mind increasing your code size, because it means your compiler has lots of time (comparatively) to figure out how to issue multiple operations simultaneously, maximizing performance per transistor by explicitly packing each issue slot into the very long instruction word. This assumes you have very smart compilers. Unfortunately, it also means your instructions are big. If clock multipliers continue to climb, you may see VLIW instruction sets with decompressors on the die to increase the apparent memory bandwidth.


    Note that this progression in instruction sets progressively moves certain aspects of CPU complexity out of the CPU core and into the compiler. This allows the limited die real estate to be used for more ALUs, more cache, smarter branch prediction, "SMP on a chip", etc., while continuing to allow the price of CPUs to fall.


    You can do on-chip emulation of the legacy instruction sets (like on the P4 and even the Itanium), but it's baggage you have to drag around even when running your native code. Software emulation of the legacy hardware seems more appropriate. If we all switched to some VLIW CPUs now, then in another 3 years (being pessimistic) Bochs (or another emulator) running on those CPUs would run the legacy code as fast as today's machines do, and native code would run much faster.
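
    As a rough illustration of these trade-offs, here is one C statement with schematic (not compiler-generated) assembly for each philosophy in the comments:

        /* tradeoff.c -- one C statement, three instruction-set philosophies.
         * The assembly in the comments is schematic, not real compiler output.
         */
        void accumulate(long *total, const long *item)
        {
            *total += *item;
        }

        /* CISC (x86): memory operands fold into compact, variable-length
         * instructions, at the cost of a complex decoder:
         *     mov eax, [item]      ; load *item
         *     add [total], eax     ; *total += *item
         *
         * RISC (PPC): explicit load/store and slightly larger code, but every
         * instruction is simple and fixed-length:
         *     lwz r5, 0(r3)        ; *total
         *     lwz r6, 0(r4)        ; *item
         *     add r5, r5, r6
         *     stw r5, 0(r3)
         *
         * VLIW: the compiler itself bundles independent operations into one
         * very long instruction word, trading code size and compiler effort
         * for less issue logic on the die.
         */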
