Effect of Using 64-bit Pointers?

An anonymous reader queries: "Most 64-bit processors provide a 32-bit mode for compatibility, but 64-bit pointers are becoming essential as systems move beyond 4GB of RAM. Also, the large virtual address space is very useful for several reasons - allowing large files to be memory-mapped, and allowing pages of memory to be remapped without ever requiring the virtual address space to be defragmented. However, 64-bit pointers take up twice as much memory, which immediately affects memory footprint. This is especially an issue on embedded platforms where RAM is at a premium, but even on systems where RAM is plentiful and cheap the extra memory footprint reduces cache performance. Have Slashdot readers done any research into the actual effect of using 64-bit pointers in a 'typical' application? What proportion of a real program's data is actually pointers?"
This discussion has been archived. No new comments can be posted.

  • easy... (Score:5, Funny)

    by edrugtrader ( 442064 ) on Wednesday January 21, 2004 @09:34PM (#8050852) Homepage
    Have Slashdot readers done any research into the actual effect of using 64-bit pointers in a 'typical' application?

    none whatsoever.

    What proportion of a real program's data is actually pointers?

    none whatsoever.

    oh... i use java.
    • Re:easy... (Score:5, Funny)

      by El ( 94934 ) on Wednesday January 21, 2004 @09:41PM (#8050897)
      Does Java handle datasets larger than 4GBytes, or does it run so slowly that nobody has been able to find out whether or not it handles them? In the underlying implementation, isn't EVERY object actually a pointer?
      • Re:easy... (Score:5, Insightful)

        by gl4ss ( 559668 ) on Wednesday January 21, 2004 @09:53PM (#8050995) Homepage Journal
every object (which is everything apart from your basic int & etc) is a reference, which is pretty much a pointer with a fancy name. as to handling 4gbytes I really don't see why it couldn't, it's just a matter of the vm supporting it anyways (afaik neither the design nor the bytecode limits it).

        however, can you think of any system where you had objects, sets of data, and they weren't (at least underneath) pointers to memory?

and as to the original subject one poster already said it best: if you really have the need for that extra effort of going to 64bit pointers you will probably have the memory to spare, no? anyways it will only be a problem if the pointers are big enough in comparison to what they're pointing to.. in which case you should probably rethink what you're doing anyways if you care a squat about memory footprint.. bringing embedded devices into the discussion at this point is totally pointless but of course cool sounding and slashdot editor catchy.

        bleh I'm no expert anyways.
I believe the parent referred to heap memory rather than pointers - and in this they are most likely correct.

          The second part I agree with..

          Etta nain.
        • every object is a reference

          No, objects and references are completely different things.

          Objects can vary in size, always live on the heap, and are always instances of a concrete (non-abstract) class.

          References always have the same size, can live on the heap or on the stack, and can have any type (class or interface).

References are the things that point to objects. Every time you deal with an object you do it by way of a reference to the object. But that doesn't mean that objects and references are the same thing.

        • as to handling 4gbytes I really don't see why it couldn't, it's just a matter of the vm supporting it anyways(afaik the design, nor the bytecode, limits it).

Strings are necessarily at most 4GB in length. This is part of the definition of the language. Therefore there are at least *some* objects which are limited to a 4GB size.

Also integers are 32bits exactly in Java, so all arrays are necessarily limited to having (about) 4 billion entries. Though, of course, each entry may be more than one byte in size.

Also integers are 32bits exactly in Java, so all arrays are necessarily limited to having (about) 4 billion entries. Though, of course, each entry may be more than one byte in size.

            Are you sure the total (byte-)size of an array can exceed 4Gb? As I recall, both the reference and returnAddress types are Category 1 (32-bit) in the VM specification, which implies 4Gb is the maximum size of both data and bytecodes.

      • Re:easy... (Score:5, Informative)

        by jstarr ( 164989 ) * on Thursday January 22, 2004 @02:43AM (#8052387)
Java does not care about memory limits, the JVM does. The stock Sun JVM for x86 machines will address a maximum of 3-4 GiB (dependent on operating system). However, the IBM JVM on an AIX machine has no practical limit and can easily access >16 GiB memory, if available. If a JVM is so designed, there is no reason a Java program cannot access as much memory as a program written in C.

I run very large simulations on various platforms, and some of my simulations have to be run on a 64-bit machine because of the memory requirements. Sun's Java forums have several posts asking about the maximum heap (maximum memory accessible) on various platforms, and you can find more exact numbers for specific platforms and operating systems there.

        An object is an object, not a pointer. However, objects are accessed through a reference, which in implementation, is typically a pointer.
    • Re:easy... (Score:5, Insightful)

      by Waffle Iron ( 339739 ) on Wednesday January 21, 2004 @11:48PM (#8051847)
      oh... i use java.

      If you'd pay proper attention to Sun's marketing machine, you'd remember that Java uses a just-in-time compiler. What does a compiler do? It turns all of your "object-oriented is the only valid programming paradigm" source code into a big bucket of CPU-specific opcodes, numbers and *pointers*.

      In fact, it will probably have more pointers than the corresponding C or C++ program would have, due to the plethora of tiny objects you're encouraged to spawn. Naturally, the pointer size would match the CPU architecture on which the program is being run and would consume a corresponding number of cache bytes.

    • Re:easy... (Score:3, Informative)

      by Yuioup ( 452151 )
      That's not true!!

      When you create an object in Java, you are, in a sense, creating a pointer. As a matter of fact it's easy to make a linked list or a binary tree with Java, the same way you do in C. Just because it's not explicitly called a pointer doesn't mean it isn't used.

      Ever heard of a NullPointerException [sun.com]?

      "Java doesn't have pointers" is a hype phrase still left over from the Dot Bomb era...
I hate to break it to you, but in Java, everything except primitives is accessed via a pointer. Sometimes even via a double pointer (pointer-to-pointer), depending on the VM.
  • Embedded 64-Bit (Score:4, Insightful)

    by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Wednesday January 21, 2004 @09:37PM (#8050866) Homepage
    If you were going to build something that used embedded 64 bit processing, why would you choose a processor with a 64 bit address space? If you need that much address space, then chances are you can handle the extra RAM needed by the pointers, right?

    Is this really a problem in the embedded space?

More to the point, if you NEED lots of addressing space, then you will HAVE lots of space to store it, no?
    • Re:Embedded 64-Bit (Score:5, Insightful)

      by PenguinOpus ( 556138 ) on Wednesday January 21, 2004 @10:00PM (#8051062)
      You missed a good point in the original question. Even if you have tons of RAM, cache size is not growing as quickly and you will thrash your data cache far more quickly if all your pointers double in size. I don't know if immediate mode addressing instructions are common for 64bit operands but if they are, it could thrash your icache sooner as well.

      Bandwidth from memory to cache will also be used by these larger pointers.

      OTOH, other than disk controller caches (?), what kind of embedded systems need more than 4GB online simultaneously ?
      • Re:Embedded 64-Bit (Score:5, Informative)

        by Pseudonym ( 62607 ) on Thursday January 22, 2004 @12:23AM (#8052075)
        OTOH, other than disk controller caches (?), what kind of embedded systems need more than 4GB online simultaneously?

        There's a lot of modern medical equipment which can definitely use the 4GB. MRI machines, CT scanners, ultrasound machines ("sonographs" if you prefer the term) and so on do tend to chew up memory. Particularly the first two, because you often need to hold whole voxel sets in memory while you compute a bunch of cross-sections at odd angles.

      • Re:Embedded 64-Bit (Score:2, Interesting)

        by Scherf ( 609224 )
        OTOH, other than disk controller caches (?), what kind of embedded systems need more than 4GB online simultaneously ?

Some CAD programs used in Mechanical Engineering (CATIA V5 for example) could use that much. Loading a whole car engine into one of these programs will exceed 4 GB pretty quickly.
      • Re:Embedded 64-Bit (Score:4, Interesting)

        by j3110 ( 193209 ) <samterrell&gmail,com> on Thursday January 22, 2004 @01:49PM (#8056831) Homepage
I think the cache argument is complete BS. It appears it would be true, but not really. Most pointers are created and controlled by the compiler, and they are going to be relative 90% of the time. That's why relative addressing was invented. So, you get an extra 4 bytes stuffed into your cache on the relatively rare occurrence that one of your 8 byte pointers is being used. In this 8 byte pointer, I'm just going to assume that you aren't idiot enough to be accessing memory in the same page most of the time. I really think that the page switching of the RAM to access the data at the other end of the pointer is going to be the greatest overhead. Normal cache misses in a 64bit addressing scheme should be exponentially more than in a 32bit one, if you really needed the 64bit.

        So while you may have a caching problem, I think it's going to be because of accessing more data rather than the 4 bytes extra on some pointers.

        Now if you're using disk based data structures, you better be using 64bit. I could make an exception if you used a 32bit number to address the cluster, then a 16bit number to access the actual data in the cluster, if required. A good DB server would do well to use 32bit cluster numbers to save index size, then scan the loaded cluster for the record. AFAIK, no one has been clever enough to do this, but I'm not privy to the internal structures of a lot of DBMSs. And this would matter a lot, because you could fit much more of the index into memory, and have much less data to read on the drive. Throwing away CPU cycles and memory for more compact disk data is a common practice.
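The cluster-plus-offset addressing the parent sketches is easy to write down in C; the names and the 32/16 bit split here are purely illustrative, not any real DBMS's on-disk format:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical on-disk record address: a 32-bit cluster number plus a
 * 16-bit offset within the cluster, packed into the low 48 bits of a
 * word instead of storing a full 64-bit byte address in the index. */
typedef uint64_t rec_addr_t;

static rec_addr_t rec_addr_make(uint32_t cluster, uint16_t offset) {
    return ((uint64_t)cluster << 16) | offset;
}

static uint32_t rec_addr_cluster(rec_addr_t a) { return (uint32_t)(a >> 16); }
static uint16_t rec_addr_offset(rec_addr_t a)  { return (uint16_t)(a & 0xFFFF); }
```

An index that only needs to locate the cluster can store just the 32-bit cluster number and scan the loaded cluster for the record, trading CPU cycles for a smaller index, as the comment suggests.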
    • Re:Embedded 64-Bit (Score:5, Insightful)

      by Grab ( 126025 ) on Thursday January 22, 2004 @05:52AM (#8053078) Homepage
      I'd say this isn't a problem, will never be a problem, and the person who posted that initial question really doesn't know shit about embedded.

      Embedded devices come in all sorts of varieties from 4-bit to 64-bit, and will do for the foreseeable future. When you're producing X million chips, the software is amortised to basically nothing and the hardware cost becomes the primary concern, so there is no chance that lower-spec chips will ever go away in the future.

      So you're not going to be forced to use a 64-bit chip in your design, just because the chip company has stopped selling the lower spec ones. In the PC business this does happen, because there's no demand for older, lower-spec chips. In the embedded market though, the demand is there and will continue to be there, so the situation has not and will not arise.

      If your target application needs 64-bit processing, you choose a device that does 64-bit processing, and you choose RAM size to suit. If you don't need it, you don't choose it. Simple.

      Someone elsewhere had some questions about internal registers/internal RAM. Well as with all processors, some give you enough registers and some don't. Again, the engineer just has to pick the processor that gives the capabilities they want.

      Grab.
  • by El ( 94934 ) on Wednesday January 21, 2004 @09:38PM (#8050872)
    How many embedded devices are running 64-bit processors now? Offhand, I'd say this is only a problem if you have an embedded device with more than 4 GBytes of memory... in other words, it hardly sounds like a real-world problem for embedded devices. Yes, workstations and servers with 64-bit processors should probably be using 64-bit pointers.
    • In fact, a very small portion of embedded devices are even 16bit, and I can't think of any that are 32bit... What's the point of a 64bit embedded device? RAM is at a premium but you still need >4GB of it?

      Or maybe what I'm used to calling an "embedded" device isn't the same as the submitter's...
      =Smidge=
      • Personally, both types of embedded device I've worked on have been 32 bit. The first was a database engine (think network attached storage^H^H^H^H^H^H^Hdatabase of several Terabyte dataset size), and the second is set top boxes for digital tv. In the case of the first I can immediately see the need for 64bit arithmetic AND addressing. In the case of the set top box I think 32 bits will be fine for a while yet; there is pressure for faster processors, but not for 64 bit arithmetic.
        • Two of the three projects using embedded devices that I wrote the code for used 4 bit processors. The third used an 8 bit processor. There are still many millions of 4 and 8-bit processors being designed into products. You can't even order a mask for said parts (the vendor won't even answer the phone, or often times even provide the emulator and tools) if you're not talking 500K+ quantities.
      • I'm programming on a 32 bit embedded processor right now.

        Of course, we won't be going to a 64 bit chip in the near future, if ever.
    • by MerlynEmrys67 ( 583469 ) on Wednesday January 21, 2004 @09:54PM (#8051002)
      Worked on a Xeon based embedded platform that could have 16 GB of Ram on the system board... You forgot that Intel provides a segmented architecture didn't you ?

      By the way, the limit was from physical slots - 8 and a 2GByte DIMM memory limit, increase either of those and guess what.

      Now each "process" on our box could only address 4 Gbyte of that memory, but that was a completely different question (and in fact limited by the libraries that were used - again a different story)

      I remember these conversations when the 32 bit world came around - what do you mean I have to put 4 bytes into the processor. End result is that the code is a little larger, and a little slower - and Moore's law marches on and we don't even notice

      • by addaon ( 41825 ) <(addaon+slashdot) (at) (gmail.com)> on Wednesday January 21, 2004 @10:48PM (#8051441)
        Many 32 bit platforms, including x86, PowerPC, etc, support 64GB of ram... but only 4GB of address space. Most people want more than 4GB of address space, but don't yet care about more than 4GB of ram.
        • maybe I'm retarded... probably, as a matter of fact. Why would a person want more address space than usable memory?
          • by bobthemonkey13 ( 215219 ) <keegan@[ ]67.org ['xor' in gap]> on Thursday January 22, 2004 @01:17AM (#8052259) Homepage Journal
            Actually, it can go either way:
            • More address space than physical RAM: Swap space, memory-mapped files, shared memory/IPC, or any other use of virtual memory that doesn't map onto physical memory. This is why 64-bit address space is good even for desktop machines that have less than 4GB of RAM.
            • More physical RAM than address space: Ten processes, each using a single 4GB memory space, can consume 40GB of physical RAM. This is how and why you can put more than 4GB of memory in an x86 machine -- the processor maps from (I believe) 36-bit physical addresses to the 32-bit addresses that processes see.
          • by epine ( 68316 ) on Thursday January 22, 2004 @01:25AM (#8052273)
            I hate to confirm your self diagnosis, but I have sad news to bear.

            If you wish to use memory mapped IO to your file system, which has some good technical properties, you need a pointer with an address range *at least* as large as the largest possible file you might need to access, and preferably as large as the largest file system you intend to mount.

Addressability and physical storage are somewhat orthogonal. (In theory, there is no difference between theory and practice; in practice, there is.)

On a machine with 10G of memory, there is no reason for a process to use 64-bit pointers if the process doesn't require more than 32 bits of addressability. If you look at Apache in the standard threading model, every request is managed by a different process. I doubt you need 64-bit pointers for *each* PHP instance, regardless of how much physical memory the machine contains.

On the other hand, you might be doing some kind of video stream manipulation on a 10GB file using a machine with only 1GB of physical RAM. You would require the use of 64-bit addressability for this task if you choose the memory mapped IO model.

            So yes, you are retarded, but it could be cured by thinking before you type (the post does mention memory mapped IO). There: ten simple words of advice that should apply to 2^33 members of the slashdot community.
The HURD code to access disks uses mmap() calls, so is currently limited on 32 bit architecture to 2GB disks. Every partition has to be less than 2GB, which is a pain in the ass for today's >100GB drives.
    • How many embedded devices are running 64-bit processors now?

I believe that the 64-bit capable MIPS architecture found its biggest success in the embedded processor market. From the wikipedia entry [wikipedia.org]:

      In recent years most of technology used in the various MIPS generations has been offered as building-blocks for embedded processor designs. Both 32-bit and 64-bit basic cores are offered, known as the 4K and 5K respectively, and the design itself can be licenced as MIPS32 and MIPS64. These cores can be mixed

    • Are video game consoles considered "embedded" devices? Seems to me they share many of the same characteristics. (Judging by a quick google search [google.com] they are at least often described as embedded.) Several of those are 64-bit or more.... Jaguar, N64, PS/2, Dreamcast, etc.
  • by Green Light ( 32766 ) on Wednesday January 21, 2004 @09:38PM (#8050878) Journal
    However, 64-bit pointers take up twice as much memory, which immediately affects memory footprint. This is especially an issue on embedded platforms where RAM is at a premium

    Huh? On systems where RAM is at a premium, I don't see the point of using or having 64-bit pointers.
    • by Pseudonym ( 62607 ) on Wednesday January 21, 2004 @11:22PM (#8051669)

      The poster named one point: mapping large files.

      Using mmap() for certain kinds of I/O is very, very useful in performance-sensitive applications. Using POSIX I/O (i.e. read(), write() and its relatives) means that your data must go through memory twice: once from disk into the buffer/page cache and then once again into userland. Memory-mapped I/O effectively unifies the two, saving on precious memory and memory bandwidth.

      • If a read() uses a page-aligned buffer, from a page-aligned source, then why wouldn't the OS map a page directly into the application space? (Assuming that the area had not been mmap'd shared). The same optimization can be made on write() calls.

The same considerations need to be applied to mmap a file, so there should be no difference.

        In other words, read() and write() with page-alignment constraints should be the same as mmap. The difference is that re-using the same buffer may require an unmap.

        With
        • If a read() uses a page-aligned buffer, from a page-aligned source, then why wouldn't the OS map a page directly into the application space? (Assuming that the area had not been mmap'd shared). The same optimization can be made on write() calls.

          Because the app. doesn't share the data with the OS so if the app. alters the data the OS needs to have setup COW so the data it sees is the same. And it is very rare for applications to use page aligned buffers to read or write, it is also very common to chang
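For reference, a minimal C sketch of the memory-mapped read discussed in this subthread (the file path is a throwaway example); the mapped pages are backed by the page cache directly, with no second copy into a user buffer:

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Create a small file, then read it back through mmap() instead of
 * read(). Returns 0 on success, -1 on any failure. */
static int mmap_read_demo(const char *path) {
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) return -1;

    const char msg[] = "hello, mmap";
    if (write(fd, msg, sizeof msg) != (ssize_t)sizeof msg) { close(fd); return -1; }

    /* Map the file read-only: the kernel maps page-cache pages into our
     * address space rather than copying them into a user buffer. */
    char *p = mmap(NULL, sizeof msg, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { close(fd); return -1; }

    int ok = (strcmp(p, "hello, mmap") == 0);
    munmap(p, (size_t)sizeof msg);
    close(fd);
    unlink(path);
    return ok ? 0 : -1;
}
```

Note that the mapping length must fit in the process's address space, which is exactly why large-file mmap pushes toward 64-bit pointers.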

  • Trade-offs (Score:4, Interesting)

    by El ( 94934 ) on Wednesday January 21, 2004 @09:51PM (#8050980)
There are always potential trade-offs between run speed and memory space. For example, you could always use a single 64-bit pointer, and save all your addresses as 32-bit or even 16-bit offsets from that pointer (requiring pointer arithmetic to access any object). Then you would use less memory, but your code would run faster.
    • Damn, less memory, less bandwidth required, and my code runs faster, hot damn! yes, I realize you meant to say slower ;)
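The base-plus-offset scheme described above can be sketched in C; the arena size and all names here are made up for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One shared base pointer; all links between objects are stored as
 * 32-bit offsets into this arena instead of full 64-bit pointers. */
typedef uint32_t ref_t;            /* 4-byte "pointer" into the arena */

static char arena[1024];           /* toy arena; real code would mmap one */
static size_t arena_used = 0;

/* Bump allocator returning an offset rather than an address. */
static ref_t arena_alloc(size_t n) {
    ref_t r = (ref_t)arena_used;
    arena_used += n;
    return r;
}

/* Dereferencing costs one extra addition compared with a raw pointer. */
static void *deref(ref_t r) { return arena + r; }
```

As the reply points out, the saving is in memory and bandwidth; each access pays a small arithmetic cost, so whether it's a net win depends on how pointer-heavy the data is.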
  • Latency (Score:3, Interesting)

    by andrewl6097 ( 633663 ) * on Wednesday January 21, 2004 @09:56PM (#8051018)
    Given that memory access times are bound by latency far more than bandwidth, the effect of loading another four bytes into the register file is most likely insignificantly small. I'm certain that 8-byte register-to-register operations *are* insignificantly small, and it's likely that pointers, given that they are not large but often accessed would be kept in registers. It would depend highly on the particular architecture.
    • Re:Latency (Score:3, Insightful)

Parent has an interesting point, but it doesn't address the cache issue. 64-bit pointers will take twice as much space as 32-bit pointers. In a jump table situation, for instance, a 128-byte cache line (picking a reasonable number) could only hold 16 pointers instead of 32. Of course, as was also mentioned, when you have hardware that is designed to address more than 4 GB of memory, the amount of cache and main memory available is usually scaled up accordingly to deal with it. Bigger processor, bigger cache.
  • by Suppafly ( 179830 ) <slashdot@s[ ]afly.net ['upp' in gap]> on Wednesday January 21, 2004 @09:56PM (#8051028)
    Does anyone use 64 bit processors for embedded applications?
  • by swdunlop ( 103066 ) <swdunlop AT gmail DOT com> on Wednesday January 21, 2004 @09:58PM (#8051042) Homepage
    There's an interesting discussion of 64-bit immediate values at the following link: 64 bit immediates in Python [colorstudy.com]

If we are already using 64 bits for our pointers, a virtual machine has the potential of exploiting the pointer's larger footprint for other immediate values. I'm not as crazy about using the MSB of the pointer for indicating an immediate as Ian Bicking appears to be; I'd recommend using the LSB, since it's easier to bias any object to an even address than to halve the potential addressable space.

    Then again, if the potential address space is 2 ** 64, I suppose it's not such a sacrifice.
    • I'm not as crazy about using the MSB of the pointer for indicating an immediate as Ian Bicking appears to be, I'd recommend using the LSB since it's easier to bias any object to an even address than halve the potential addressable space.

AMD (you know, the guys who made x86-64) are NOT fans of these kinds of ideas. If you scribble in undefined places in the pointer, the Opteron/Athlon64 will throw an exception. Pointers in x86-64 are sign-extended, so it's not trivial to hide stuff in upper bits and then
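For the curious, a C sketch of the low-bit tagging variant suggested above. Since objects are at least 2-byte aligned, bit 0 of a valid pointer is always zero; using it as the immediate flag leaves the upper bits alone, which avoids the x86-64 canonical-address problem the reply mentions. All names are invented:

```c
#include <assert.h>
#include <stdint.h>

/* A tagged word is either a pointer (bit 0 clear, since objects are at
 * least 2-byte aligned) or a 63-bit immediate integer (bit 0 set). */

static inline uint64_t box_int(int64_t v)    { return ((uint64_t)v << 1) | 1u; }
static inline int      is_int(uint64_t w)    { return (int)(w & 1u); }
static inline int64_t  unbox_int(uint64_t w) { return (int64_t)w >> 1; }

static inline uint64_t box_ptr(void *p)      { return (uint64_t)(uintptr_t)p; }
static inline void    *unbox_ptr(uint64_t w) { return (void *)(uintptr_t)w; }
```

The cost is one lost bit of integer range plus a shift on every unbox, in exchange for small integers never touching the heap at all.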

  • by Anonymous Coward on Wednesday January 21, 2004 @10:02PM (#8051072)
    With modern processors it's not uncommon to require 64-bit or 128-bit memory alignment on data structures to get the best performance. There are even some instructions that *require* such data alignments in order for them to work at all (for example: MMX or SIMD).

Because of these existing data alignment issues, going from 32-bit to 64-bit pointers may have absolutely no impact on a program's memory usage and cache performance. It is highly likely you're already using 64-bit alignment when you enable the compiler's optimizations.

    Unless you're building massive linked lists of stuff in a scientific / simulation environment this is probably something not worth worrying about. The efficiency and volume of your actual data will still be the biggest waste of space - and it's not like you won't be able to attach more physical memory onto your new system than the old one.

If it does affect you... you probably already know what you're doing, or you've been making very bad assumptions about the size of your variable types.
    • I'm pretty sure that these days, at least on x86, data is aligned to 4-byte boundaries. Some architectures (I'm pretty sure that x86 is this way) require a 4-byte aligned address as the parameter to the 4-byte memory load instruction).
    • Of course a 64 bit pointer is 2x the size of a 32 bit pointer... 32 bit pointers only need 4 byte alignment, and thus pack nicely. So 64 bit pointers will take twice the cache space.

      And... the pointers have to be loaded. It will take more address bits in the instructions to build constants. More cache used.

      It is NOT highly likely that 64-bit alignment is done when optimizing. In fact, that's just wrong.

      Yes, cache performance suffers.
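A quick C illustration of the packing argument above, assuming a typical LP64 target (8-byte pointers with 8-byte alignment); the struct names are made up:

```c
#include <assert.h>
#include <stddef.h>

/* A list node linked by a 32-bit index packs into 8 bytes; the same node
 * linked by a real pointer needs 16 on LP64, because the 8-byte pointer
 * forces 8-byte struct alignment and 4 bytes of padding after the int. */
struct node32 { int key; int next_index; };  /* index-linked: 8 bytes  */
struct node64 { int key; void *next;      }; /* pointer-linked: 16 bytes */

static size_t node32_size(void) { return sizeof(struct node32); }
static size_t node64_size(void) { return sizeof(struct node64); }
```

So a cache line holds half as many pointer-linked nodes, which is the cache cost the parent is describing.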
    • Since no one who responded to you believes you, I thought I'd add in.

Yes, x86 does not require alignment for the vast majority of data accesses, with pretty much the sole exceptions being SIMD instructions. And yes, unaligned data will run psychotically slower than aligned data, which is why the compiler aligns it. Look into your MS VC++ optimization settings and see if it's using 4 byte or 8 byte alignment of structures by default. My goodness, it's 8 byte alignment, but why, you ask? Because doubles need 8 byte alignment.
What about if you use the STL collections? I would bet that there are more linked lists than you might imagine in things like that. This whole thing sort of reminds me of when we went from 16 to 32-bit cpus and code. Microsoft tried to convince people that the 16 bit code in Windows and even Windows9X was a feature because it was faster and took up less memory.
  • by HotNeedleOfInquiry ( 598897 ) on Wednesday January 21, 2004 @10:05PM (#8051095)
    Then you whine about using an extra 4 bytes per pointer to address it. Seems to me that the number of pointers relative to the amount of RAM is so small it's not an issue. Correct me if I'm wrong.
I think this would actually impact higher level languages like Java and .NET a lot more than "normal" C and C++ programmers - heavily OO languages like those tend to create lots and lots of small references and probably have a higher pointer count.

On the other hand, 64bit pointers make certain tradeoffs less desirable - for example, if you're passing around pointers to structs that are larger than 32 bits but smaller than 64, it's now more efficient to pass by value. That's a pretty borderline case, though...

The biggest problem of using larger pointers is not so much the extra memory used (memory is cheap). The real problem is that you consume cache space much faster, so you page at a much higher rate. This can slow down your program by a factor of up to 5x.
    • As I mentioned previously, this can be more than offset by the cost savings you get in using memory-mapped I/O. Using standard POSIX I/O, your data hits memory twice [slashdot.org].

      Oh, and 64-bit CPUs tend to have larger cache lines to cope.


      • Forget about I/O I'm talking about moving code from RAM to Level 1 cache.
        • ...and I'm saying that I/O can easily dominate cache. This is especially true when you consider that copying a few disk pages from one physical memory location to another could easily trash the contents of your L1 cache.

          • Our app was heavily optimized to minimize I/O transfers, so that was not much of a concern once the code was loaded (initial load time was substantially slower, though). Cache effects were significant.
            • Fair enough. I hack a certain high-performance database server for a living. I/O often dominates our applications, so we really care about memory-mapped I/O. As a result, we often find ourselves scrounging address space on "large" databases. Maybe our domain is more sensitive about it than yours is.

      • that's true, but you don't need to mmap() the whole file at once. you can easily emulate a paged version of read() by mmapping regions of the file you're interested and avoid running into any address-space limitations.
And that's also true, but it comes at a performance and maintainability cost (the code's more complex and therefore more bug-prone, you've got the overhead of maintaining the page file, etc, etc, etc). It's like saying you can emulate a 32bit address space with 16bit pointers, which you can, but it's hardly preferable to having a flat 32bit address space.
    • The cache consumption is no worse than doubled. AMD has doubled the on chip L2 cache in moving from Athlon to Athlon64/Opteron. 'Nuff said?
  • sparc64 (Score:4, Informative)

    by keesh ( 202812 ) * on Wednesday January 21, 2004 @10:59PM (#8051521) Homepage
    With linux on sparc64, typical applications are 30% slower when running in 64bit userland mode as opposed to 32bit userland mode. There are of course exceptions...
    • Re:sparc64 (Score:4, Insightful)

      by Frequanaut ( 135988 ) on Wednesday January 21, 2004 @11:44PM (#8051815)
      wha??? Any linkage to back this up?
    • Alpha (Score:3, Insightful)

      by Tune ( 17738 )

I concur with your findings. Back in the days I was experiencing a little discomfort with the speed of my Pentium 90 running linux, I decided to buy a Digital Alpha system 266 MHz. Both systems were configured with 64 MB, and both ran Red Hat 5.2.

Although the Alpha system is obviously superior in number crunching, I noticed it ran out of physical memory on a regular basis where my P90 would still be happy. Part of the matter is that alpha binaries tended to be much larger, as was the kernel. But I'm also
      • No, this was because Linux/Alpha at that point didn't have an ld.so. Everything was statically compiled, which is where the size increase came from.
  • IA64 programming (Score:5, Informative)

    by Twister002 ( 537605 ) on Wednesday January 21, 2004 @11:32PM (#8051724) Homepage
Check out Raymond Chen's web log [asp.net]. Lately he's been discussing IA-64 programming. I don't pretend to understand half of what he's talking about, but I thought some of the readers here might be interested in what he has to say.

  • My God. The kludge that would not die! I thought we did away with memory models [sybase.com] when we finally got rid of protected mode [ic.ac.uk]. But nooo. People still want to squeeze a few more bits out of their memory systems. Somebody call an exorcist!
  • 2 comments (Score:3, Interesting)

    by wayne606 ( 211893 ) on Thursday January 22, 2004 @01:31AM (#8052282)
    First of all there is no such thing as a typical program... If you are writing a lisp interpreter where everything is a pointer then you may see your memory usage almost double. If you have a numerical program that is dominated by huge arrays of floats you might not see any difference at all.

    Second, here is a trick I have seen - it seems a bit strange but works well if you encapsulate your data well. Keep in mind that objects are generally aligned to an 8-byte boundary (if they are malloc'ed), so the low 3 bits of a pointer are never used. If your objects are, say, 64 bytes each (possibly after a bit of padding) and allocated on 64-byte boundaries, then you are wasting 6 bits. Just store your pointers as 32-bit words, shifted right by 6 bits. When you want to dereference one, your get-the-pointer accessor function shifts it back and gives you a 64-bit pointer.

    Now you have an effective address space of 256GB and your data size has not grown at all. Maybe you have taken a hit in performance but until you benchmark you never know...
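    To make the trick concrete, here is a minimal C sketch. The 64-byte `node` type, the `compress`/`decompress` helper names, and the use of C11 `aligned_alloc` are all illustrative assumptions, not anything from the post itself:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical object type: exactly 64 bytes, allocated on
     * 64-byte boundaries, so the low 6 address bits are always zero. */
    typedef struct node {
        char payload[64];
    } node;

    /* A "compressed" pointer: the full address shifted right by 6,
     * stored in 32 bits. Covers a 2^38-byte (256 GB) address space. */
    typedef uint32_t cptr;

    static cptr compress(const node *p)
    {
        uintptr_t a = (uintptr_t)p;
        assert((a & 63) == 0);           /* must be 64-byte aligned */
        assert((a >> 6) <= UINT32_MAX);  /* must fit after shifting */
        return (cptr)(a >> 6);
    }

    static node *decompress(cptr c)
    {
        return (node *)((uintptr_t)c << 6);
    }

    int main(void)
    {
        /* aligned_alloc: size must be a multiple of the alignment. */
        node *n = aligned_alloc(64, sizeof(node));
        if (!n)
            return 1;

        cptr c = compress(n);
        assert(decompress(c) == n);      /* round-trips exactly */

        free(n);
        return 0;
    }
    ```

    The accessor pair is the whole cost: every dereference pays a shift, which is cheap, but as the poster says, only a benchmark will tell whether the smaller data footprint wins overall.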
  • And segments...? (Score:2, Informative)

    by StarBar ( 549337 )
    On CPUs with segments the impact must be much smaller, if present at all. Say, for instance, that you reside in a 32-bit segment X and a 16-bit subsegment Y; then you would use 16-bit storage for pointers in RAM, even though the CPU constructs the full 64-bit pointer internally by concatenating the parts from the segment registers with the 16 bits from RAM.

    I don't assume any CPU in particular just the principle of segments.
  • Answer: yes (Score:3, Interesting)

    by p3d0 ( 42270 ) on Thursday January 22, 2004 @10:11AM (#8054231)
    A while back, I was looking into more efficient heap storage of Java objects, and found that the heaps of a variety of Java programs consist of about half pointers and half ints. The next most common type was booleans, at under 1%. Everything else was vanishingly small.

    Thus, you can expect Java heaps to expand by about 50% when moving from 32-bit to 64-bit pointers. What effect this has on your program's performance depends on the relation between the program's resident sets and the machine's cache. For instance, if your program has a resident set of 200KB on a machine with a 256KB cache, then the extra 50% will blow the cache and kill your performance. If the resident set were 150KB, the performance impact would probably be minimal.

    Disclaimer: I was doing this as a pet project in my spare time, so take these numbers with a grain of salt.

  • by polyp2000 ( 444682 ) on Thursday January 22, 2004 @12:06PM (#8055406) Homepage Journal
    >This is especially an issue on embedded platforms where RAM is at a premium... What kinds of embedded platforms are likely to need greater than 4GB of RAM anyhow? I sure as hell can't imagine a use for a 64-bit washing machine with upwards of 4 gigs... That's a hell of a lot of washing programmes.
  • by isj ( 453011 ) on Thursday January 22, 2004 @04:51PM (#8059410) Homepage
    A few years back I did a test with a server which stores state information (I will not bore you with the details). I ran some performance tests on both the 32-bit version and the 64-bit version. Same source code. Same test data. Same configuration. On HP-UX 11.0 PA-RISC with the aCC compiler.
    The 64-bit version used about 15% more memory than the 32-bit version, but it was also 20% faster. That still puzzles me, because the server does not perform any 64-bit operations.
  • by gillbates ( 106458 ) on Thursday January 22, 2004 @07:08PM (#8060646) Homepage Journal

    Seriously, it is faster. I've been writing in assembly for years, and unless I need a 32 bit pointer, I generally don't use them.

    If you're that concerned about performance that you are analysing pointer size, you might as well code in assembly. Yes, 64 bit pointers have a bigger footprint, but we experienced the same problem when we went to unicode strings, 32 bit code, etc...

    My advice is this: let the compiler deal with it. Unless you are willing to crank out a lot of hand-coded assembly or are interfacing with hardware, the 32/64 bit pointer question is pretty much moot. As it is, you can't control:

    • Where your linker places segments in the loaded image. Trust me, this is a big source of cache misses on the older processors where the libraries were in one area of memory and the running code in another.
    • The optimal ordering of instructions to keep the U and V pipelines of the processor filled. Some of the modern compilers can do this pretty well, but you can never be too sure. The number of clock cycles an instruction takes can vary by a factor of 3, so unless you're willing to learn some pretty hardcore assembly, you're stuck with whatever the compiler gives you.
    • The instruction level optimization of the compiler. Intel's new C++ compiler will turn the familiar array initialization code:

      for (int x = 0; x < 256; x++) buffer[x] = 0;

      Into something like this:

      mov ecx,64 ; 256 bytes = 64 dwords
      xor eax,eax
      mov edi,buffer ; rep stosd writes through edi
      cld
      rep stosd


      Instead of the literal translations of the old compilers:

      mov si,buffer
      mov bx,0 ; this is the x variable
      forlabel@10001:
      mov byte ptr [bx + si],0
      mov ax,1
      add ax,bx
      xchg bx,ax
      cmp bx,256
      jl forlabel@10001


      The former takes 68 instruction cycles; the latter takes (6 * 256 + 2) = 1538!

    The aforementioned issues have a much bigger impact on performance than pointer size. Given that the memory bus is at least 64 bits wide on anything newer than a pentium, you won't incur a clock cycle penalty for using 64 bit pointers.

    The only thing that I would suggest is to watch where you place pointers in structures. For example, when building a linked list, you would want to do something like this:
    class link {
        link * ptrforward;
        link * ptrbackward;
        link * ptrdata;
    };
    rather than:
    class link {
        link * ptrdata;
        link * ptrbackward;
        link * ptrforward;
    };
    Because the processor pulls 64 bits per address accessed, the former structure would have the forward pointer in cache regardless of the pointer size. With the second structure, traversing a list in the forward direction would result in a cache miss on every node visited, regardless of pointer size (This applies only to the x86...).

    My experience has been that pointer size is only relevant on truly tiny systems - for example, 16-bit code which has to fit into a few kilobytes. Usually, as programs scale to work with larger datasets, the percentage of memory used for pointers decreases rapidly. You'll find that as data sizes increase, the practical uses for linked structures shrink; locating an element with a binary search on a sorted array scales much better than a linear search traversing a linked list.
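    The size cost being discussed is easy to see with `sizeof`. This little check (a sketch, reusing the field names from the list node above, written as a plain C struct) shows the all-pointer node doubling between a 32-bit and a 64-bit build:

    ```c
    #include <stdio.h>

    /* Doubly-linked list node: nothing but pointers, so its size is
     * exactly three pointer widths on any platform (no padding needed
     * between members of the same type). */
    struct link {
        struct link *ptrforward;
        struct link *ptrbackward;
        struct link *ptrdata;
    };

    int main(void)
    {
        /* ILP32: 4-byte pointers, 12-byte node.
         * LP64:  8-byte pointers, 24-byte node. */
        printf("sizeof(void *)      = %zu\n", sizeof(void *));
        printf("sizeof(struct link) = %zu\n", sizeof(struct link));
        return 0;
    }
    ```

    Compiling the same file with `-m32` and `-m64` (where the toolchain supports both) makes the footprint difference for pointer-heavy structures immediately visible.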

  • This is a dumb question. Do you really think that on a system with more than 4GB of memory that memory would be at such a premium that an additional four bytes per pointer would even be noticeable? Surely you jest!
