Optimizing KDE 3.1.x

David Lechnyr writes "This article goes into detail on optimizing KDE for speed. Typically, most distributions include pre-compiled binaries of KDE which are optimized for an Intel i386 computer. Chances are that you're running something faster than this; if so, this should help you tweak the compile process to speed things up a bit."
  • by dotgod ( 567913 ) on Wednesday April 23, 2003 @06:58PM (#5794830)
    Gentoo [gentoo.org] does all this by default. To compile and install optimized binaries for kde, you just type "emerge kde"
    • by Anonymous Coward
      But what do you do for the next eight hours, huh?
    • by Anonymous Coward
      Too bad Gentoo's optimizations have been known to break software by being too aggressive.
      • So, simply adjust the optimization settings. RTFM.
      • It's up to you to choose which level of optimization to use (and how aggressive you want it to be), but if you use the flags in the Gentoo documentation, you can be reasonably sure your packages will compile correctly and run fast!
      • As mentioned elsewhere, the optimisation settings are user defined.

        But if these settings are breaking the software then surely it's the fault of the gcc compiler or the software in question?

        Gentoo's ebuilds are mostly just scripts; the actual software is usually downloaded directly from the website of its original developer.
    • I don't get it, why are the instructions telling you to get the raw tarballs and 'make install'? Why not get the source packages from whatever distribution you use and rebuild them with different compiler flags?

      It sure would be useful to have a single command equivalent to Gentoo's 'emerge' which means 'download the source packages, build them with my compiler switches and install'. Does any RPM- or dpkg-based distribution have this?
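      (For what it's worth, Debian can approximate this with its source packages; a rough sketch, with kdebase as an arbitrary example package, and note that whether your custom CFLAGS are honored depends on each package's debian/rules:)

      apt-get build-dep kdebase              # install the build dependencies
      apt-get source kdebase                 # fetch and unpack the source package
      cd kdebase-*/
      dpkg-buildpackage -rfakeroot -uc -us   # build unsigned .debs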
  • prelink (Score:5, Interesting)

    by dotgod ( 567913 ) on Wednesday April 23, 2003 @07:01PM (#5794855)
    Using prelink [freshmeat.net] will also provide additional optimization.
    • Re:prelink (Score:4, Informative)

      by IIEFreeMan ( 450812 ) on Wednesday April 23, 2003 @07:04PM (#5794879)
      Prelink is not useful anymore provided you use a recent glibc (>= 2.3) ...

      This is done automatically by the libc and the dynamic linker.
      • Re:prelink (Score:4, Informative)

        by Spy Hunter ( 317220 ) on Wednesday April 23, 2003 @11:03PM (#5796262) Journal
        This is simply not true. Here's the truth straight from the source: glibc NEWS file [gnu.org]. It says you need "additional tools" to take advantage of prelinking, and the "prelink" program is that additional tool. I have also heard other people say that prelinking is no longer necessary, but they were wrong. Prelinking my KDE binaries on Debian unstable resulted in a noticeable startup performance increase. I hope this misinformation doesn't cause people to discount prelinking as a possible performance booster.

        FYI, prelinking KDE is not easy. On Debian the QT package has OpenGL support compiled in. The OpenGL library is not prelinkable because it is not PIC (Position Independent Code). Since all KDE applications are linked to QT, and thus to OpenGL indirectly, this also means that all of KDE isn't prelinkable. I don't know of any KDE app that actually uses QT's OpenGL support, so I don't know why it is compiled in. To prelink KDE I had to compile my own version of QT without OpenGL support. This works to allow prelinking, but using a version of QT compiled with different options makes QT's style plugins not work and has other disadvantages. There are two real solutions:

        1. Compile OpenGL as PIC - I don't know why it isn't already.
        2. Compile QT without OpenGL support, and provide separate packages for people who need OpenGL support.
        I've sent emails to the debian-kde list about prelinking the Debian KDE packages, but the maintainers didn't seem interested. Hopefully they will eventually see the light and start working toward prelinking KDE.
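        (For reference, once the binaries are prelinkable the invocation itself is simple; a sketch using prelink's documented switches, run as root:)

        prelink -avmR
            # -a: prelink everything listed in /etc/prelink.conf
            # -v: verbose
            # -m: conserve virtual address space (libraries that never appear
            #     in the same program may share addresses)
            # -R: randomize library base addresses
        prelink -au    # undo, restoring the original binaries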
        • Re:prelink (Score:5, Informative)

          by JohnFluxx ( 413620 ) on Thursday April 24, 2003 @01:07AM (#5796715)
          > I've sent emails to the debian-kde list about prelinking the Debian KDE packages, but the maintainers didn't seem interested. Hopefully they will eventually see the light and start working toward prelinking KDE.

          I looked at the discussion on prelinking in Debian, and it's not such a straightforward issue.

          When you have a binary and run it, it loads all the libraries that the binary uses. Loading a library basically means mapping it into memory and then telling the binary the memory address of everything in the library: functions, data structures, and so on.
          Prelinking means modifying the binary in advance to tell it about the particular version of each library it links against (say version 1.0.3 or whatever). Now when you run the binary with that particular version of the library, the library is loaded at a specific memory address, and the binary already knows the address of every function and data structure in it.
          This speeds up loading time and saves memory.
          If the library version changes, it falls back on the old method.

          Now, the trouble is, when you update a library, you must update all the binaries. This means (as far as I can see) that you either rerun prelink on all the affected binaries, or you update the packages those binaries are in.
          The second option would give libraries a huge number of dependencies and would make minor library upgrades horrendous for dial-up users.
          The first option has subtler problems: it means that updating a library unpredictably modifies binaries, which plays absolute havoc with things like tripwire and any kind of security.

          This is merely my understanding from 5 mins research, so take it as you will.
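          (If you want to see what prelinking actually saves, glibc's dynamic linker can report its own relocation work; a sketch, with konsole standing in for any dynamically linked KDE binary:)

          LD_DEBUG=statistics LD_DEBUG_OUTPUT=/tmp/rtld konsole
          # writes /tmp/rtld.<pid> containing relocation counts and the
          # "total startup time in dynamic loader" figure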
          • I'm not suggesting that they prelink the libraries inside the packages; I would just be happy if they distributed binaries which could be prelinked by users who choose to install the prelink package. The Debian prelink package is easy to use, and optional. Just apt-get install prelink, and then run prelink -a to prelink all your binaries. I'm not sure, but it might even register a cron job to automatically re-prelink things regularly. If you're running tripwire or something like that, you don't have to install it.
          • Re:prelink (Score:4, Insightful)

            by IamTheRealMike ( 537420 ) on Thursday April 24, 2003 @05:35PM (#5803677)
            To be more accurate, prelinking allocates each library a unique location in virtual address space and then stores the precalculated GOT (or is it the PLT?) in a new section in the binary. That means you only need to do a few links instead of a lot (due to ELF fixup semantics, you sometimes get conflicts which must still be resolved at runtime).
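            (To get a feel for how much work that saves, you can count the dynamic relocations the runtime linker would otherwise have to process; a sketch, with the QT library path as an example that may differ on your system:)

            readelf -r /usr/lib/libqt-mt.so.3 | wc -l    # rough count of relocation entries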
      • Re:prelink (Score:2, Insightful)

        by twener ( 603089 )
        You confuse objprelink [sourceforge.net] with the real prelink [redhat.com].
  • portage ? (Score:1, Offtopic)

    by Bobas ( 581631 )

    Hey, all you need here is a decent ports system such as FreeBSD's or Gentoo's Portage. Not much to see here, I guess.

    On the other hand compiling unpatched source with experimental optimization flags for your system is just asking for trouble.

    • Re:portage ? (Score:3, Interesting)

      by brad-x ( 566807 )

      Absolutely. Using either of these allows you to compile any piece of software with the best possible optimizations for your system.

    • Re:portage ? (Score:5, Interesting)

      by dh003i ( 203189 ) <`dh003i' `at' `gmail.com'> on Wednesday April 23, 2003 @08:10PM (#5795391) Homepage Journal
      Experimental? Come on. Maybe -O3 plus a series of additional optimizations beyond that is experimental, but the sane optimizations most people use are not. I, myself, default to:

      -march=athlon-tbird -Os -pipe -fomit-frame-pointer

      Btw, compiling everything from scratch is hardly "unstable". That's what FreeBSD does. Furthermore, memory optimizations oftentimes increase system stability by reducing the likelihood of situations where there isn't enough RAM. Some of the USE settings also increase stability by eliminating compiled-in support for crap that you don't use. If you don't use something, and support for it is compiled in, it's just useless crap with the potential to reduce stability.
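      (In Gentoo terms that boils down to a couple of lines in /etc/make.conf; a sketch, with the -march value and the USE flags as placeholders for your own choices:)

      CHOST="i686-pc-linux-gnu"
      CFLAGS="-march=athlon-tbird -Os -pipe -fomit-frame-pointer"
      CXXFLAGS="${CFLAGS}"
      USE="-gnome"    # strip compiled-in support you know you'll never use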
      • I don't consider -O3 experimental. The only thing you get with -O3 over -O2 is -fomit-frame-pointer on platforms where -fomit-frame-pointer has no effect on debug information. So, on i386, -O3 -fomit-frame-pointer is the same as -O2 -fomit-frame-pointer.
    • Re:portage ? (Score:5, Informative)

      by larry bagina ( 561269 ) on Wednesday April 23, 2003 @11:23PM (#5796347) Journal
      It's not about "experimental optimizations", it's about being able to select instructions optimized for your CPU.

      The difference between a 386, 486, and Pentium I-IV isn't just clock speed and MMX; a handful of new instructions have been added along the way. If you don't specify the architecture, you'll generate i386-compatible code.

      so if (i == 0) i = 1234; will generate code like this:
      cmp eax, 0
      jne L1
      mov eax, 1234
      L1:

      A PII, however, can use a conditional move instead (cmov can't take an immediate operand, so the constant is staged in a register first):
      mov edx, 1234
      cmp eax, 0
      cmove eax, edx

      That might not look all that much better, but a mispredicted branch is a huge bubble in the pipeline, and horrible for performance.
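      (You can watch this happen yourself; a sketch with gcc 3.x, using a throwaway test.c, though whether you actually get a cmov depends on your gcc version:)

      $ cat test.c
      int f(int i) { if (i == 0) i = 1234; return i; }
      $ gcc -O2 -march=i386 -S -o i386.s test.c    # cmp/jne/mov branch sequence
      $ gcc -O2 -march=i686 -S -o i686.s test.c    # look for cmov in i686.s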

  • by ewomack ( 225766 ) on Wednesday April 23, 2003 @07:43PM (#5795218) Homepage
    See http://www.sourcemage.org/ [sourcemage.org] All source, downloaded from the author's site and compiled with the settings and optimizations YOU choose.
  • Gentoo... (Score:4, Insightful)

    by dh003i ( 203189 ) <`dh003i' `at' `gmail.com'> on Wednesday April 23, 2003 @08:02PM (#5795347) Homepage Journal
    "Does" this all for you. By "does", I mean that the install guide forcefully tells you to alter your /etc/make.conf file to include support for the features you want and the optimizations you want.

    Btw, you don't just sit around for 8 hours waiting for something to compile. If you're in CLI-only mode, you do the following:

    emerge screen
    screen emerge kde
    Ctrl-A Ctrl-C (open a fresh shell window inside screen while the compile runs)


    Then you do whatever else you want to do. I recommend getting IRSSI and Lynx for internet-amusement.

    If you already have Xfree86 setup, then you do the following:

    emerge ratpoison
    Ctrl-A Ctrl-C


    Then run whatever graphical X-programs you want in your new ratpoison windows.

    This is the beauty of a *modern day* multi-tasking OS like GNU/Linux. This isn't the same crap as Micro$oft: you can compile something AND do other things at the same time, since the memory management is great, as is the multi-tasking (depending on your kernel and its compile options). Try compiling something using MS's compilers and doing something else at the same time. I compiled WindowMaker, for example, while doing other tasks in ratpoison.

    As for compile-time optimizations, I recommend the following:

    CFLAGS = -march=cpu_type -Os -pipe -fomit-frame-pointer

    That will optimize for small binaries and minimal RAM usage. I recommend the -Os optimization for the vast majority of applications, most of which are not CPU-intensive. That includes WMs, DEs, word processors, spreadsheets, web browsers, e-mail programs, GIMP, etc. I recommend -O2 for things which are CPU-intensive, like video/sound players, video/sound encoders, DNA/AA sequence alignment, and Bayesian phylogenetics.

    Make sure to

    man gcc

    so you know what you're doing. Hint: once you hit a certain point with optimization, you can't have it both ways. Higher levels of optimization involve a memory/speed tradeoff, and you can go one way or the other. As I suggested before, use memory optimizations for non-CPU-intensive programs (the ones you'll probably be using all the time, and which will therefore be clogging up your memory) and speed optimizations for CPU-intensive programs, which you probably won't use as much.
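    (An easy way to convince yourself the tradeoff is real; a sketch comparing object sizes with binutils' size tool on any C file you have handy:)

    gcc -O2 -c foo.c -o foo-O2.o && size foo-O2.o
    gcc -Os -c foo.c -o foo-Os.o && size foo-Os.o
    # the -Os object's text segment should come out smaller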
    • This is the beauty of a *modern day* multi-tasking OS like GNU/Linux.

      For the most part I'd agree with you. However, when there is a lot of disk I/O, it will still bind up some. Not so bad with SCSI disks, but with IDE drives, even with DMA, you will run into problems.

      When Gentoo is doing the copying part it can still be pretty chunky on the desktop. Granted, that is usually for a fairly short period of time. During the build it's pretty reasonable, as it should be.
  • Why aren't more binary releases compiled for i686 or at least i586? For command-line applications, I can understand compiling for i386. Many of us have a 386/486 PC running as a router or for some other use. But you would think that with the horsepower required by KDE, binary releases would be i586 or i686 by default.
    • Some distros do. But one problem is the code that GCC generates. Gentoo warns you not to use -march=pentium4, and to use i686 instead, because pentium4 can produce bad code. I guess older versions of GCC may have had the same problem with i686 at one point. Who knows...
    • Some distros do. Mandrake, for instance, compiles for Pentium and above (i586) because it considers itself a "desktop" form of Linux, and no one in their right mind is using less than a Pentium for a desktop system. Red Hat, Debian, and the like need to be more platform independent because they are used on all kinds of hardware for all kinds of applications. Red Hat has platform-specific RPMs for important things like glibc, the kernel, etc., but a major selling point of Linux is that it will run out of the box on almost anything.
      • Durn, my RedHat9 CDs won't install on my Mac! So much for run on anything out of the box. ;-)

        I really don't agree with this one-Linux-fits-all approach. Separate RedHats (or whatever) for desktop and server use would be great. Which they do now, I suppose; perhaps in RedHat X we'll see better optimizations for desktop usage.

        Of course, as I advocate this splitting of distros, I also hate the way we have 20 distros to do the same thing, and they all do it half-assed... ;-) Lindows/Xandros/Lycoris/Ark/e
  • by SN74S181 ( 581549 ) on Wednesday April 23, 2003 @09:21PM (#5795741)
    This optimizes the hell out of KDE, and it reduces the memory footprint as well. It's such a simple script that I include it right in my ~/.cshrc file:

    alias kde twm

    You can substitute fvwm2 or some other window manager if you're not a tab enthusiast.
  • How much? (Score:3, Insightful)

    by ariels ( 6608 ) <ariels&alum,cs,huji,ac,il> on Thursday April 24, 2003 @02:56AM (#5797021)
    How much performance increase are we talking here? Faster startup times? Better response times? Perceived better response?

    There's no reason to work hard without knowing what the benefits are. For that matter, say I do all this and it doesn't seem to work any faster. How do I know if I did something wrong, or if there is nothing to measure?
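    (One crude but repeatable measurement; a sketch assuming kbuildsycoca, KDE's config-cache rebuilder, since it exits when done and so can be captured by time:)

    for i in 1 2 3; do time kbuildsycoca; done
    # run before and after rebuilding KDE, and compare the wall-clock times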
    • I'll tell you: coming from three years of different precompiled distros (Red Hat, Mandrake, Debian, Slackware), each with the latest recompiled kernel, six months ago I tried installing Gentoo. When I first booted my new distro I found a "visible" general improvement in every task I did, and in the startup procedure too. This is why, from now on, I'm not going to change my distro anymore; I'm just too happy with Gentoo...
  • Konstruct (Score:3, Informative)

    by twener ( 603089 ) on Thursday April 24, 2003 @04:55AM (#5797356)
    Why not use Konstruct [kde.org] instead of typing all this manually?
  • KDE performance tips (Score:3, Informative)

    by twener ( 603089 ) on Thursday April 24, 2003 @05:15AM (#5797403)
    I don't see where this article talks about optimizing anything other than self-compiling. Better to read the KDE performance tips [sh.cvut.cz].
  • Hmmmm (Score:1, Funny)

    by Lukey Boy ( 16717 )
    So far this [enlightenment.org] is the best way I've found to speed up KDE.
    • Re:Hmmmm (Score:1, Informative)

      by Anonymous Coward
      Enlightenment is hardly lightweight now, is it? I always recommend the following to those who want a faster desktop experience:
      lwm [freshmeat.net]
      PekWM [freshmeat.net]
      WindowLab [freshmeat.net]
  • The claim about distributions not optimizing for newer CPUs is not true. They usually use something like -march=i486 -mcpu=i686, which means the code uses only instructions that an i486 knows, scheduled for i686 CPUs. I personally doubt recompiling KDE with better compile flags than those in distro-shipped packages makes a noticeable difference. Things like prelink, the O(1) scheduler, better mapping of binaries into memory, or code optimizations within KDE are what may make a difference.

    BTW, even this ma
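    (The -march/-mcpu distinction, concretely; a sketch with gcc 3.x-era flags:)

    gcc -O2 -march=i486 -mcpu=i686 -c foo.c
    # -march=i486: emit nothing an i486 can't execute
    # -mcpu=i686:  schedule those instructions for a P6-class pipeline
    # (later gcc releases renamed -mcpu to -mtune)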

    • Try Gentoo. Compiling everything from source made a massive, noticeable improvement with KDE on my dual Athlon system! Much faster than my previous SuSE install!
      • But the thing that made the difference most probably wasn't the KDE recompile, but some of the other stuff I listed (O(1) scheduler, prelink, whatever). Or maybe it was so much faster just because you believed it had to be so much faster after you recompiled it.
      • I'd love to, but I'm stuck on a dial-up connection. Is there any way to get a snapshot of all Gentoo on CD-ROM or DVD? I've been checking out the Linux stores without any luck.
    • Actually, I was running Red Hat 7.2 and their i386 binaries. KDE was unusable. Even Blackbox and Fluxbox had very poor GUI response. After switching to Gentoo I can use KDE 3.1.1 with all the bells and whistles on a 600MHz laptop, no problem, transparent menus and everything. The only thing I can attribute the huge increase in graphical performance to is compiling the whole system from source. Not to mention, doesn't using things like MMX, 3DNow, SSE, etc. speed up graphical applications as long as the a
  • Intel's compiler brags that it is totally compatible with gcc and gives a 30% speed increase to the final result on an Intel processor.

    On my code I have seen that to be very true - but I have never bothered trying it on "other people's" code.

    How much of it can you compile with that, and what sorts of speed increases (if any) do you see?

    I would imagine there isn't much speed increase if it won't even successfully compile, of course; but if it works, is it noticeably better?

    And I suppose this is only
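    (If you want to try it on autoconf-based source, the usual approach; a sketch assuming Intel's icc/icpc compiler drivers are installed and on your PATH:)

    CC=icc CXX=icpc ./configure && make
    # packages relying on gcc-specific extensions may still fail to build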
    • Some ICC support fixes have been committed to CVS HEAD recently.
    • "Intel's compiler brags that it is totally compatible with gcc and gives a 30% speed increase to the final result on an Intel processor."

      That's only true for pre-GCC 3.1 releases. GCC 3.2 can produce code that rivals that of Intel C++. GCC 3.1 and 3.2 contain lots and lots of x86 optimization improvements.
      • Not on my code. I'm using 3.2.2, and there is still a lot of C++ code where Intel C++ will easily be over 20% faster. A lot of it has to do with high-level optimizations. Throw complex/abstracted code at GCC and you lose a significant amount of performance compared to low-level C-style code. Meanwhile, ICC will compile away most of that abstraction, and you'll get code that's as fast as the equivalent C code. Now, both compilers optimize very well, and a performance diff of 20% in certain cases really is
  • Why not use the Intel compiler while you're at it? It's much better than GCC for Intel processors.
