KDE GUI Graphics Programming Linux

KDE and Canonical Developers Disagree Over Display Server

sfcrazy (1542989) writes "Robert Ancell, a Canonical software engineer, wrote a blog post titled 'Why the display server doesn't matter', arguing that 'Display servers are the component in the display stack that seems to hog a lot of the limelight. I think this is a bit of a mistake, as it's actually probably the least important component, at least to a user.' KDE developers, who have long experience with Qt (something Canonical is moving towards for its mobile ambitions), have refuted his claims and said that the display server does matter."
  • logic (Score:5, Insightful)

    by Anonymous Coward on Monday March 24, 2014 @01:51PM (#46565545)

If they don't matter, why Mir?

    • Re:logic (Score:4, Insightful)

      by batkiwi ( 137781 ) on Monday March 24, 2014 @06:17PM (#46568903)

      They're saying that it doesn't matter to an app developer if you're using a middleware framework, as most developers do, because the eventual output on the display will be the same.

The reasons for introducing Mir are performance, the ability to run on low-footprint devices, and cross-device compatibility.

So their point is that X11 vs Wayland vs Mir vs framebuffer vs blakjsrelhasifdj doesn't matter to a developer using the full Qt stack. They write their app to Qt, and the Qt developers write the backend that talks to whatever the end user is running. It's more work for Qt/other frameworks, but "should" be "no" more work for an app developer.
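      A minimal sketch of that abstraction (my example, assuming Qt 5 and its QPA platform plugins; plugin names vary by build):

          // Minimal Qt app: the same source runs on X11, Wayland, or Mir.
          // The backend is picked at runtime by Qt's platform abstraction
          // (QPA), e.g. QT_QPA_PLATFORM=xcb or QT_QPA_PLATFORM=wayland.
          #include <QApplication>
          #include <QLabel>

          int main(int argc, char *argv[])
          {
              QApplication app(argc, argv);  // QPA backend selected here
              QLabel label("Same app, any display server");
              label.show();                  // windowing done by the plugin
              return app.exec();
          }

      Run it with QT_QPA_PLATFORM=wayland (or xcb) and the application source does not change; only the plugin underneath does.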

The reasons for introducing Mir are performance, the ability to run on low-footprint devices, and cross-device compatibility.

        Jolla would like to know why the need for Mir when they have a Wayland compositor and window manager running on low-end/mid-range mobile devices with excellent (compared to other similar-spec devices) performance.

        • by batkiwi ( 137781 )

          Jolla would like to know why the need for Mir when they have a Wayland compositor and window manager running on low-end/mid-range mobile devices with excellent (compared to other similar-spec devices) performance

I have no idea, and I don't pretend to. I was pointing out that the +5 rated comment I replied to was not insightful and was missing the point of the original article, which was addressed to app developers, not framework/OS developers.

  • Personal blog (Score:3, Informative)

    by Severus Snape ( 2376318 ) on Monday March 24, 2014 @01:52PM (#46565571)
    NOTHING to do with Canonical at all. Yay for the let's all hate Canonical bandwagon.
    • Re:Personal blog (Score:5, Insightful)

      by sfcrazy ( 1542989 ) on Monday March 24, 2014 @02:02PM (#46565677)
He is a Canonical developer and it's not a post about his family cat.
    • Re: (Score:2, Troll)

      by Tailhook ( 98486 )

      NOTHING to do with Canonical at all.

      Yet there is Mark Shuttleworth, replying the same day [google.com] to this supposedly "personal" blog with:

      It was amazing to me that competitors would take potshots at the fantastic free software work of the Mir team

      But hey... that's Google+, not ubuntu.com or whatever, so that's got nothing to do with Canonical either. Right?

  • no, really? (Score:2, Funny)

    by X0563511 ( 793323 )

    Interesting how KDE and those responsible for Unity have differing perspectives... who would have thought?

  • Bollocks (Score:4, Insightful)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday March 24, 2014 @02:10PM (#46565771) Homepage Journal

    The display server is hugely important. The fact that the user doesn't know they're using it is irrelevant, because they're using it at all times.

  • Shh... (Score:4, Insightful)

    by GameMaster ( 148118 ) on Monday March 24, 2014 @02:14PM (#46565807)

    You heard the man, it's not important. Now stop talking about it! That way Canonical can more easily save face when they cancel their failed cluster-fuck of a display server and switch back to Wayland...

    • Re: (Score:3, Insightful)

      Comment removed based on user account deletion
      • Re:Shh... (Score:5, Insightful)

        by JDG1980 ( 2438906 ) on Monday March 24, 2014 @03:12PM (#46566521)

        X.org, not Wayland. Wayland is still under development. Wayland devs must be elated that Mir has made the debate "Wayland vs Mir" rather than "Tried, trusted, works, and feature complete X.org vs Wayland."

X.org is not "feature complete" in any meaningful sense. It is incapable of doing the kind of GPU-accelerated, alpha-blended compositing that is expected of a modern user interface. Sure, you can get around most of this by ignoring all the X11 primitives and using X.org to blit bitmaps for everything, with all the real work done by other toolkits. But in that case, it's those other toolkits doing the heavy lifting, and X.org is just a vestigial wart taking up system resources unnecessarily.

        • Re:Shh... (Score:5, Informative)

          by Eravnrekaree ( 467752 ) on Monday March 24, 2014 @04:14PM (#46567249)

This is all wrong. X has something called GLX, which allows you to do hardware-accelerated OpenGL graphics: GLX lets OpenGL commands be sent over the X protocol connection. When client and server are on the same system, X protocol traffic travels over Unix domain sockets, which is very fast, and MIT-SHM additionally provides shared memory for transferring image data, so there is no network-transparency latency when X is used locally. Only when applications are used over a network do they need to fall back to sending data over TCP/IP. Given this, the benefits of having network transparency are many, and there is no downside, because an application run locally can use Unix domain sockets, MIT-SHM and DRI.

X has also had DRI for years, which allows an X application direct access to the video hardware.

As for the traditional X graphics primitives, these have no negative impact on the performance of applications that do not use them and use a GLX or DRI channel instead. It's not as if hardware-accelerated DRI commands have to pass through XDrawCircle, so the existence of XDrawCircle does not affect a DRI operation in any significant way. The amount of memory that code consumes is insignificant, especially compared to the amount used by Firefox. Maybe back in 1984 a few kilobytes was a lot of RAM, which is when many of these misconceptions started, but those constraints applied to any GUI that would run on 1980s hardware. People are just mindlessly repeating a myth started in the 1980s that has little relevance today. Today, X uses far less memory than Windows 8 does, and the traditional graphics commands consume an insignificant amount that is not worth worrying about, and which is needed to support the multitude of X applications that still use them.
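          A minimal sketch of that local direct-rendering path (plain Xlib/GLX, error handling omitted; assumes the default visual is GLX-capable, as on most setups):

              // Connects to the local X server (a Unix domain socket when
              // client and server share a machine) and asks GLX for a
              // direct (DRI) OpenGL context.
              #include <X11/Xlib.h>
              #include <GL/glx.h>
              #include <cstdio>

              int main()
              {
                  Display *dpy = XOpenDisplay(nullptr);  // local: Unix socket
                  int attrs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
                  XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attrs);
                  Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                                   0, 0, 640, 480, 0, 0, 0);
                  GLXContext ctx = glXCreateContext(dpy, vi, nullptr, True); // direct
                  glXMakeCurrent(dpy, win, ctx);
                  XMapWindow(dpy, win);
                  // glXIsDirect reports whether DRI gave direct hardware access.
                  std::printf("direct rendering: %s\n",
                              glXIsDirect(dpy, ctx) ? "yes" : "no");
                  // ... draw with OpenGL, then glXSwapBuffers(dpy, win) ...
                  glXDestroyContext(dpy, ctx);
                  XCloseDisplay(dpy);
                  return 0;
              }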

          • Re:Shh... (Score:5, Insightful)

            by BitZtream ( 692029 ) on Monday March 24, 2014 @04:31PM (#46567441)

            Today, X uses far less memory than Windows 8

            Nice, you just compared a single process on one OS to the entire OS and its subprocesses of another. Totally fair.

How about you compare X to the Windows Desktop Window Manager instead? That's a lot closer, though still not exact, since Windows has this mentality that GUI in the kernel is a good idea.

            My point however is that your comparison is not really a comparison.

I also forgot to mention that X has had the X Composite extension and the X Render extension, which have allowed alpha-blending operations for quite some time. Your information is a bit out of date.

You did mention hardware-accelerated compositing, and I wanted to clarify that the X protocols can indeed support this; it is mainly internal improvements in the X server that may be needed. You don't really need an entirely new windowing system for this.
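          A minimal sketch of server-side alpha blending through the Render extension mentioned above (Xlib, error handling omitted):

              // Alpha-blends a translucent fill onto a window using the
              // X Render extension's PictOpOver operator.
              #include <X11/Xlib.h>
              #include <X11/extensions/Xrender.h>

              void fill_translucent(Display *dpy, Window win, int w, int h)
              {
                  XWindowAttributes wa;
                  XGetWindowAttributes(dpy, win, &wa);
                  XRenderPictFormat *fmt = XRenderFindVisualFormat(dpy, wa.visual);
                  Picture dst = XRenderCreatePicture(dpy, win, fmt, 0, nullptr);

                  // 50%-opaque red; Render colors are premultiplied 16-bit.
                  XRenderColor red = { 0x7fff, 0, 0, 0x7fff };
                  XRenderFillRectangle(dpy, PictOpOver, dst, &red, 0, 0, w, h);

                  XRenderFreePicture(dpy, dst);
              }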

          • by caseih ( 160668 )

            Think you need to watch Daniel Stone's presentation on why X11, well, sucks: https://www.youtube.com/watch?... [youtube.com]

Long story short, X11 has been hacked to add things like GLX and Composite, and those things essentially go around the X protocol. X is pretty much a complicated and poorly-working IPC mechanism nowadays. Yet even if you removed all the cruft, you'd be left with the fact that X makes a very poor IPC mechanism. Also, with GLX and compositing, X is no longer network transparent. It's network-capable, but it's not

            • by sjames ( 1099 )

              I tunnel over ssh to a remote server that runs an X application which pops open a window on my workstation that is indistinguishable from any other window on my desktop. That includes cut/paste just working. Try that with any of the alternatives you suggested. Now try throwing Xpra into the mix. Good luck with that.

Yes, perhaps once Wayland quits hand-waving and spending all its time talking about how it's going to cram itself down everyone's throat, it might get some traction.

        • by Kjella ( 173770 )

Also, tear-free video seems to be one god-awfully big workaround for limitations in X. The stated goal of Wayland was a system in which "every frame is perfect, by which I mean that applications will be able to control the rendering enough that we'll never see tearing, lag, redrawing or flicker." I doubt he'd say that if X had no tearing, lag, redrawing or flicker, which seem like rather huge deficiencies to me.

          • by sjames ( 1099 )

Except that it doesn't really come up in X outside of conditions Wayland doesn't even handle. It's easy to design a car that never has a fatal crash: just leave out the wheels and engine.

  • by Dcnjoe60 ( 682885 ) on Monday March 24, 2014 @02:24PM (#46565909)

    Just one question. If the display server is of such minimal importance in the big scheme of things, then why did Canonical develop their own?

Namely GUI toolkit developers, driver developers and DE developers. None of the above are very fond of Mir.
Fact is that I can remotely control almost any computer running on almost any platform using countless variations on the theme of 'render to the display -- wait, let's render to an image instead and then send it over the wire.'

    The reason so much attention is being put on display servers is as a distraction from the real problems, such as the fact that so much attention is being put on the display servers. They're not the weak point, there are a lot of them, and one exercise that remains THROUGHOU
    • by amorsen ( 7485 )

      I believe the main difference is that remote X is rootless. People like that. Somehow they forget that remote X is non-persistent, uselessly slow, and that session integration is almost entirely missing.

      Do not misunderstand me, I would love a persistent rootless remote display with decent performance and session integration. Alas, X is not it.

  • by timeOday ( 582209 ) on Monday March 24, 2014 @02:47PM (#46566161)
The most significant transition of a Unix-style OS to the desktop is OS X. The most significant transition of a Unix-style OS to handhelds is Android. X was left behind both times. Why did they re-invent the wheel if there was no need to do so?
    • Re: (Score:3, Insightful)

      by Anonymous Coward

Not only that, but each example (NeXT/OS X and Android) is an undeniable success story.

      X11 has severe limitations, like a cramped network abstraction layer that can't share windows or desktops with multiple people. Supposedly the NX server gets around this, but the X11 people haven't shown any interest in adopting the NX features.

People need displays that make it look like the computer is operating smoothly (instead of barfing text-mode logs here and there when transitioning between users, runlevels, etc.).

      People ne

    • Re: (Score:3, Interesting)

      by Uecker ( 1842596 )

      And both are now incompatible ecosystems. Do we want to repeat this nonsense?

WRT OS X, there is history. Back in the days of NeXT, Jobs & co. decided to use Display PostScript for a variety of reasons. A few of them: X back then was huge, ungainly and a total beast to work with given the limited memory and cycles available (the NeXTstation used a 25MHz 68040); their team was never going to be able to morph X into an object-oriented platform, which NeXT definitely was; Display PostScript was Adobe's new hotness; the NeXT folks could write drivers for DPS that worked with the Texas Instruments signal processor (TM-9900? I forget), which was truly amazingly fast at screen manipulation; and the X architecture didn't fit well with either Display PostScript or the TM-9900.

      In 2001 I had a NeXTstation that I added some memory and a bigger disk to. The machine was by then more than 10 years old. For normal workstation duties, it was faster than my brand new desktop machine due entirely to the display architecture. But compiling almost anything on that 25MHz CPU was an overnight task - I had one compile that ran three days.

It's the not-invented-here syndrome, plus the fact that Google and Apple want to create a fleet of applications that are totally incompatible with other platforms in order to lock users into their respective platforms. Obviously business and political reasons, nothing to do with technical issues. X would have been a fine display platform for either, but then the platforms would be compatible with mainstream Linux distros and you would have portable applications, so your users wouldn't be locked into you

Is there an actual story here, or is it just two groups of open-source developers having a difference of opinion on whether display servers are important? The summary doesn't suggest this disagreement is having any real ramifications for Ubuntu/Kubuntu.

  • He's Right (Score:4, Insightful)

    by Luthair ( 847766 ) on Monday March 24, 2014 @02:51PM (#46566239)
The Canonical developer said that users don't care, which I think is pretty accurate. The majority of users won't care as long as applications run and are responsive.
I've been a KDE user for a very long time; hated Gnome. Frankly, I hate Unity even more than Gnome (which is a lot). I've seen KDE do things that Microsoft can't, using less CPU and with better overall performance, and it's always been compatible with X. So now we have a next-gen X and Canonical wants to splinter the market. Nothing new there, they did it with Unity. Fragmentation is good for some people, and I have to wonder if Canonical gets paid to cause fragmentation? Sure, they have a product that is "their

  • wayland, systemd (Score:5, Interesting)

    by bzipitidoo ( 647217 ) <bzipitidoo@yahoo.com> on Monday March 24, 2014 @03:30PM (#46566719) Journal

    Figured systemd would get dragged into this.

One of the biggest problems with systemd is simply documentation. System administrators have a lot of learning invested in SysV and BSD, and systemd changes nearly everything. Changing everything may be okay, may even be good, but doing it without explanation is bad no matter how good the changes are. I'd like to see a succinct explanation, with data and analysis to back it up. Likely there is such an explanation and I just don't know about it, but the official systemd site doesn't seem to have much. I'd also like to see a list with common system admin commands on one side and systemd equivalents on the other, like this one [fedoraproject.org] but with more entries. For example, to look at the system log, "less /var/log/syslog" might be one way, and in systemd it is "journalctl". To restart networking it might be "/etc/rc.d/net restart", and in systemd it's "systemctl restart network.service". Or maybe the adapter is wrongly configured, DHCP didn't work or received the wrong info, in which case it may be something like "ifconfig eth0 down" followed by an "up" with corrected IP addresses and gateway info.

When information is not available, it looks suspicious. How can we judge whether systemd is ready for production? Whether it is well designed? And whether the designers aren't trying to hide problems, aren't letting their egos blind them to problems? To be brusquely told that we shouldn't judge it, we should just accept it, and indeed ought to stop whining and complaining and be grateful someone is generously spending their free time on this problem, because we haven't invested the time to really learn it ourselves and don't know what we're talking about, doesn't sit well with me.

Same goes for Wayland and Mir. Improving X sounds like a fine idea. But these arguments the different camps are having -- get some solid data, and let's see some resolution. Otherwise they're just guessing and flinging mud. It makes great copy, but I'd rather see the differences carefully examined and decisions made, not more shouting.

    • SailfishOS, running on the current Jolla device, is quite smooth and nice, in a way that my N9 (despite the slickness of the design of the UI) never was. Both were underpowered hardware for their times, but Wayland allows the kinds of GPU-accelerated and compositing-oriented display that allow for what people are increasingly used to from other OSes.

      Now, in terms of systemd I'm more on your side, there's certainly a baseline of arrogance that the primary devs have shown. On the other hand, they seem sometim

      • by Golthur ( 754920 ) on Monday March 24, 2014 @05:02PM (#46567887)

My main issue with systemd is that it is monolithic; it violates the fundamental Unix philosophy in a most egregious way, and whenever anyone comments on this, we are (to quote the GP) "brusquely told that we shouldn't judge it, we should just accept it, and indeed ought to stop whining and complaining and be grateful someone is generously spending their free time on this problem, because we haven't invested the time to really learn it ourselves and don't know what we're talking about".

        We used to have separate, replaceable systems for each aspect of systemd - e.g. if you didn't like syslog, there was syslog-ng, or metalog, or rsyslog; each different and meant for a different purpose. Now, it's "all or nothing" - except that it's becoming progressively more difficult to opt for "nothing" because it's integrating itself into fundamental bits like the kernel and udev.

      • Wayland allows the kinds of GPU-accelerated and compositing-oriented display that allow for what people are increasingly used to from other OSes

No, it doesn't. That has nothing to do with Wayland at all. Wayland is just a compositor protocol that lets clients hand over pixel buffers (and signal when they are ready) and receive input events.

Any GPU-accelerated work is done by OpenGL or other parts of the stack, just as on X.org.
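        To illustrate just how thin that layer is, a minimal sketch (assuming libwayland-client):

            // Minimal Wayland client: the protocol itself only gives you a
            // connection over which buffers and input events are exchanged;
            // GPU acceleration happens elsewhere in the stack (EGL/OpenGL).
            #include <wayland-client.h>
            #include <cstdio>

            int main()
            {
                wl_display *display = wl_display_connect(nullptr); // $WAYLAND_DISPLAY
                if (!display) {
                    std::fprintf(stderr, "no Wayland compositor found\n");
                    return 1;
                }
                // A real client would now bind wl_compositor/wl_shm via the
                // registry, attach a buffer to a surface, and commit it.
                std::printf("connected to Wayland display\n");
                wl_display_disconnect(display);
                return 0;
            }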

  • by Eravnrekaree ( 467752 ) on Monday March 24, 2014 @06:47PM (#46569223)

Obviously, the display server does matter to users. If users cannot use a whole set of applications because those applications are not compatible with Distro X's display server, that is a problem for users. This can be addressed by distros standardizing on display servers that use the same protocol. It is also possible, though more complex, for distros using different display protocols to support each other's protocols by running a rootless copy of a display server that speaks the other protocol. Relying on widget sets to support all display protocols is too unreliable, as we are bound to end up with widget sets that do not support some display protocols. Needless to say, it is best to have a single standard; it would have been easiest and best if Canonical had gone with Wayland and actually worked with Wayland to address whatever needs they had.

It's also true that a new display protocol wasn't really necessary. The issue with X was the lack of vertical synchronization; X already has DRI, XRender, XComposite, MIT-SHM, and so on for other purposes. An X extension could have been created to give an application the timing of the display: the milliseconds between refreshes, the time of the next refresh, etc. X applications could then use this timing information, starting their graphics operations just after the last refresh, and use an X command to place a finished graphics pixmap for a window into a "current completed buffer" for that window, allowing double buffering. This could be either a command providing the memory address, or a shared memory location where the address would be placed. All of the current completed buffers for all windows would then be composited in the server to generate the master video buffer for drawing to the screen. There is a critical section during which the assembly of the master video buffer occurs; any completed-buffer swap by an application during that time would have to be deferred to the next refresh cycle.

    A new XSetCompletedBuffer call could be created to provide a pointer to a pixmap. This is somewhat similar to XPutPixmap or setting the background of an X window, but given that XPutPixmap might do a memory copy it may not be appropriate, since the point is to hand the X server a pointer to the pixmap to use in the next screen redraw. Said pixmaps would be used as drawables for OpenGL operations, traditional X primitives, and such, so the scheme would work with all of the existing X drawing methods. The pixmaps are of course transferred using MIT-SHM. It's also possible to use GLX to do the rendering server-side; for X clients used over the network, GLX is preferable, as otherwise the entire pixmap for the window would have to be sent over the network. The GLX implementation already allows GL graphics to be rendered into a shared memory pixmap. Currently, however, some drivers only support GL rendering into a pbuffer, which is not available in client memory at all; the DRI/GEM work is supposed to fix this, and the X server should be updated to support GLX drawing to a pixmap with all such DRI drivers. A sketch of this hypothetical flow follows.
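    A sketch of that proposal (hypothetical: XGetRefreshTiming and XSetCompletedBuffer do not exist in Xlib; they stand in for the proposed protocol additions, and wait_until/draw_frame are assumed helpers):

        // HYPOTHETICAL sketch of the vsync extension proposed above.
        #include <X11/Xlib.h>

        struct RefreshTiming {
            long next_refresh_us;  // when the next scanout begins
            long interval_us;      // microseconds between refreshes
        };

        // Proposed calls -- these do NOT exist in Xlib:
        void XGetRefreshTiming(Display *dpy, Window w, RefreshTiming *out);
        void XSetCompletedBuffer(Display *dpy, Window w, Pixmap finished);

        void wait_until(long time_us);                 // assumed helper
        void draw_frame(Display *dpy, Pixmap target);  // assumed helper

        void render_loop(Display *dpy, Window win, Pixmap back[2])
        {
            int current = 0;
            for (;;) {
                RefreshTiming t;
                XGetRefreshTiming(dpy, win, &t);  // query display timing
                wait_until(t.next_refresh_us);    // start just after a refresh
                draw_frame(dpy, back[current]);   // any X/GLX drawing to the pixmap
                // Publish the finished pixmap; the server composites all
                // "completed buffers" into the master frame each refresh.
                XSetCompletedBuffer(dpy, win, back[current]);
                current ^= 1;                     // classic double buffering
            }
        }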

Another issue is window position and visibility and how they relate to vertical synchronization. Simplistically, the refresh cycle can be broken into an application render period and a master render period. If the X server has a whole pixmap buffer for a window, it grabs a snapshot of the window visibility/position state at the beginning of the master rendering period and uses that to generate the final master pixmap by copying the visible regions of windows into the master buffer.

It can be a good idea to give applications the option to render only the areas of their windows that are visible; this saves CPU resources and also avoids needless rasterization of offscreen vector data. In order to do this, applications would need to access visibility data at the beginning of the application render period. Applications would then have to, instead of providing a single

  • "Refuted" doesn't mean that someone disagrees. "Refuted" means that someone has conclusively proved that something is incorrect.
Equal parts incompetent and arrogant. They're like a fledgling Microsoft, except now everybody else is moving on from yesteryear and won't waste time indulging such foolishness anymore.
Why aren't you linking to the blog post itself, instead of the article quoting it?
