KDE and Canonical Developers Disagree Over Display Server

sfcrazy (1542989) writes "Robert Ancell, a Canonical software engineer, wrote a blog post titled 'Why the display server doesn't matter', arguing that 'display servers are the component in the display stack that seems to hog a lot of the limelight. I think this is a bit of a mistake, as it’s actually probably the least important component, at least to a user.' KDE developers, who have long experience with Qt (the toolkit Canonical is moving toward for its mobile ambitions), have disputed Ancell's claims, saying that the display server does matter."

  • Personal blog (Score:3, Informative)

    by Severus Snape ( 2376318 ) on Monday March 24, 2014 @01:52PM (#46565571)
    It's a personal blog, NOTHING to do with Canonical at all. Yay for the "let's all hate Canonical" bandwagon.
  • by slack_justyb ( 862874 ) on Monday March 24, 2014 @04:01PM (#46567045)

    The whole point of all of this, X/Wayland/Mir, is getting closer to the video card without having to yank one's hair out whilst doing it. Why would one need closer interaction with the bare metal? If you've ever used Linux and seen tearing while moving windows around, then you've hit on one of the reasons why getting closer to the metal is a bit more ideal.

    With that said, let's not fool ourselves into thinking, "OMG, they just want access to the direct buffers!" That wouldn't be correct. However, developers want an ensured level of functionality in their applications' visual appearance. If the app shows whited-out menus for half a second before the menu options blink into place, there is something very wrong.

    It was pretty clear that with X, politically speaking, developers couldn't fix a lot of the problems, thanks to legacy and to the foaming-at-the-mouth hordes that would call said developer out for ruining their precious X. You can already see those hordes in all the "take X and my network transparency from my cold dead hands" comments. It is, to a degree, those people, along with a few other reasons, that provided the impetus for Wayland. You just cannot fix X the way it should be fixed.

    Toolkits understand that display servers, and pretty much the whole display stack in general, suck. Granted, there are a few moments of awesome, but they are largely outweighed by the suck factor; usually when you code an application, you'll note that you gravitate toward the "winning" parts of the toolkit you're using versus the pure suck ones. Qt has a multitude of such workarounds for all the OSes/display servers it supports, be that Windows, Mac, X11, and so on. Likewise for GTK+, to a lesser extent, but that is what makes GTK+ a pretty cool toolkit. Because let's face it, no display stack is perfect at delivering every single developer's wish to the monitor. Likewise, no toolkit is perfect either. The GNOME and KDE people know this; they write specific code to get around some of the "weirdness" that comes with GTK+ or Qt. Obviously, that task is made slightly easier with Wayland and the way it allows a developer to send specifics to the display stack or even to the metal itself.
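    To make that abstraction concrete, here is a minimal sketch (assuming Qt 5 or later with the widgets module; the file name and build line are illustrative). The same binary runs on X11 or Wayland, with the backend selected at startup through Qt's standard QT_QPA_PLATFORM platform-plugin mechanism, so the application code itself never touches the display server:

    ```cpp
    // hello.cpp -- build (illustrative): g++ hello.cpp -fPIC $(pkg-config --cflags --libs Qt5Widgets)
    #include <QApplication>
    #include <QLabel>

    int main(int argc, char *argv[])
    {
        // Qt loads a platform plugin at startup; which display server sits
        // underneath is invisible to this code. For example:
        //   QT_QPA_PLATFORM=xcb     ./hello   (run against X11)
        //   QT_QPA_PLATFORM=wayland ./hello   (run against Wayland)
        QApplication app(argc, argv);
        QLabel label("The toolkit hides the display server from the app");
        label.show();
        return app.exec();
    }
    ```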

    Projects like KDE and GNOME have to write window managers, and a lot of the time those window managers have to work around some of the most sucktacular parts of the underlying display server. However, once those parts are isolated, the bulk of the remaining work is done in the toolkit. So display servers do matter a bit to the desktop environments, because they need to find all of the pitfalls of a given display server and work around them. Sometimes that can be as simple as a patch to the toolkit or to the display server upstream; sometimes it can be as painful as a kludge that looks like the dream of a madman. It all depends on how far upstream a patch needs to go to be effective, and how effective it would be for other projects all around.

    That leads into the problem with Mir. Mir seems pretty committed to its own ends. If KDE has a problem with Mir that could be easily fixed with a patch to Mir, or horribly fixed by a kludge in KDE's code base, it currently seems the Mir team wouldn't be so happy-go-lucky about accepting the patch if it could potentially delay Ubuntu or break some future feature unknown to anyone outside of Mir. Additionally, you have the duplicated-work argument as well, which I think honestly holds a bit of water. I fondly remember the debates over aRts and Tomboy. While I think it's awesome that Ubuntu is developing their own display server, I pepper that thought with, "don't be surprised if everyone finds this whole endeavor a fool's errand."

    I think the NIH argument gets tossed around way too much, like it's FOSS McCarthyism. Every team has its own goals, and by that very nature every team would qualify as NIH heretics. Canonical's idea is this mobile/desktop nexus of funafication, and Mir helps them drive that in a way better suited to them. That being said

  • Re:Shh... (Score:5, Informative)

    by Eravnrekaree ( 467752 ) on Monday March 24, 2014 @04:14PM (#46567249)

    This is all wrong. X has something called GLX, which allows you to do hardware-accelerated OpenGL graphics: GLX allows OpenGL commands to be sent over the X protocol connection. The X protocol is carried over Unix domain sockets when both client and server are on the same system, which is very fast; there is no network-transparency latency when X is used locally in this manner. MIT-SHM additionally provides shared memory for transmission of image data. Only when applications are being used over a network do they need to fall back to sending data over TCP/IP. Given this, the benefits of having network transparency are many, and there is no downside, because an application running locally can use Unix domain sockets, MIT-SHM, and DRI.
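    A minimal sketch of the local-transport point (assuming Xlib; the display string ":0" is illustrative): a DISPLAY value with no hostname makes Xlib connect over the local Unix domain socket rather than TCP:

    ```cpp
    // local_x.cpp -- build (illustrative): g++ local_x.cpp -lX11
    #include <X11/Xlib.h>
    #include <cstdio>

    int main()
    {
        // ":0" carries no hostname, so Xlib connects over the local Unix
        // domain socket (typically /tmp/.X11-unix/X0); "somehost:0" would
        // fall back to TCP instead.
        Display *dpy = XOpenDisplay(":0");
        if (!dpy) {
            std::fprintf(stderr, "cannot open display\n");
            return 1;
        }
        std::printf("connected; X protocol %d.%d\n",
                    ProtocolVersion(dpy), ProtocolRevision(dpy));
        XCloseDisplay(dpy);
        return 0;
    }
    ```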

    X has also had DRI for years, which allows an X application direct access to the video hardware.
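    A hedged sketch of that (assuming Xlib with the GLX headers; error handling trimmed): a client can request a direct-rendering context and ask GLX whether it actually got one:

    ```cpp
    // dri_check.cpp -- build (illustrative): g++ dri_check.cpp -lGL -lX11
    #include <GL/glx.h>
    #include <cstdio>

    int main()
    {
        Display *dpy = XOpenDisplay(nullptr);
        if (!dpy) return 1;
        int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
        XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
        if (!vi) return 1;
        // The final True asks for a direct-rendering (DRI) context; GLX
        // falls back to indirect rendering if direct access is unavailable.
        GLXContext ctx = glXCreateContext(dpy, vi, nullptr, True);
        std::printf("direct rendering: %s\n",
                    glXIsDirect(dpy, ctx) ? "yes (DRI)" : "no (indirect)");
        glXDestroyContext(dpy, ctx);
        XFree(vi);
        XCloseDisplay(dpy);
        return 0;
    }
    ```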

    As for support for traditional X graphics primitives, these have no negative impact on the performance of applications that do not use them and use a GLX or DRI channel instead. It's not as if hardware-accelerated DRI commands have to pass through XDrawArc, so the existence of XDrawArc does not impact a DRI operation in any significant way. The amount of memory this code consumes is insignificant, especially when compared to the amount used by Firefox. Maybe back in 1984 a few kilobytes was a lot of RAM; that is when many of these misconceptions started. But the fact is, those issues were common to any GUI that would run on 1980s hardware. People are just mindlessly repeating a myth started in the 1980s that has little relevance today. Today, X uses far less memory than Windows 8 does, and the traditional graphics commands consume an insignificant amount that is not worth worrying about, and which is needed to support the multitude of X applications that still use them.
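    For reference, this is roughly what the legacy core-protocol path looks like (a minimal sketch assuming Xlib; a "circle" is just a full 360-degree XDrawArc). Applications on the GLX/DRI path simply never call into it:

    ```cpp
    // arc.cpp -- build (illustrative): g++ arc.cpp -lX11
    #include <X11/Xlib.h>

    int main()
    {
        Display *dpy = XOpenDisplay(nullptr);
        if (!dpy) return 1;
        int s = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, s), 0, 0,
                                         200, 200, 1, BlackPixel(dpy, s),
                                         WhitePixel(dpy, s));
        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XMapWindow(dpy, win);
        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == Expose)
                // Core-protocol primitive: angles are in 1/64 degree,
                // so 0..360*64 draws a full circle.
                XDrawArc(dpy, win, DefaultGC(dpy, s), 50, 50, 100, 100,
                         0, 360 * 64);
            if (ev.type == KeyPress)
                break;
        }
        XCloseDisplay(dpy);
        return 0;
    }
    ```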
