Vista's Graphics To Be Moved Out of the Kernel

Tiberius_Fel writes "TechWorld is running an article saying that Vista's graphics will not be in the kernel. The goal is obviously to improve reliability, alongside the plan to make most drivers run in user mode." From the article: "The shift of the UI into user mode also helps to make the UI hardware independent - and has already allowed Microsoft to release beta code of the UI to provide developers with early experience. It also helps make it less vulnerable to kernel mode malware that could take the system down or steal data. In broader terms, this makes Windows far more like Linux and Unix - and even the MacOS - where the graphics subsystem is a separate component, rather than being hard-wired into the OS kernel."

  • The Bloat Divides? (Score:4, Insightful)

    by ackthpt ( 218170 ) * on Friday December 16, 2005 @02:15PM (#14273541) Homepage Journal

    So this is like cell division. The bloat of Windows divides into the Kernel and UI pools.

    Taking this article into account [slashdot.org], it seems clear why the graphics card requirement is so massive. However, if this much is being pulled out of the kernel, then why still such a massive minimum RAM requirement?

    "if you hold down ctrl+shift+alt and tap the backspace you can watch a video of steve wrecking a chair"

    • Sounds like when Windows was a GUI shell on top of DOS.
    • by TykeClone ( 668449 ) * <TykeClone@gmail.com> on Friday December 16, 2005 @02:18PM (#14273576) Homepage Journal
      Graphics were not in the kernel in NT 3.51. NT 4.0 moved graphics into the kernel, which hurt stability.
      • by mmkkbb ( 816035 ) on Friday December 16, 2005 @02:29PM (#14273693) Homepage Journal
        It's funny. Microsoft already did this with printer drivers. In Windows NT 3.51 they lived in userspace; in Windows NT 4 they moved into the kernel. In 2000, they moved back into userspace, but with a completely different architecture from 3.51. Windows Server 2003 still supports the NT4 model of kernel-mode printer graphics drivers, but that might change with Vista.
        • by cmacb ( 547347 ) on Friday December 16, 2005 @02:49PM (#14273845) Homepage Journal
          I don't know about the printer drivers, but for video, they claimed moving to kernel space made the drivers faster (and it did seem to). Unfortunately, it introduced an unenforced requirement that video drivers be fully debugged, which, due to the nature of the business, they never were. A once rock-stable machine on 3.51 that could not be made stable on 4.00 without switching from ATI to Nvidia video cards is what first gave me doubts about whether I wanted to continue running Windows at home (or ATI video cards, for that matter).

          The speed boost just wasn't worth it, in the same way that the functionality of run-on-load macros in Word documents isn't worth the trouble they cause. Maybe this is a sign that the true tech types are gaining influence over the marketing types at the company (but somehow I doubt it). For the sake of those still running Windows, I hope they take all non-essentials out of kernel space and shoot for stability over speed or features.
          • by dgatwood ( 11270 ) on Friday December 16, 2005 @04:19PM (#14274768) Homepage Journal
            Don't confuse moving the Windows GUI to user space with moving video drivers to user space. The two are not one and the same. Even in Linux, most of the video driver bits live in the kernel. Same in Mac OS X. I'm sure the same will be true in Vista.

            Because of the nature of video, it would be impractical for video drivers to live anywhere BUT in the kernel. (See also: "microkernel".) Neither Linux nor Mac OS X puts video drivers in user space. Doing so would not be a bright idea. (I would also note that Linux and Mac OS X seem to be quite stable with ATI driver bits in their kernels.... :-)

            Drivers should be in the kernel if (a) at least one of their primary clients exists in the kernel (e.g. disk controller drivers), (b) they service a large number of clients directly (e.g. /dev/random), or (c) real-time performance is critical to the correct operation of the device (e.g. audio/video).

            Historically, video cards typically had only one client at a time. These days, the windowing system (WindowServer in Mac OS X, X11 in Linux, the Windows GUI layer) is usually the primary client, with the OS kernel as a secondary client (command-line console, panic text, boot console, etc.). Further, the graphics hardware can be driven directly by an application for things like full-screen games. In Mac OS X, the graphics hardware is also often used for other tasks, e.g. with CoreImage. Graphics cards also depend on direct access to hardware interrupts for adequate performance. Moving the drivers into user space would make adequate performance for these sorts of tasks nearly impossible.

            Printers are the other extreme. They don't have their own hardware interrupts the way PCI devices do, so if you're depending entirely on a faked software interrupt, the driver might as well be in user space. A printer will still print correct copy if the data arrives more slowly (up to a point, anyway). They only serve a single client (a local print spool of some sort) and cannot do more than one thing at the same time. Thus, printer drivers make no sense in the kernel.
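
            To make that tradeoff concrete, here is a minimal sketch of what a user-space driver loop looks like on the Linux side, using the UIO (Userspace I/O) framework; the /dev/uio0 device and its register layout are illustrative assumptions, not anything Vista will actually ship:

            /* uio_loop.c - minimal user-space driver loop (Linux UIO).
             * Each interrupt is delivered by a blocking read() on the
             * device node; that extra trip through the scheduler is the
             * latency cost the post above worries about for video. */
            #include <fcntl.h>
            #include <stdint.h>
            #include <stdio.h>
            #include <sys/mman.h>
            #include <unistd.h>

            int main(void)
            {
                int fd = open("/dev/uio0", O_RDWR);   /* hypothetical device */
                if (fd < 0) { perror("open"); return 1; }

                /* Map the device's first register window into user space. */
                volatile uint32_t *regs = mmap(NULL, 4096,
                                               PROT_READ | PROT_WRITE,
                                               MAP_SHARED, fd, 0);
                if (regs == MAP_FAILED) { perror("mmap"); return 1; }

                for (;;) {
                    uint32_t irq_count;
                    /* Blocks until the kernel forwards the next interrupt. */
                    if (read(fd, &irq_count, sizeof irq_count) != sizeof irq_count)
                        break;
                    regs[0] = 1;   /* acknowledge the device (device-specific) */
                    printf("serviced interrupt #%u\n", irq_count);
                }
                return 0;
            }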





      • Everything old is new again!

        Here is a link to an article on Microsoft's TechNet discussing the benefits of moving it from userspace to kernelspace.

        http://www.microsoft.com/technet/archive/winntas/plan/kernelwp.mspx [microsoft.com]

        Here is the overview:

        Microsoft® Windows NT® Workstation 4.0 and Windows NT Server 4.0 include a change in the implementation of Win32® graphics-related application programming interfaces. These changes are transparent to applications and users, yet they r
        • Yeah, before Linux was considered a threat by MS, performance was king, and getting the most out of a 486 meant moving things like the UI into the kernel. Now that MS sees Linux as a threat, stability is king.

          In fact, I'd like to see an ability in Vista Server to shut down the UI completely unless someone is actually using the system in an interactive mode.
      • by Cyberblah ( 140887 ) on Friday December 16, 2005 @03:03PM (#14273947) Homepage
        In 2013 they'll put the graphics driver back in... and shake it all about.
    • Uh...just because the code moves from the kernel into userspace doesn't mean it DISAPPEARS. It still needs RAM.

      And the massive graphics card requirement is because the new graphics system does a lot more than it used to. It actually puts 3D hardware to use, among other things. Which is what we want, isn't it?

      Oh, I forgot. Microsoft can be criticized for not having a given feature, and it can ALSO be criticized for including TOO MANY features.
      • by SquadBoy ( 167263 ) on Friday December 16, 2005 @02:27PM (#14273662) Homepage Journal
        No. They get criticized for not doing features properly. My iBook with a lowly 1.33GHz proc, a mere gig of RAM, and nothing more than an ATI Mobility Radeon 9550 with 32 megs of video memory looks *stunning* and does things that, from what we have seen so far, Vista can only dream about.

        The simple fact is that it's possible to do great graphics, at least for a GUI, without needing a bloody supercomputer (yes, yes, I *know*, I'm overstating for effect). Basically, if they did these things properly, they would see a lot of the hating go away.
  • by ejoe_mac ( 560743 ) on Friday December 16, 2005 @02:16PM (#14273554)
    Who needs the overhead of a windowing GUI on a server?
    • by ZeroExistenZ ( 721849 ) on Friday December 16, 2005 @02:22PM (#14273615)
      Who needs the overhead of a windowing GUI on a server?
      Windows(tm) administrators...
    • by whodunnit ( 238223 ) on Friday December 16, 2005 @02:25PM (#14273641)
      You mean turning off the monitor doesn't do that?!?!?
    • by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Friday December 16, 2005 @02:25PM (#14273647) Homepage
      Actually, testers have been given something kinda like that. It's called Windows Server Core, and it boots up with just a console window open - no start menu, desktop, configuration dialogs, or anything else like that.

      Unfortunately, it doesn't come with IIS, which is a real disappointment, though its developers have shown interest in adding additional services.
    • What overhead? It's not like the cycles are being spent if you're not doing anything with said GUI.
      My problem is that you can't buy a motherboard without on-board video (for, say, a 1U server) for less than one with it. WTF? I would much rather have a server with good BIOS support for headlessness that doesn't supply _power_ to the graphics chip than an OS that can ignore the fact that the chip is there.
    • Command-line-only Windows?

      Redundant?
      • Redundant?
        ...or oxymoron [wikipedia.org]
      • by cmacb ( 547347 ) on Friday December 16, 2005 @03:03PM (#14273943) Homepage Journal
        No, this is a classic example of an oxymoron (contradiction in terms).

        Whereas I am an example of an ordinary moron.

        I worked at a very large worldwide shop that saved itself a whole cycle of hardware upgrades by turning off the screen savers on its servers. Most of the admins were running the fanciest 3D, CPU-intensive screensavers they could find. When anyone complained about performance, they would go to the server, check Task Manager, and come back with: "well, it's only running at 20%". Finally someone thought to check the numbers remotely and discovered that the screensaver was by far the biggest hog. I don't think most Windows users, even the "pros", realize how much resource is involved in something as simple as moving the mouse, moving a window around, or resizing it.

        They made Windows so "easy" that even an idiot could administer it and...

        Oh, never-mind.
    • by nmb3000 ( 741169 ) on Friday December 16, 2005 @02:30PM (#14273700) Journal
      Who needs the overhead of a windowing GUI on a server?

      Ah, yes. Just what we all want. Command-line administration of Active Directory and Exchange.

      Windows Server 2003's GUI overhead is extremely small in comparison to the other tasks it's performing. Besides, it's not a matter of being "scared" of a CLI; in fact, pretty much all the Windows sysadmins I know (including myself) use the Windows command line on a regular basis. Believe it or not, a GUI really can give a boost to speed and efficiency when it comes to server management, regardless of what the zealots here might say.
      • That doesn't stop you from using a graphical remote administration client; look at how many there are for other things like DBMSes. Sure, it's harder to secure, but if you are administering Windows remotely, you probably have to go through that already.
      • by Necrotica ( 241109 ) <cspencerNO@SPAMlanlord.ca> on Friday December 16, 2005 @02:38PM (#14273767)
        Ah, yes. Just what we all want. Command-line administration of Active Directory and Exchange.

        Never used or even seen NetWare, or used any UNIX, have you?

        There is no NEED for a GUI on the server. Keep the admin tools on the client! If you can't administer AD from your client, restart the AD admin service on the server.

        Admins should only physically touch servers when there is a hardware problem or network problem. If you are sitting on the console of your server using the GUI, I would suggest that you are not a very experienced sysadmin.
    • As I was thinking about this, I realized that this is like MS-DOS on steroids. I know this analogy is not entirely correct, but wasn't the point of Win9x that it put the GUI INTO the kernel?

       
      • As I was thinking about this, I realized that this is like MS-DOS on steroids.

        Well yeah, in the same sense that Unix is DOS on steroids.

        I know this analogy is not entirely correct, but wasn't the point of Win9x that it put the gui INTO the kernel?

        No. The point of Win9x was to look like Mac OS. Moving the GUI into the kernel was a poorly thought out premature optimization. Microsoft is doing the right thing by changing that.
        • Actually, you are leaving out some important details. First of all, we're not talking about Win9x, which bounces between real and protected mode so that it can execute 16-bit code, which is pretty abundant in Win9x itself, let alone anything else you might run. You don't have to be in the kernel to destroy kernel memory when you're in real mode.

          In the NT world, however, the Kernel and GDI spaces were merged when NT got the Windows 95 shell, in NT 4.0. This was very unfortunate because as many (or perhap

  • by digitalgimpus ( 468277 ) on Friday December 16, 2005 @02:17PM (#14273565) Homepage
    You know when they market this you'll see it as

    New! - Microsoft's Exclusive Patented Technology allows for graphics outside the kernel, to provide higher stability.

    New! - Microsoft's Revolutionary Technology allows for graphics outside the kernel, to provide higher stability.

    Just wait.... they'll make it sound like a new concept, rather than a copycat.
    • Don't laugh, the patent applications get filed tomorrow.
    • Uh, didn't early versions of NT run drivers in a separate protection ring to improve stability? And didn't Microsoft abandon this scheme because it was incredibly slow, and throw everything into the same protection ring to improve performance? So now they are going back to a scheme they themselves previously abandoned because of poor performance, and calling it an "innovation"?!?

    • In broader terms, this makes Windows far more like Linux and Unix - and even the MacOS - where the graphics subsystem is a separate component, rather than being hard-wired into the OS kernel.

      I know it makes you all hip and tres cool to bash Microsoft, but they actually had this design wa-a-a-y back in NT 3.5/3.51. That would be in the mid/late 1990s for you youngsters in the audience. They made the change to the current model in NT 4.0.

      • So they're copying something from NT 3.51 and marketing it as a new feature.

        It's still a copycat.
      • Just because they had it in Windows NT 3.5/3.51 does not mean they didn't copy the concept then as well. In fact, it was no more original when they introduced it then than it is now.

        With that said, most of the "bashing" towards MS won't be against them for doing this, since it is necessary and so obviously had to be done that it hurts, but rather against how they market these changes and their disregard for others' ideas. Ironically, this is why they have the market share they do tod
      • Yes, it was copycat even wa-a-a-y back in NT 3.5; X11 had this architecture in the mid-1980s.
        • Re:YES a COPYCAT (Score:3, Insightful)

          by Glonk ( 103787 )
          I don't understand what makes a copycat these days. How is something as basic as running graphics in usermode something people can be called "copycats" for doing? You can run it in kernelmode or usermode; it's not as if switching from one to the other is an incredible innovation people would never come to on their own. Clearly they looked at X11 and thought "what a magnificent technology! Let us copy its architecture..." or not...

          There are design tradeoffs made in doing operating system des
      • by oGMo ( 379 ) on Friday December 16, 2005 @02:38PM (#14273764)
        I know it makes you all hip and tres cool to bash Microsoft, but they actually had this design wa-a-a-y back in NT 3.5/3.51. That would be in the mid/late 1990s for you youngsters in the audience. They made the change to the current model in NT 4.0.

        Yeah, well, where the drivers reside aside, is the OS still based on the assumption that it's a GUI? Specifically, do we still have the idiotic and juvenile system architecture that passes window parameters to low-level system calls? Like, say, CreateProcess taking window parameters [microsoft.com]?

        Or have they actually revamped the kernel so it no longer requires or assumes a GUI at all? Have they finally caught up to 1970?
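
        For anyone who hasn't run into it, the complaint is concrete: Win32's CreateProcess takes a STARTUPINFO structure whose fields are largely window geometry and show state. A minimal sketch (the notepad.exe path is just an example):

        /* startinfo.c - window parameters flowing through the core
         * process-creation call (build with a Win32 toolchain). */
        #include <windows.h>

        int main(void)
        {
            STARTUPINFO si = { sizeof(si) };   /* cb member must be set */
            PROCESS_INFORMATION pi;

            si.dwFlags     = STARTF_USEPOSITION | STARTF_USESIZE
                           | STARTF_USESHOWWINDOW;
            si.dwX         = 100;              /* window x position */
            si.dwY         = 100;              /* window y position */
            si.dwXSize     = 640;              /* window width */
            si.dwYSize     = 480;              /* window height */
            si.wShowWindow = SW_SHOWNORMAL;    /* initial show state */

            if (CreateProcess(TEXT("C:\\Windows\\notepad.exe"), NULL,
                              NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
                WaitForSingleObject(pi.hProcess, INFINITE);
                CloseHandle(pi.hThread);
                CloseHandle(pi.hProcess);
            }
            return 0;
        }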

    • by NCraig ( 773500 ) on Friday December 16, 2005 @02:38PM (#14273766)
      This is just priceless.

      Day in and day out, Microsoft takes a beating around here for putting too many irrelevant subsystems into their kernel.

      And then, when Microsoft makes a positive design change, they are attacked for HYPOTHETICAL marketing. You don't know how (or if) they'll market this.

      I can see it now: Bill Gates shows up at your front door, hands you a million dollars, and walks away. You run to your computer and submit the headline, "BILL GATES IS A TRESPASSER."
      • Didn't you hear? Microsoft bashing is guaranteed karma, man.

        Why would someone need to think of something original when they can just keep recycling the same old jokes over and over?

        I'm no MS fanboy myself, considering some of the mistakes they've made in the past. However, I'm disappointed with what passes for humor here sometimes.
  • took a while.... (Score:3, Interesting)

    by chewy_fruit_loop ( 320844 ) on Friday December 16, 2005 @02:17PM (#14273567) Homepage
    it's taken them a while to see they were wrong when they put them in kernel space
    but didn't they do this in NT (4, I believe) because it was too slow otherwise?

    mind you, with the specs needed for a Vista machine, who's going to notice......
    • I believe it was 3.5x when it got moved. Not 100% sure, though; it's been a long, long time since I've seen a 3.5x or 4.x system. I think it was external in NT 3.1, but I'm not even sure about that. Just that the graphics system on 3.1 was as slow as you'd ever seen.
  • by isecore ( 132059 ) <isecore&isecore,net> on Friday December 16, 2005 @02:18PM (#14273573) Homepage
    It also helps make it less vulnerable to kernel mode malware that could take the system down or steal data.

    And it also helps with all the stupid DRM that the MPAA/RIAA wants to force down our throats! Yay, when I wanna watch DVDs on my computer in the future I have to get a new OS, new monitor, new graphics card. Thank you for that innovation!
    • huh? (Score:4, Interesting)

      by phorm ( 591458 ) on Friday December 16, 2005 @03:47PM (#14274349) Journal
      It seems to me that moving graphical operations to userland would make them more hackable rather than more secure. One userland app could more easily preempt another userland app, and something kernel-loaded could be used to trick a userland app into ignoring copy protection.

      Also, I believe that a userland application might be a little easier to decipher, and you wouldn't need to know as much about the hidden tricks that the Windows kernel might be using (or you could intercept the various calls).
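
      To make the call-interception point concrete: on any Unix, the same idea is a one-file LD_PRELOAD shim - user-space code can be interposed on, while kernel-mode code can't be displaced this way. A minimal sketch (the choice of write() is arbitrary):

      /* hook.c - build: gcc -shared -fPIC -o hook.so hook.c -ldl
       * run:   LD_PRELOAD=./hook.so ls
       * Every write() in the target process now passes through us. */
      #define _GNU_SOURCE
      #include <dlfcn.h>
      #include <stdio.h>
      #include <unistd.h>

      ssize_t write(int fd, const void *buf, size_t count)
      {
          static ssize_t (*real_write)(int, const void *, size_t);
          if (!real_write)
              real_write = (ssize_t (*)(int, const void *, size_t))
                           dlsym(RTLD_NEXT, "write");

          /* Log via real_write directly so we don't recurse into this
           * hook, and skip fd 2 to avoid logging our own log lines. */
          char msg[64];
          int n = snprintf(msg, sizeof msg,
                           "[hook] write(fd=%d, %zu bytes)\n", fd, count);
          if (fd != 2 && n > 0)
              real_write(2, msg, (size_t)n);

          return real_write(fd, buf, count);
      }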
  • Open GL Drivers? (Score:5, Insightful)

    by Gr8Apes ( 679165 ) on Friday December 16, 2005 @02:18PM (#14273575)
    So, does this mean that MS's stated goal of "deprecating" OpenGL in favor of DirectX is now irrelevant? If the graphics subsystem is outside the kernel, it can be replaced by another driver that does not make OpenGL play second fiddle to DirectX. Perhaps this is a good thing?
    • Re:Open GL Drivers? (Score:5, Informative)

      by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Friday December 16, 2005 @02:31PM (#14273714) Homepage
      Wrapping OpenGL does suck, but they are also wrapping Direct3D 9 and lower. So it's more than just Carmack's games that won't run at top speed :(

      I haven't seen any clear stance on whether they will allow hardware vendors to implement their own ICDs for fullscreen mode, but the current LDDM beta drivers from nVidia do not have OpenGL in them.
  • Steal (Score:3, Funny)

    by Anonymous Coward on Friday December 16, 2005 @02:18PM (#14273578)

    It also helps make it less vulnerable to kernel mode malware that could take the system down or steal data.

    You mean copyright infringe data! The data's not going anywhere.

    For a site that complains about this whenever it comes up, get it right!
  • So, correct me if I'm wrong, but will the movement of the UI into user mode allow one to tailor the environment according to the user's preferences as opposed to just the developer's, harkening back to the days of uwm-only X, or Microsoft?
  • by lkcl ( 517947 ) <lkcl@lkcl.net> on Friday December 16, 2005 @02:19PM (#14273585) Homepage
    the biggest mistake MS made was to listen to the marketing droids (Windows 95 ist faster! Nein!) and to move the video drivers into kernelspace in NT 4.0.

    to do that, they had to rip out the entire terminal server subsystem, to the extent that in order to fix it for NT 4.0 and NT 5.0 (aka Windows 2000) they had to _buy_ a company that had managed to do it (Citrix, I think it was - someone correct me, here).

    in NT 3.5 and 3.51, the screen driver, being userspace, could crash and leave the machine, as a server, completely unaffected. if you _did_ need to use the screen, as long as you knew what keys to press, or where to move the mouse.... :) but if it was a Terminal Server - WHO CARED! keep it running!

    now - surprise, surprise - hardware is fast enough, memory is cheap enough, and the [stupid] decision has been revisited.
    • Yep, Microsoft is crying about that decision, alright, as it lies awake at night, bitterly depressed on its big bed made of solid gold padded with cash.
    • by Anonymous Coward on Friday December 16, 2005 @02:26PM (#14273651)
      MS hasn't bought Citrix. They do, however, license technology from them to provide Terminal Services/Remote Desktop.
    • by Anonymous Coward on Friday December 16, 2005 @02:36PM (#14273756)
      No. This is not true and represents a misunderstanding about how the Win32 API is implemented in NT. For legacy reasons, many Windows programs would use the GUI APIs for internal IPC (why oh why wasn't LPC exposed to userspace?). Anyhow, this meant that the Win32 subsystem server (CSRSS) ran both the GUI and the rest of Win32.

      So a crash in the GUI (running inside the context of CSRSS) would result in all Win32 apps being shut down. Perhaps the file services (part of SRV.SYS) would remain in the event of a GUI crash, but any applications running in a Win32 context would be lost. That was the reasoning that allowed M$ to temper DaveC's fears and move the GUI to WIN32K.SYS in NT 4.0.

      I'm not defending the approach. I disagree with the GUI-in-kernelspace idea as well. I'm merely pointing out the way things went historically. Ideally, the GUI services and kernel services would be separate APIs in Win32 so that server and console applications could live without the GUI. But compatibility was a major goal...

      Personally, I would love to ditch the Windows GUI but keep the NT kernel. The NT kernel (despite the typical conditioned response of the average slashdotter) is quite good in many areas. The GUI API of Windows was inferior to OS/2's Presentation Manager (the big change being client area -> client window). Too bad OS/2 PM can't be run under the NT kernel. Oh well, it almost happened...
       
  • BSOD (Score:4, Funny)

    by DiGG3r ( 824623 ) on Friday December 16, 2005 @02:22PM (#14273612)
    Does this mean we can customize our own BSOD?
  • I wonder if there will be some other design problems we can laugh at instead.
  • by dada21 ( 163177 ) * <adam.dada@gmail.com> on Friday December 16, 2005 @02:26PM (#14273652) Homepage Journal
    Microsoft programmers found this solution by modifying a secret Vista file called WIN.INI with the following line:

    shell=command.com

    Then, they added the GUI in another secret Vista file called AUTOEXEC.BAT containing one line:

    win.com
  • Obligatory: (Score:4, Insightful)

    by mrwiggly ( 34597 ) on Friday December 16, 2005 @02:30PM (#14273710)
    Those who fail to understand UNIX are doomed to reimplement it. Poorly.
  • Apple and Microsoft (Score:5, Interesting)

    by penguin-collective ( 932038 ) on Friday December 16, 2005 @02:31PM (#14273715)
    X11 was conceived 20 years ago and was an incredibly forward-looking design; both Macintosh and Windows have now moved to an architecture very similar to it.

    Unfortunately, technical and historical facts won't stop people from making bogus claims about their pet architecture. There are still lots of Mac zealots going around complaining about X11's supposedly inefficient "network transparent architecture" even though the Mac has pretty much the same architecture and is, if anything, less efficient. I imagine it will be the same with Microsoft zealots, although many of them will, in addition, claim that this architecture was invented by Microsoft.
    • There are still lots of Mac zealots going around complaining about X11's supposedly inefficient "network transparent architecture"

      That's funny - the Mac zealots I talk to are going around complaining about Starbucks' supposedly inefficient "vanilla latte foaming technique".

      (Ya, I am a Mac zealot... busted. I have X11 installed as well, came with Tiger.)

  • by Via_Patrino ( 702161 ) on Friday December 16, 2005 @02:32PM (#14273726)
    In other words, OpenGL will suck, because DirectX will have direct access to the kernel while OpenGL (and other graphics APIs) will be delayed by innumerable error checks at the interface.
  • by tjstork ( 137384 ) <todd@bandrowsky.gmail@com> on Friday December 16, 2005 @02:47PM (#14273828) Homepage Journal
    In Windows NT 3.5, the graphics subsystem was outside of the kernel; they moved it in for 4.0, and now they are undoing that.
  • by bored ( 40072 ) on Friday December 16, 2005 @03:01PM (#14273932)
    Repeat after me: the GDI is not part of the kernel, it simply runs in kernel space (AKA at higher privilege). Unlike Linux, where everything in kernel space is basically compiled against the kernel headers and bound to a kernel version, NT has both user- and kernel-mode APIs. To say the graphics system is hard-wired to the kernel is like saying my hello-world program is part of libc. Moving it back to userspace should be about as hard as it was to move it into kernel space back in the NT 3.51 timeframe.
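
    For contrast, this is what "bound to a kernel version" looks like on the Linux side - the canonical hello-world module, which must be built against the headers of the exact kernel it loads into (nothing Vista-specific here):

    /* hello.c - minimal Linux kernel module. Build it against one
     * kernel's headers and it will refuse to load into another. */
    #include <linux/init.h>
    #include <linux/module.h>

    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello: now running in kernel space\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");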

  • Nothing's changed (Score:5, Informative)

    by 511pf ( 685691 ) on Friday December 16, 2005 @03:05PM (#14273959)
    Microsoft has already responded to this article by saying that nothing has changed: http://www.microsoft-watch.com/article2/0,2180,1902540,00.asp [microsoft-watch.com]
  • Vista... (Score:4, Interesting)

    by HawkingMattress ( 588824 ) on Friday December 16, 2005 @03:19PM (#14274074)
    Despite the general feeling here, I'm starting to be really interested in Vista...
    It seems they have fixed almost everything that was wrong with Windows. I mean:
    • Explorer rewritten from scratch. This was long, long overdue, and that alone would make me interested in Vista. Explorer makes Windows look buggy sometimes, but it's only explorer.exe which sucks...
    • Monad. A real shell, which could possibly be much more powerful than, say, bash + standard Unix commands (or cygwin...)
    • They're moving the graphics subsystem and all the bloody drivers into userland. That means it will be dead stable, period. 2000 and XP are already at least as stable as Linux, and maybe more. After that, I'm sorry, but Linux will compare to Windows the way Win95 did to Linux...
    • A hardware-accelerated graphics system, à la Quartz. It should rock, even if they'll probably make it look and act totally stupid out of the box, overusing their new power...

    And people complain that there is nothing new in Vista, phew... I mean, if they manage to do all those things, and do them the right way as they seem determined to (for once...), it will be damn worth a new release...
    And no, I'm not a Microsoft fanboy; I've been using Linux since '97 and I really like it where it shines. But if you have even a little objectivity, you can't say the stuff they're putting in here isn't interesting...

  • Now if... (Score:4, Funny)

    by Eric Damron ( 553630 ) on Friday December 16, 2005 @03:23PM (#14274102)
    "... this makes Windows far more like Linux and Unix - and even the MacOS - where the graphics subsystem is a separate component, rather than being hard-wired into the OS kernel."

    Now if Microsoft could just find a way to separate the internet browser from the OS...

    ** cough, choke, gag...**
  • by TheZorch ( 925979 ) <thezorch.gmail@com> on Friday December 16, 2005 @03:32PM (#14274160) Homepage
    Wrappers for other graphics protocols have been around for a long time. You can still get Glide wrappers for games that specifically require a 3dfx Interactive Voodoo graphics card. Most of the newest wrappers work great; eVoodoo, for instance, is one of the best.

    What wrappers do, in Windows, is take the function calls meant for Glide (or whatever graphics subsystem the program needs) and translate them into function calls that DirectX can understand. I've heard of Glide wrappers for Linux that translate into OpenGL.

    Anyway, DirectX in Vista will have something like a wrapper for OpenGL, since there will not be any actual OpenGL drivers in the OS. This could be good or bad, but the move does make sense: instead of having two separate graphics subsystems in Vista, they are narrowing it down to just one while keeping the ability to run programs that require OpenGL. Most game developers have left OpenGL far behind anyway, Id Software being the big exception, a company that has used OpenGL almost exclusively for years. It wouldn't be too hard to add OpenGL optimization to the wrapper code so programs that use OpenGL won't suffer a performance hit.

    I can also understand why Vista will need high graphics and memory requirements. The whole reason the GUI was put into the kernel for NT 4.0 was improved speed, at the cost of stability; taking it out again will improve stability, something Windows needs badly, and today's faster CPUs and graphics card GPUs shouldn't really have a problem with Vista. Built-in video on motherboards usually isn't that good, but this move might convince manufacturers to start offering built-in video of much better quality, or to switch to using standard video cards instead, which is what they should have been doing in the first place.
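
    As a rough sketch of the translation described above: a wrapper exports the legacy API's entry points and forwards them to the replacement subsystem. The GrVertex layout here is simplified, and backend_draw_triangle() is a hypothetical stand-in for the Direct3D (or OpenGL) call a real wrapper would make:

    /* glidewrap.c - sketch of an API wrapper: same exported name and
     * shape as the legacy Glide call, so old binaries keep working. */
    typedef struct { float x, y, z, r, g, b, a; } GrVertex;   /* simplified */

    /* The replacement renderer the wrapper targets (hypothetical). */
    extern void backend_draw_triangle(const GrVertex *v0,
                                      const GrVertex *v1,
                                      const GrVertex *v2);

    void grDrawTriangle(const GrVertex *a, const GrVertex *b,
                        const GrVertex *c)
    {
        /* A real wrapper would convert coordinate conventions and
         * device state here before handing off to the new API. */
        backend_draw_triangle(a, b, c);
    }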
  • by 192939495969798999 ( 58312 ) <info@de[ ]moore.com ['vin' in gap]> on Friday December 16, 2005 @03:35PM (#14274186) Homepage Journal
    Microsoft has decided to rename this new, kernel-distant graphics interface "X". Fringe groups note that this name is already taken, but Microsoft is in talks to buy said fringe groups.
  • missing one (Score:5, Funny)

    by 955301 ( 209856 ) on Friday December 16, 2005 @03:36PM (#14274209) Journal
    In broader terms, this makes Windows far more like Linux and Unix - and even the MacOS -

    or DOS and Windows 3.1.

    *ducks*
  • by IdleTime ( 561841 ) on Friday December 16, 2005 @03:40PM (#14274257) Journal
    If the GUI is no longer part of the kernel, can I now get a Vista Server without the GUI? Just barebones Vista with the new command line shell?
  • by takis ( 14451 ) on Friday December 16, 2005 @04:19PM (#14274772) Homepage Journal
    TechWorld is running an article saying that Vista's graphics will not be in the kernel. The goal is obviously to improve reliability, alongside the plan to make most drivers run in user mode." ... In broader terms, this makes Windows far more like Linux and Unix - and even the MacOS - where the graphics subsystem is a separate component, rather than being hard-wired into the OS kernel."


    Yeah, running graphics drivers in kernel space is just plain ugly... Luckily for us Linux users, we can get full graphics acceleration by running the "userspace" NVIDIA kernel module ;-) Certainly increases stability!

    size /lib/modules/2.6.12-10-k7/volatile/nvidia.ko
       text    data    bss     dec    hex filename
    2476901  947920   6916 3431737 345d39 /lib/modules/2.6.12-10-k7/volatile/nvidia.ko

  • by RhettLivingston ( 544140 ) on Friday December 16, 2005 @05:10PM (#14275432) Journal
    They can do this now because enough high-level graphics functionality is moving into, and being required of, the graphics hardware that the performance loss on most machines will be acceptable. I.e., it is heavily tied to the hardware requirements Vista has added. If they couldn't do it without losing significant performance, they wouldn't. Performance sells before stability.
