Graphics Software

OpenGL 1.3 Spec Released 193

Posted by CmdrTaco
from the for-the-graphics-guru-and-the-wannabe dept.
JigSaw writes "The OpenGL Architecture Review Board announced the new OpenGL 1.3 specification (1.8 MB pdf). In OpenGL 1.3, several additional features and functions have been ratified and brought into the API's core functionality. New features include cube map texturing, multisampling, new texture modes that provide more powerful ways of applying textures to rendered objects, a compressed texture framework, and more. Let's all hope that GL can catch up with Direct3D now: with the latest DirectX 8, Direct3D has made big strides in feature set, speed, and even reduced bloat when coding for it, while OpenGL 1.2 was released more than two years ago and did not offer as much."
This discussion has been archived. No new comments can be posted.


  • I think what the OpenGL ARB are doing is just right: they're introducing as part of the OpenGL standard features that the current crop of hardware vendors can support. In the case of texture shaders, I've only heard of two companies (ATI and NVidia) supporting it in their hardware. Come the time when more vendors support it, I'm sure it'll find its way into the specs.

    Contrast this with D3D in DX8. If the hardware doesn't support acceleration of this feature, would it do it in software? If it did, would users want it? Is there a way to choose that if a feature is implemented only in software, that it not be used at all?
    • Contrast this with D3D in DX8. If the hardware doesn't support acceleration of this feature, would it do it in software? If it did, would users want it? Is there a way to choose that if a feature is implemented only in software, that it not be used at all?

      It depends; yes, you can tell it not to do it if you want, programmatically. But ultimately, this is why games have render engine feature options screens. So you can turn off stuff your system can't handle, or handles badly.

      Of course, you have to design for it - and this is NOT a problem that is DirectX specific - the same issues also apply to OpenGL.

      Simon
  • by Anonymous Coward
    Developers now have a card-vendor-neutral way to access programmable shaders (pixel and vertex shaders) from DX8. But does OpenGL 1.3 have anything comparable, or do we have to resort to NVidia or ATI extensions? If that is the case, OpenGL will be hard hit unless a standard vendor-neutral extension is added soon.
  • Does anyone know what John Carmack thinks of this whole deal? Thanks in advance
  • Catch up to what? (Score:5, Insightful)

    by Mongoose (8480) on Wednesday August 15, 2001 @04:48PM (#2116483) Homepage
    GL is modular and relies on extensions. This produces a far more stable API and allows for the latest bleeding edge tech.

    Comparing DX, or better D3D, to GL is like comparing UNIX to Windows. You can either allow modular extensions or rewrite the API every release, thus breaking backwards compatibility for no reason. GL extensions from ATI and Nvidia are much easier to use for development than D3D imho.

    Only moogles may disagree. We still love you dan! =)
    • If you want to use code written for a previous version of DirectX with new releases of the SDK, all you have to do is #define what version of the API you want to write to. Using COM the DLL also supports previous versions of the interface, giving backward compatibility. (At least, that's my understanding of it.)

      • I was also hinting at the problem with closed source DX games. If you run DX 8.0 you won't be able to play older DX games. If you bought every version of TombRaider ( dear god, why? ), and installed DX 8.0 you would be able to play only the last version.

        I work on OGL clones of DX games as a hobby. Once I have my OGL based engine under the game, then I don't have to worry about newer API versions breaking my binaries, much less my source.

        I for one still play old DOS games and I enjoy having a 'long shelf life' in the titles I buy. I no longer buy DX based games even if I really want them now. If Morrowind doesn't move off DX ( xbox/"pc" game ) I will never buy it. I waited years for Morrowind too...
        • Umm, what?? I have games that ran on DX5 and they run just fine under DX8. I think you may have misunderstood what someone said. In my experience as a die-hard gamer, DX is VERY backward compatible. Not to say that OpenGL is not good; I use OpenGL for some games and DX8 for others. It really depends on the individual performance. As a consumer, MORE CHOICES IS GOOD.
        • Um, hey, I hate MS as much as the next guy, but this is straight out wrong. DX is binary backwards compatible [and probably source, but I wouldn't know, I don't make DX programs]. The software does some sort of COM QueryInterface() call to get an interface to the API that it expects. A DX2 game gets IDirectDraw2, etc. [i think] If what you were saying were true you'd hear millions of angry gamers torching Microsoft. Any DX2 or higher game should work fine with DirectX 8. (I suppose DX1 aka Game SDK should work too but I wouldn't bet on it, not that anyone ever really used it anyhow.)
  • Apple (Score:3, Insightful)

    by mr100percent (57156) on Wednesday August 15, 2001 @04:57PM (#2118089) Homepage Journal
    Microsoft will support it eventually in a year, as they want DirectX more. Apple will probably have it on MacOS X by the end of the year, while Linux will be somewhere in between.

  • by briggsb (217215) on Wednesday August 15, 2001 @04:45PM (#2122345)
    Until the specs support this kind [bbspot.com] of functionality.
  • The funny bit... (Score:2, Insightful)

    by Papa Legba (192550)
    If history is any judge, 10 years from now we will not be able to believe that we watched such crappy specs and liked them.

    Gamer 1 " Good god this quake 3 is SUCH 24 bit color, how could they stand it?"
    Gamer 2 "Totally!"

    • by keesh (202812)
      More likely to be resolution... I for one can't tell 24bit colour from anything higher, but I sure can tell the difference between 640x480 and 1600x1200...
      • Re:The funny bit... (Score:3, Interesting)

        by Lussarn (105276)
        I for one can't tell 24bit colour from anything higher

        When it comes to 3D, you and everybody else can. That's because 24 bit color only has 8-bit integer precision per color. With today's games every pixel on the screen is rendered from many textures, lightmaps, bumpmaps etc. This gives errors when there are only 8 bits of precision.

        John Carmack (Id Software) has stated that more precision is needed on future GFX cards.

        Consider this:
        In floating-point math,
        120 / 50 * 30 / 2 = 36
        In integer math the answer would be 30 (calculating from left to right).

        From what I understand, the cards nowadays use integer math.
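The poster's arithmetic is easy to check; a small sketch (plain Python, not GPU code, purely to illustrate the truncation) evaluates the same chain left to right both ways:

```python
# Evaluate 120 / 50 * 30 / 2 left to right, alternating / and *.

def chain_float(values):
    """Float math: no precision lost between steps."""
    result = float(values[0])
    for i, v in enumerate(values[1:]):
        result = result / v if i % 2 == 0 else result * v
    return result

def chain_int(values):
    """Integer math: truncate after every step, the way an
    8-bit-per-channel pipeline effectively does."""
    result = values[0]
    for i, v in enumerate(values[1:]):
        result = result // v if i % 2 == 0 else result * v
    return result

print(chain_float([120, 50, 30, 2]))  # 36.0
print(chain_int([120, 50, 30, 2]))    # 30 (120 // 50 = 2 already lost 0.4)
```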

      • Actually, in the case of 32bit color, the 8 added bits aren't actually used for anything other than padding. At least when just running a desktop.
        • Re:The funny bit... (Score:2, Informative)

          by YeeHarr (187241)
          Actually the extra 8 bits are used for alpha.

          Alpha is one way to do the smoke/fog effects.

          Alpha is the transparency of a material/texture.

           • That is somewhat true. However, when I set my desktop to 32bit color, where are these alpha channels being used? Maybe if Windows or Linux supported truly transparent windows we would be using 32bit color, but for the most part those extra 8 bits might as well be padding.

             I heard somewhere that Photoshop can use a 32bit display to accelerate redraws of multilayer images (letting the hardware sort out the transparency). It would be possible using, say, OpenGL, but I'm not sure that Windows will do this, even in 32bit mode. I really don't have any clue about whether or not the Mac would.
          • Yes, but the color space of the framebuffer is still 8 bits of red, green and blue. 24 bits.
        • Depends on the encoding scheme.

          The Cineon format sometimes used in digital film work (compositing, etc) uses 3 10-bit channels using a logarithmic scale. Depends what your ultimate display medium will be.
      • No one can. (Score:3, Informative)

        by BradleyUffner (103496)
        Ummm, no one can. 32 bit color and 24 bit color both use 8 bits for red, green, and blue. The extra 8 bits are used for alpha blending or just to align the color to something the computer can copy faster. With 3D cards the bit depth is important because of the way colors are combined with textures and all kinds of funky stuff, but after all the rendering is done, 24 bit and 32 bit are exactly the same.
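The point about the extra byte can be made concrete; a minimal sketch (Python, using ARGB packing, one common framebuffer layout) shows the 24 colour bits are identical whether the high byte holds alpha or padding:

```python
def pack_argb(a, r, g, b):
    """Pack four 8-bit channels into one 32-bit word (ARGB order,
    one common framebuffer layout)."""
    return (a << 24) | (r << 16) | (g << 8) | b

def rgb_of(pixel):
    """Drop the high byte: the remaining 24 colour bits are the same
    whether that byte held alpha or padding."""
    return pixel & 0x00FFFFFF

opaque = pack_argb(0xFF, 0x12, 0x34, 0x56)
padded = pack_argb(0x00, 0x12, 0x34, 0x56)
print(rgb_of(opaque) == rgb_of(padded))  # True: identical displayed colour
```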
        • Tell you what. Take about 5 or 6 images of the same size 32 bit RGBA, keep them fairly simple. Now mathematically blend them together in various ways, such as multiplying, adding, etc. Now look at the result. This causes an effect known as "banding" pretty easily.

          Take those same graphics and convert each channel to a float between 0 and 1 and do your blends like that, clamping at 1.0 and 0.0 if necessary. When you convert back to 32 bit color, the image will probably not show the artifacts found in the simply blended version. You can achieve the same results by using a higher precision framebuffer in a 3D card, be it 64 or 128 as some people are suggesting.

          While the human eye is only capable of seeing about 10 million colors (I think that's the right number?), 16.7 million plus an alpha channel isn't enough when you do too many blends, simply because each blend lowers your precision.
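The precision loss from repeated blends can be demonstrated on a single channel; a sketch (plain Python; the multiply blend and the channel values are arbitrary examples) comparing 8-bit intermediates with float intermediates:

```python
def blend_8bit(x, y):
    """Multiply blend on 8-bit channels: truncates after each step."""
    return (x * y) // 255

def blend_float(x, y):
    """Same blend with channels kept as floats in [0, 1]."""
    return x * y

a8 = 180            # 8-bit starting value
af = 180 / 255      # the same value as a float channel
for v in (200, 190, 210, 205):   # arbitrary blend factors
    a8 = blend_8bit(a8, v)
    af = blend_float(af, v / 255)

# After only four blends the truncated chain has already drifted:
print(a8, round(af * 255))  # 69 70
```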

          • But what I was trying to say is that when you take a 32bit bitmap and a 24bit bitmap, and you display them on a display set to 32 bits, you will see the exact same image as if you displayed them on one set to 24 bits. Of course when you are doing image blending you need more depth for each color, but that isn't really what's meant by 24-bit color and 32-bit color; that's just how the math is done. The bit depth of 24 or 32 bit color means nothing to how the image is actually displayed in the end, just to how it's calculated if you do blending and other stuff on the way to the screen.
        • Ummm. no one can. 32 bit color and 24 bit color both use 8 bits for red, green, and blue.

          Well, and even if 32 bit used more bits for each color, 24 bit already stores more colors than we mere humans are able to see (typically).
      • yeah, my mother-in-law likes to play freecell on her computer at 640X480 on the 17 inch monitor that i bought her
      • OK, in professional circles, we don't say 24bit, we say 8bit (ie, 8 bits per channel, 3 channels).

        Anyway, if we can't tell the difference between 8bit color and higher color depths, then why do so many professional video rigs record and play back in 10 bit color, and why was Shrek recorded to film in 16bit color?

        The answer is that when an image emphasizes one color over another, banding can occur. Also, it isn't hard to find instances where detail was compromised by contrast in digital images. The sections about digital cameras on www.photo.net discuss this issue somewhat.
    • So, you mean all gamers are going to become valley girls in the future?
  • Hmm. (Score:2, Redundant)

    by citizenc (60589)
    2001-08-14 19:23:44 OpenGL 1.3 Specifications Released (articles,graphics) (accepted)
    *Twitch*
  • And the reason? It's the API of choice for John Carmack, the main programmer of Quake, Quake 2 and Quake 3: Arena. YES, you can find DirectInput code in Quake 3, but can you find a SINGLE LINE of D3D code in ANY of the games coming out of ID Software? No chance in hell.

    Then, there's this very nice company called Epic Games. It created Unreal and Unreal Tournament (while trying to push Glide) and is now doing Unreal Warfare. These guys provide nice competition to ID Software and YES, they use Direct3D. Now take a modern computer with an NVIDIA card (chances are you already have one anyway) and play some Quake 2 and Quake 3... See the framerates? OK... Now start up Unreal/UT, select D3D as the renderer and... do I really have to tell you how low your FPS will go?

    Start up Half-Life, the most popular online 3D FPS game at the moment (due to CS), and try switching back and forth between the OpenGL and D3D renderers and compare the framerates. I know some of you are going to scream that HL is based on the Quake engine, etc, but just to let you know, only 20% of the HL engine code comes from Quake.
    • Then, there's this very nice company called EpicGames. It created Unreal and Unreal Tournament (while trying to push Glide) and are now doing Unreal Warfare. These guys provide nice competition to ID Software and YES, they use Direct3D. Now take a modern computer with an NVIDIA card (chances are you already have one anyways) and play some Quake2 and Quake3...See the framerates ? OK... Now start up Unreal/UT, select D3D as the renderer and...do I really have to tell you how low will your FPS go ?

      The fact is, the renderer in UT produces much better looking results than in Quake, and is designed for larger maps too. It also handles mirrors, etc. much better. It even has procedural texturing built in. So this isn't a valid comparison; UT runs slower because it does MORE. (And looks better for it)

      Start-up Half-Life, the most popular online 3D FPS game at the moment (due to CS), try switching back and forth between the OpenGL and D3D renderers and compare the framerates. I know some of you are going to scream that HL is based on the Quake engine, etc, but just to let you know, only 20% of the HL engine code come from Quake.

      Clue: most of that 20% is the RENDERING CODE, which is still largely OpenGL based. They have a wrapper layer between OpenGL and DirectX for the DirectX output. That's where the slowdown comes in. (For example, surfaces don't have to be decomposed into triangles in OpenGL; in DirectX they have to be... and in Half-Life, none of the surfaces are decomposed into triangles by preprocessing the data, which is why it's slower; OpenGL drivers are optimized for this kind of work... but they're doing the conversion themselves.)

      Simon
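For what it's worth, the decomposition being described is trivial for convex surfaces; a sketch (a hypothetical helper, not Half-Life code) of the usual triangle-fan split such a wrapper would have to perform:

```python
def fan_triangulate(polygon):
    """Split a convex polygon (a vertex list) into triangles by fanning
    from the first vertex - the conversion a D3D wrapper must do for
    surfaces that OpenGL drivers would accept directly."""
    v0 = polygon[0]
    return [(v0, polygon[i], polygon[i + 1])
            for i in range(1, len(polygon) - 1)]

quad = ["A", "B", "C", "D"]
print(fan_triangulate(quad))  # [('A', 'B', 'C'), ('A', 'C', 'D')]
```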
    • Then, there's this very nice company called EpicGames. It created Unreal and Unreal Tournament (while trying to push Glide) and are now doing Unreal Warfare. These guys provide nice competition to ID Software and YES, they use Direct3D. Now take a modern computer with an NVIDIA card (chances are you already have one anyways) and play some Quake2 and Quake3...See the framerates ? OK... Now start up Unreal/UT, select D3D as the renderer and...do I really have to tell you how low will your FPS go ?

      I really wish that people like you would stop talking about things you know nothing about. Unreal Tournament is limited by the CPU, not the graphics card, because it uses a slow visibility-determination scheme that favors its software renderer. Remember that UT is based on Unreal, which was in development long before 3D hardware came about.

      It's NOT slow because of D3D, as the next version of the engine (where this issue is fixed) will prove.

      Even Carmack will tell you that.

      • Unreal Tournament is limited by the CPU, not the graphics card...

        BS. Every 3D game out there nowadays is limited by both. Lots of RAM will help you run UT better, and so will a graphics card; the CPU comes third. When it comes to many other games, both the CPU and the graphics card are equally important. I have yet to see a modern game that actually relies more on the CPU than on the video card; this doesn't mean that the CPU can't be a bottleneck though.

  • Implementation (Score:4, Insightful)

    by Midnight Thunder (17205) on Wednesday August 15, 2001 @05:00PM (#2133200) Homepage Journal
    Now that we have the specs, how long before we can expect implementations that actually take advantage of them?

    As to this issue of Direct3D having a bigger feature set et al., this is only a worthy argument if we are talking MS-Windows. Outside of the Windows platform Direct3D means nothing, since it isn't available there. OpenGL is currently the only cross-platform solution worth mentioning (please correct me if there is another). IMHO, the SDL game API made the right move in using OGL for its graphics, since the last thing we need is yet another graphics API that is just barely supported. Maybe one thing that would help OGL, especially in games, is if more noise were made about it.
    • Now that we have the specs, how long before we can expect implementations that actually take advantage of them?

      They already do [nvidia.com]. ;)
    • You apparently are missing the fine points of the term "cross-platform" as found in the Microsoft dictionary. In this context, "cross-platform" means both Win9X family and WinNT family. From what I understand, Win2k now ships with DirectX, so that makes DirectX "cross-platform".

      Now that the Win9X line is scheduled to wither, and the WinNT line is to be the One True Solution with the release of WinXP, the term "cross platform" becomes irrelevant, just like "Office" without the "MS " prefix. Of course at some point in the future, "cross-platform" may need to be resurrected, with the release of the X-Box or WinCE-9.3.

      On a slightly more serious vein...

      The issue of "implementations that actually take advantage" crossed with OpenGL extensions that may differ from vendor to vendor is a bit of a red herring. After all, not all cards are going to have these new whiz-bang features. Someone enlighten me if Direct3D still has the accursed "capability bits" that are under-architected for telling true capabilities, and may leave a game falling back on software rendering without warning - unless the game is 'written to the cards' in the first place.

      IMHO, the newest games will be written to a reasonable common denominator, then with a few extra modes, first to a "better common denominator", and finally to the full feature set of a few of the newest cards. This isn't a "write once, run on any Direct3D with the BEST eye-candy" situation by any means.
  • Why do they use PDF? It's impossible to read on screen, and printing pollutes the environment!

    .:[ Stop using PDF! ]:.

    Use HTML or something, you ppl!

    Slogan: Stop nVidia from blocking 3dfx completely!

  • by stikves (127823) on Wednesday August 15, 2001 @05:14PM (#2138612) Homepage
    Some people claim the death of OpenGL, while others want the community to keep it alive.

    But the "evil" API Direct3D is already (mostly) available for Linux. Haven't you heard of TransGaming (http://www.transgaming.com/ [transgaming.com])?

    They are currently working on a D3D port to WINE [winehq.com].

    (If you don't know, their license is not fully "free", but they will make it "free" when they get enough "support".)

    [ By the way, I don't think OpenGL will die anytime soon, because "serious" graphics work is not only "games". Have you used SGI? They do not support D3D or anything of the sort ]

    • Take, for example, 3D Studio MAX. It already has a D3D renderer. Looking at the API specs, you don't need OpenGL to do the stuff you want, since D3D offers you the same stuff, and at the API level, not with crappy extensions.

      A lot of people without a clue will scream and cry that OpenGL is faster, easier and can do more, but frankly, if that were the case, more people would use OpenGL in the games they write. OpenGL is a nice API and I use it a lot, my library DemoGL is based on it, and if OpenGL dies it will break my heart, but when I think realistically, OpenGL is practically dead on Windows: the ICD connector DLL (opengl32.dll) isn't updated by Microsoft, the documentation SUCKS compared to D3D, and e.g. ATI's OpenGL driver is horrible, making developing OpenGL software much harder than developing D3D based software.

    • Ahhh transgaming, another big idea with a very inactive CVS.
  • Um... (Score:5, Informative)

    by Penrif (33473) on Wednesday August 15, 2001 @05:04PM (#2139083) Homepage
    Give OpenGL some credit here. In some ways, it's D3D that has to catch up. Here's how it was described to me by a Very Smart Person [uiuc.edu] who works with nVidia a lot. nVidia comes to Microsoft saying "we want these features", and Microsoft says "Okay, do it this way". The engineers at nVidia get frustrated about being limited by Microsoft's model, implement new features anyway, and put them in OpenGL extensions. So, D3D has a better spec (arguably), but OpenGL has access to all the features.
    • That's the horrible thing in OpenGL land: you have to check out extensions, look up what they do, and dig through crappily written PDF files to learn how to use 'em. And then find out that the feature isn't implemented AT ALL on other cards. Like register combiners, or the nVidia proprietary shader extensions. If you use 'em in your code, forget it on an ATI board. It's nice that nVidia's engineers thought they rule the world and implemented the features in OpenGL anyway, but in the long run, it only hurts them: there is no consistent model for e.g. register combiner functions or vertex/pixel shaders in OpenGL; for each vendor you have to dig deep in that vendor's extension docs. And I can tell you: that isn't funny anymore. Ever tried to look up a decent 2 page doc that describes nicely, without presentation sheets from Marketing, how e.g. cubemapping has to be implemented? Thankfully there is now an ARB extension set that does this. But don't expect nVidia to give you a nice document that describes it nice and easy. Like the D3D docs do.
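The lookup dance being complained about starts with the extension string; a minimal sketch (Python; the string below is a made-up example of what glGetString(GL_EXTENSIONS) returns) of the token-wise check, which avoids the classic substring false match:

```python
def has_extension(ext_string, name):
    """Token-wise membership test on a space-separated extension string.
    A plain substring search would wrongly match e.g. 'GL_ARB_texture'
    against 'GL_ARB_texture_cube_map'."""
    return name in ext_string.split()

# Made-up example of what glGetString(GL_EXTENSIONS) might return.
exts = "GL_ARB_multitexture GL_ARB_texture_cube_map GL_NV_texture_shader"

print(has_extension(exts, "GL_ARB_texture_cube_map"))  # True
print(has_extension(exts, "GL_ARB_texture"))           # False: not a token
```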
    • Re:Um... (Score:3, Insightful)

      by be-fan (61476)
      Microsoft says "Okay, do it this way".
      >>>>>
      Is this really such a *bad* thing? K & R wrote the UNIX API and said "do it this way." Is anyone complaining? IEEE standardized the API into POSIX and told people to "do it this way." POSIX is perhaps the most successful OS API in history. Sometimes, a nice standard is just better than some additional freedom (especially when that freedom is for hardware developers, who aren't highest on my ethics list).
      • >>Is anyone complaining?

        What's interesting is your nick. See, the Be people wrote their OS precisely to dispose of old garbage like the Unix API.

        >>POSIX is perhaps the most successful OS API in history.

        Notice that most OS's are not Posix compliant, not Linux, not Windows, not Mac, not Be, not Atheos, not Hurd. Full Posix compliance is hard to find.
        • Umm, the UNIX API isn't garbage, the garbage is some of the stuff that has evolved around it.

          As for POSIX compliance, Windows 2000 (Cygwin), Linux, BeOS, AtheOS, Hurd, and MacOS X will run 99% of all (non-GUI) POSIX software out there. I'd say that's pretty damn successful.
      • It is true what you say, but it's not a proper analogy in this case, as DX is a closed standard owned by a monopoly notorious for using standards in ways very destructive to the community.
  • by Atrophis (103390)
    OpenGL is as good as, if not better than, Direct3D.
    Think about it: the OpenGL spec has not changed in how long, and what kind of games are being produced with it? ... Quake 3, Doom 3, Tribes 2.
    Basically all the best. I'm not a graphics programmer, but I think it's safe to say something is good when the best choose it.
    • Look at that list... (Score:3, Informative)

      by lowe0 (136140)
      2 of those are Carmack games. He loves GL (can't remember what pissed him off about D3D, maybe he'd like to tell us?)

      Tribes2 is multi-API. So are some other biggies (Unreal Tournament comes to mind.)
      • can't remember what pissed him off about D3D

        I think he started favoring OpenGL in the days of Quake 1, about the time of the appearance of the Voodoo 1. To support that card (and future accelerators, of course), he attempted to port the Quake software renderer to both OpenGL and DirectX.

        He succeeded with OpenGL in a single weekend. With DirectX, however, the API at the time (DirectX 1.0? 2.0?) was crap, or it was poorly documented. So he gave up on it.

        I don't really know why he still doesn't use DirectX. Maybe he's just being consistent :)
        • He succeeded with OpenGL in a single weekend. With DirectX, however, the API at the time (DirectX 1.0? 2.0?) was crap, or it was poorly documented. So he gave up on it.

          3.0, as far as I know... the original story can be found from many places (here, for instance [vcnet.com] - look at the appendix in the end).

    • Tribes 2 is a bomb; even the company that made it thinks it stinks. Quake 3 is nice, but I think UT on D3D is just as nice. Doom 3?? Is it out or just being developed?
  • http://www.berlin-consortium.org/

    OpenGL, CORBA and Unicode display server ....

    We'll all be writing 3D super-lifelike games in Python in a while ...
  • Come on! (Score:3, Redundant)

    by SpanishInquisition (127269) on Wednesday August 15, 2001 @04:42PM (#2140443) Homepage Journal
  • drooool... (Score:2, Informative)

    DX8 was nice and feature rich to be certain, but that still didn't stop companies like NVidia from putting extensions into OpenGL to accomplish the same things..

    NVidia OpenGL bad-ass extensions [nvidia.com]

    NVidia DX8 SDK [nvidia.com]

    Both contain very similar stuff, you'll find, I think, and I've always found OpenGL to be a better interface anyway. DX8 is night-and-day better than DX7 or before, but still carries a bit of the bloat around the middle that DirectX is famous for...

    • Extensions vs Core (Score:5, Insightful)

      by throx (42621) on Wednesday August 15, 2001 @05:39PM (#2157658) Homepage
      The problem with extensions is that at least with DX8 you can write a pixel shader program once and expect it to work on any cards that support that version of the pixel shader (1.0 if you want to be conservative).

      If you go with OpenGL you have to write your program for each different vendor extension that comes out. Honestly, what are the chances of ATI or PowerVR ever supporting NV_texture_shader or NV_texture_shader2?

      One of the main aims of DirectX was to avoid the situation in the days of games under DOS, where a game developer would have to write different code for each target video card. Through the use of vendor extensions, OpenGL does no better than DOS did: it requires the developer to figure out exactly which cards he is going to support and write to those extensions individually, and it sacrifices future compatibility if the next generation of cards supports different extensions.

      Writing to DirectX gives some degree of future-proofing your application as forward compatibility of the core API is preserved through revisions of DirectX. Sure this may carry a bit of "bloat around the middle" but that's the price you pay for compatibility.

      Of course, if you aren't writing for Windows you're stuck with extensions.
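The per-card code paths described above usually collapse into a preference-ordered fallback; a hedged sketch (the extension names are real, but the path labels are placeholders, not real renderer functions):

```python
def pick_shader_path(supported):
    """Return the first code path whose required extension the driver
    advertises. The extension names are real; the path labels are
    placeholders, not real renderer functions."""
    preference = [
        ("GL_NV_texture_shader",   "nvidia_path"),
        ("GL_ATI_fragment_shader", "ati_path"),
        (None,                     "fixed_function_fallback"),
    ]
    for ext, path in preference:
        if ext is None or ext in supported:
            return path

print(pick_shader_path({"GL_NV_texture_shader"}))  # nvidia_path
print(pick_shader_path(set()))                     # fixed_function_fallback
```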
      • by .pentai. (37595)
        That's what ARB extensions are for.
        Standardized extensions which aren't necessarily part of the spec, but which work the same across implementations. I don't think it'll be too long before we see some sort of standardized shader extension. But then, if you have to write microcode for the vertex shaders etc., don't you have to redo that anyway unless the cards are binary compatible in their shader processors? I don't know about you, but I wouldn't trust DirectX to take my optimized per-vertex code and translate it to a different shader language set.

        ...but then what do I know :P
      • by marm (144733) on Wednesday August 15, 2001 @08:18PM (#2139846)

        If you go with OpenGL you have to write your program for each different vendor extension that comes out. Honestly, what are the chances of ATI or PowerVR ever supporting NV_texture_shader or NV_texture_shader2?

        I'd put the chances quite high if it's a decent spec. Perhaps it might not be called NV_texture_shader in a year's time, it'll be ARB_texture_shader, and as an ARB-mandated extension will end up being supported by every sane driver with the required hardware support. You can bet that the NVidia drivers will still support the old NV_texture_shader as well though.

        This is the way the OpenGL spec grows. Manufacturers are free to go ahead and implement whatever features they'd like or need in OpenGL, and they implement them as vendor-specific extensions. If someone else needs that functionality for their driver, well, before you know it the vendor-specific extension will become ARB-mandated, and probably pored over and refined a little by all the OpenGL board in the process - a board which consists of all the major 3D hardware and software manufacturers. Shortly after, most drivers will support that. Eventually the ARB extensions will probably be integrated into the next OpenGL base spec, as just happened with OpenGL 1.3.

        So, there's no one single vendor controlling the spec. 3D vendors can be as creative as they want. Only if a feature becomes used on several different 3D architectures does it become a standard. Your code will continue to run fine on architectures where you used the vendor-specific extensions, as the vendor will almost certainly keep the original functionality around indefinitely as well as supporting the ARB-mandated version of it. If you want, you can go back a little later and use the ARB extension instead in your code, and the functionality becomes available to everyone.

        By using DX8 instead of OpenGL you know that effects designed for the NVidia pixel shader will magically just work on the next-generation Radeons. At the same time, you're handing over control of the whole API to Microsoft, which does not make 3D chipsets, and you're stuck with their idea of how the pixel shader ought to work, as opposed to an API for it designed by the company that makes the chipsets, and then later (if it's successful), reviewed and revised by everyone important in the industry. I won't even start on the cross-platform issues.

        Your choice.

        • Thanks for the great reply.

          You are right. Microsoft isn't a hardware vendor. They are a software vendor which is simply the other side of the API. There is no good argument that an API should be specified by either side, so long as it is done in consultation with the other.

          I agree that using DX is trusting Microsoft to do the "right thing" by hardware companies, but I'd also argue that it is well within their interests to do just that. If they invent a spec that just sucks, then they are only harming their own API's acceptance by the game industry as a whole, hardware vendors and ISVs alike. Microsoft has been very fast to incorporate the latest hardware advances in DX and to work closely with the hardware vendors to ensure they converge on an interface that is manageable and uniform.

          Remember DX8 has been available to ISVs for over a year now, even long before the nVidia specs on the GF3 were available. DX certainly gives you the time advantage over OpenGL. Waiting for the ARB spec to come out isn't the best solution for a game designer who wants to get their game using the latest hardware as soon as possible. A game developer who wrote to the DX8 spec could be sure their game will have life on the top level cards produced by all hardware manufacturers simply through MS's weight in the marketplace.

          I'll happily grant that the OpenGL extensions for the GF3 are going to be much more closely aligned to the hardware than DX8 pixel shaders are. I'd expect that to be the case seeing they are vendor designed extensions for their own chipset.

          What it seems to come down to, if you want the latest and greatest hardware support in your software (assuming you are a Windows-only designer), is this: either support the latest DX and trust Microsoft to have the weight to pull the hardware designers into line (a pretty sure bet), or support OpenGL vendor extensions and hope the vendors don't change them, don't implement different ones, and don't settle on a totally different ARB extension - and write different code for each card you plan to support.

          It makes DX very attractive if you are a Windows developer, especially given the fact you are almost certainly using other DX components to handle audio, input and force feedback.
        • By using DX8 instead of OpenGL you know that effects designed for the NVidia pixel shader will magically just work on the next-generation Radeons. At the same time, you're handing over control of the whole API to Microsoft, which does not make 3D chipsets, and you're stuck with their idea of how the pixel shader ought to work, as opposed to an API for it designed by the company that makes the chipsets, and then later (if it's successful), reviewed and revised by everyone important in the industry. I won't even start on the cross-platform issues.

          Actually, it's devised by several companies that make chipsets, with whom Microsoft works closely to ensure that it is (a) desired, and (b) feasible.
            Actually, it's [the DX8 API] devised by several companies that make chipsets, with whom Microsoft works closely to ensure that it is (a) desired, and (b) feasible.

            So how do you explain the fact that the DX8 pixel shading API doesn't support everything the GeForce3 does? Surely if Microsoft had really worked that closely with NVidia there would not be this difference?

            Or perhaps Microsoft dumbed down the API to only support what it thought would be common features to all procedural pixel shaders?

            Either way, it sure doesn't sound like NVidia designed it.

            Remember, no matter who is involved with Direct3D, in the end, the only entity actually controlling it is Microsoft. They have the last word on what goes in and what stays out. They're the people who write and distribute the code that forms the DX8 APIs. If they don't like it, you don't get to use it.

            With OpenGL, the people with the real power are the developers of end-user software. They are the ones who decide which vendor extensions to take or leave, not Microsoft. Further down the line, once an extension becomes popular... well, every OpenGL ARB member has the same rights as every other in choosing what becomes part of the standard spec.

            I thought most of us had decided that governance by consensus amongst equals was superior to dictatorship?

  • by evilpaul13 (181626)
    It still (after what, about three years?) has yet to make it into Windows NT/2000. I don't think MS is going to take this opportunity to update their ICD with all those professional/CAD-like plane extensions that are rumored to be incorporated into DirectX 9.
    OpenGL can be used over X, and displays appropriately for the hardware you have at the receiving end. This is an important factor with any professional 3D software - not just rendering stuff, but CAD as well. If you want to run something on a big box or collaborate with others, you'll want to be able to do things remotely. I was running fairly involved stuff on an SGI PowerChallenge and displaying it on a Pentium 75 box with a video card that could have been a lot better.

    What would be great would be a team-based 3D game under X that gives you little windows showing the points of view of the others on your side, just like the views from the marines' cameras in "Aliens." With OpenGL it wouldn't be too much of a hassle to export the views from each machine and re-scale them.
    A very important feature would be for the screens to go to static as each team member goes down :)
    • Hate to rain on the OpenGL parade, but this has not worked with any DRI version of XFree86 I have seen in quite awhile. This is from a Linux box to an Irix server, vice-versa, and between two Linux boxes. I would think it is quite technically possible but it does not appear there is enough interest and software rot has set in.

      Also, GLX (or whatever Irix called it when you went between machines) was never very fast. People would always complain to me that their program was slow, and I would discover that they had accidentally rsh'ed into another machine. In some ways it would have been preferable if it didn't work at all, so people would know immediately not to do that.

      It is too bad but I would have to say this is not a selling point of OpenGL anymore.

  • More realistic porn action in our favorite 3d shooters :D
  • by Anonymous Coward
    Developers now have a card-vendor-neutral way to access programmable shaders (pixel and vertex shaders) from DX8. But does OpenGL 1.3 have anything comparable, or do we have to resort to NVidia or ATI extensions? If that is the case, OpenGL will be hard hit unless a standard vendor-neutral extension is added soon.

    Well, it wasn't listed in the "core features" in the press release, so I doubt it.

  • by Proud Geek (260376) on Wednesday August 15, 2001 @04:41PM (#2143560) Homepage Journal
    Thanks for the info [slashdot.org] . That's very informative. Please do tell, though, what's the difference between a "spec" and a "specification" that makes it worth repeating?
  • by room101 (236520) on Wednesday August 15, 2001 @04:48PM (#2143683) Homepage
    I have said it before (yes, an OpenGL troll), and I'll say it again: OpenGL can do anything D3D 8.x does. It just does it in a different way.

    OpenGL uses extensions, so you don't have to rev the version number to add functionality; you only have to have supporting drivers (and/or hardware).

    That is why OGL hasn't been rev'ed in so long: it didn't need it, so it could provide a stable API for developers.

    It is just cleaner to have this new stuff "built-in", so they do it every now and then.
    • by MikeTheYak (123496) on Wednesday August 15, 2001 @07:36PM (#2141881)
      Unfortunately, this is a bit like saying, "C++ can do anything D3D 8.X does, given proper libraries and hardware support." The purpose of a standardized API is, well, having a standard. If five different vendors implement vertex shaders, each through a different extension used in a different way, what's a developer to do?
      • by Anonymous Coward
        "Unfortunately, this is a bit like saying, "C++ can do anything D3D 8.X does, given proper libraries and hardware support." The purpose of a standardized API is, well, having a standard."

        DirectX is a great standard for 3D graphics.
        I've spent many a day playing DirectX games on Linux/BSD/Macintosh/BEOS/OS2/QNX/Amiga.
        Oh crap! What am I saying? DIRECTX DOESN'T SUPPORT ANYTHING NON MICROSOFT!!!
        Yes that's a standard alright.
        Standard piece of BS.
    • I've been considering tinkering with OpenGL graphics for awhile, and I have a small question that someone more experienced than I can answer.

      As said here a bunch of times, OpenGL relies on extensions to expand its functionality. AFAIK, both NVIDIA and ATI offer these extensions for their cards (along with a lot of extensions from other developers).

      If both ATI and NVIDIA release OpenGL extensions to support new feature X, is there something that keeps developers from having to implement feature X twice, once per API/card, compared with DirectX where there is one standard way to do it?
      • It should be pointed out that in DirectX, there is one standard way to do it, but if that standard doesn't exist, then it can't be done.

        OpenGL is more flexible in this way. The only problem is that some things need to be migrated from vendor to multi-vendor extensions faster. As soon as both NVidia and ATI have a new feature, it's time for the GL committee to hammer out an initial draft of a multi-vendor extension, I think.
      • by Anonymous Coward
        The usual evolution of a useful extension is as follows:
        Vendor-specific extension (NV_, ATI_, SGI_, etc.)
        Multivendor extension (EXT_)
        ARB extension (ARB_)

        Then it might be let into the core, if it's really that useful and supported by the companies sitting on the ARB. Once an extension reaches EXT_ status, you can pretty much count on it being supported on the chips that matter.

    • I cannot repeat it enough. OGL uses extensions....yeah, _VENDOR_SPECIFIC_ extensions.

      Until the day we get an extension starting with "ARB", it is not a true extension that everyone can use. I only consider standard extensions to be part of OGL. What's the point of using an extension if it only works for one card, and you have to write different code to support another card?

      In this respect, D3D is doing very well - even though some features are supported by only one chip today (e.g. pixel shaders), the interface is neutral. It ensures your code will work, without even a recompile, when a newer card from another company comes out. Can we do that in OpenGL?

      Although I still use OGL, I might not be soon unless *vendor-independent* extensions come out faster.
    OpenGL is simply faaaabulous and quite obviously offers the environment you could only dream about: a 21-inch monitor and the latest NVidia card playing the all-time, positively best 3D game ever conceived.

    Which of course everyone on this board knows, is...

    Homeworld!

    :-)

    You ARE the mothership when you have a 64MB video card and a 21-inch Sony G500 monitor out there in "The Wastelands" in 4-player internet gaming mode on WON.

    The tactical combat and collaboration OpenGL gives the game is spectacular.

    Nothing is more breathtaking than radioing Help! to your teammate while you're outgunned and outnumbered as the two bastards decide to double-team ya. Oh no! 5 Destroyers and 2 Heavy Cruisers?!! Heeeeeeeellllllllllpppp!!!

    ^%#@^&%#&

    As your Mothership burns and the sniveling little idiots radio in, "Yeah, this guy sucks...."

    Only to see them run like scared chickens, duly beheaded, when that moment comes....

    Hyperspace Signature Detected.

    No, it isn't just a frigate...

    Sweet Mother of Pearl, it's the entire fleet!

    Sweet Jesus! 3 Heavy Cruisers and 8 Destroyers coming out of Hyperspace like the cavalry at 1600x1200 in 32-bit OpenGL mode!!!!

    2 minutes later the two assholes' fleets are burning, at high res and in 32-bit color!

    My god, it's beautiful.

    -hack
  • instead (Score:2, Insightful)

    by linuxpng (314861)
    Why not hope GL 1.3 exceeds DX8, to make it more attractive to developers? We need these guys seeing GL as a standard they can count on. It's a really messed-up situation when a proprietary API is deemed a "standard".
