
Khronos Releases OpenGL 4.2 Specification

jrepin tips news that the Khronos Group has announced the release of the OpenGL 4.2 specification. Some of the new functionality includes: "Enabling shaders with atomic counters and load/store/atomic read-modify-write operations to a single level of a texture (These capabilities can be combined, for example, to maintain a counter at each pixel in a buffer object for single-rendering-pass order-independent transparency); Capturing GPU-tessellated geometry and drawing multiple instances of the result of a transform feedback to enable complex objects to be efficiently repositioned and replicated; Modifying an arbitrary subset of a compressed texture, without having to re-download the whole texture to the GPU for significant performance improvements; Packing multiple 8 and 16 bit values into a single 32-bit value for efficient shader processing with significantly reduced memory storage and bandwidth, especially useful when transferring data between shader stages."
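The last item in that list corresponds to new GLSL packing built-ins along the lines of packUnorm4x8/unpackUnorm4x8. As a rough illustration of the bit manipulation involved (a Python sketch of the idea, not the GL implementation, and the exact rounding rules of the real built-ins may differ):

```python
def pack_unorm_4x8(r, g, b, a):
    """Pack four floats in [0, 1] into one 32-bit word, 8 bits each,
    in the spirit of a GLSL packUnorm4x8-style built-in."""
    def to_byte(x):
        # clamp to [0, 1], scale to 0..255, round to nearest
        return int(round(max(0.0, min(1.0, x)) * 255.0))
    # component 0 goes into the least significant byte
    return (to_byte(a) << 24) | (to_byte(b) << 16) | (to_byte(g) << 8) | to_byte(r)

def unpack_unorm_4x8(word):
    """Inverse: recover four floats in [0, 1] from the packed word."""
    return tuple(((word >> shift) & 0xFF) / 255.0 for shift in (0, 8, 16, 24))

packed = pack_unorm_4x8(1.0, 0.0, 0.5, 1.0)
print(hex(packed))  # 0xff8000ff
```

Four bytes instead of four 32-bit floats is where the "significantly reduced memory storage and bandwidth" comes from when shuttling values between shader stages.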
  • And the reason for all of this new stuff is, what exactly? Playing the devil's advocate: I can render tits just fine already, so why do I need this feature bloat as well? From a practical perspective: the old standard is running fine on the games I love, and a new standard will just complicate things and force people to upgrade their hardware, since it looks like some of this stuff would be difficult at best to do in software only and still get good perf. From a commercial perspective: Great, now I have increased devel
    • Seriously? Taking features out of the software layer and placing them on the hardware layer can multiply the speed of operation by an order of magnitude. OpenGL is far behind DirectX in that sense. DirectX is in many ways easier, and faster because of it. OpenGL needs to ditch some of the features it's holding onto for backwards compatibility. Anything older than 7 years should be on the chopping block if it isn't needed.

      • I generally agree that if it's not used it's bloat, but let's face it, we need to worry about the open source software stack as well. A big part of lock-in is video, after all. That is why open source game development is usually so stagnant, with few but always notable exceptions.
      • by Anonymous Coward on Monday August 08, 2011 @05:37PM (#37027770)

        I don't know what delusional planet you are posting from, but here in the Real World OpenGL left DirectX in the dust in both features and performance a long time ago.

        Khronos is absolutely on fire with giving developers what they want as quickly as possible. OpenGL developers have access to the absolute bleeding edge features of new graphics cards that people who are stuck still using DirectX have to wait around for Microsoft to get off their ass and implement.

        It shouldn't be surprising OpenGL has won the API war with Microsoft:

        210 Million OpenGL ES based Android devices a year.

        150 Million OpenGL ES based iOS devices a year.

        Every Linux, Mac, and Windows machine

        The dying PC games market and the last-place Xbox 360 are the only places left in the world still using the dead-end DirectX API.

      • by Ryvar ( 122400 )

        They did that already. As of OpenGL 3.1 the only non-deprecated rendering method is Vertex Buffer Objects. Link [].

        There are a lot of things OpenGL could do to make itself more accessible: better-supported cross-platform utility libraries, three or four shortcut commands that set the various glEnable() states that 95% of new developers actually care about, streamlining the eyebrow-raising pile of mipmap generation options... and the entire process of setting up a vertex buffer object could be MASSIVELY simplified.


        • Microsoft might have been Slashdot's Great Satan for a long time, but they do listen to the sort of developers they're hungry for, and DirectX is one of the better examples of that.

          Well, it used to be, but they really screwed up by not supporting DirectX 10 on XP. If you use DirectX 10, you are limited to the operating systems with around 40% of the market. If you use DirectX 9, you get another 50%, but you're limited to old features. On the other hand, Intel, nVidia and AMD all support OpenGL - with all of the latest shader functionality exposed, either as part of the core standard or via extensions - on XP, Vista, and Windows 7. Oh, and you also get another 5-10% from OS X, if y

    • by Ryvar ( 122400 )

      I'm a newbie at this stuff, but here goes:

      "single-rendering-pass order-independent transparency" - let's say I have three translucent objects at roughly the same depth, with parts of one in front of and behind parts of the others (and maybe the same is true for objects B and C as well). Figuring out the correct draw order is absolute fucking murder, and there still isn't a generalized approach for anybody but the most advanced of the most advanced (like Dual depth peeling [] or making convex hulls out of all
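To illustrate why "figuring out the correct draw order" is such a pain: the classic workaround is the painter's algorithm, sorting translucent objects back-to-front by depth every frame and hoping nothing interpenetrates. A toy sketch (hypothetical object list, nothing like real engine code):

```python
# Painter's algorithm: draw translucent objects back-to-front.
# This only works while a single depth per object is meaningful; once
# objects interpenetrate (part of A in front of AND behind B), NO
# per-object ordering is correct -- which is exactly the problem that
# single-pass order-independent transparency addresses.
objects = [
    {"name": "A", "depth": 5.0},   # depth = distance from camera
    {"name": "B", "depth": 12.0},
    {"name": "C", "depth": 8.5},
]

# farthest objects must be drawn first so nearer ones blend over them
draw_order = sorted(objects, key=lambda o: o["depth"], reverse=True)
print([o["name"] for o in draw_order])  # ['B', 'C', 'A']
```

With OpenGL 4.2's per-pixel atomic counters and image load/store, the sorting can instead happen per fragment on the GPU, so no CPU-side ordering of whole objects is needed.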

      • This was the sort of reply I wanted, asked for, but never got before your post (while being trolled and attacked for asking). Thank you.
        • In that case, I'd suggest phrasing the question in a less inflammatory way next time :) You did not come across as a nice guy who would like to know the concrete benefits and drawbacks.

      • Figuring out the correct draw order is absolute fucking murder, and there still isn't a generalized approach for anybody but the most advanced of the most advanced (like Dual depth peeling [] or making convex hulls out of all translucent geo in the scene).

        Is this one of those things you would get practically automatically with ray-tracing? It seems to me that a z-buffer just isn't capable of adequately dealing with this kind of situation unless you actually want to sort the objects by depth for ev

      • Just a note on OIT, with regards to DirectX: since the fixed-function pipeline was obsoleted in favor of the programmable, shader-based approach, API-implemented OIT has been orphaned as something of a strange hybrid between a crutch for people just getting into advanced topics and a luxury item.

        DirectX is primarily focused on high-end games, and nowadays most games (it would seem) use some variation of Deferred Lighting. The type of deferred lighting you use would determine which, of the many, sorting appr

  • Soviet Russia no longer exists. Were you expecting something else?

    In Soviet Russia, something else expects you!
  • by BlueParrot ( 965239 ) on Monday August 08, 2011 @05:52PM (#37027904)

    Perhaps somebody in the know can enlighten me about this.

    I see many fairly advanced features and functions in both the DX and OpenGL APIs, but I was under the impression that modern graphics cards are basically designed to do a few fairly primitive operations very well and in parallel. So basically, how much of these APIs actually deals with interfacing with the graphics card and its hardware-accelerated features, and how much of it is more along the lines of just a standard library that contains frequently used graphics algorithms?

    Maybe my view of how programming is done these days is a bit naive, but I've always sort of felt there was a difference between the APIs that are there to let you use the hardware without mucking around with terribly low-level and platform-dependent stuff like interrupts and so on, and on the other hand standard libraries that are pretty much things where the code would be more or less the same on most platforms, but you just don't want to write it all over again whenever you make a new program (things like the container classes in C++).

    My idea of what OpenGL and DirectX did was to let you access the features of the video card without having to worry about all the little differences between one card and another. So you could send the card a bunch of textures or something without having to rewrite the code for every card you wanted to run on.

    Am I missing a lot here? Do the OpenGL and DirectX APIs also deal with a load of stuff that is just generally handy to have around when writing graphics programs?

    • by TheRaven64 ( 641858 ) on Monday August 08, 2011 @06:17PM (#37028100) Journal

      (Disclaimer: Simplifications follow.)

      Originally, there was OpenGL, which provided the model of a graphics pipeline as a set of stages where different things (depth culling, occlusion, texturing, lighting) happened in a specific order, with some configurable bits. There was a reference implementation that implemented the entire pipeline in software. Graphics card vendors would take this and replace some parts with hardware. For example, the 3dfx Voodoo card did texturing in hardware, which sped things up a lot. The GeForce added transform and lighting, and the Radeon added clipping.

      Gradually, the blocks in this pipeline stopped being fixed function units and became programmable. Initially, the texturing unit was programmable, so you could add effects by running small programs in the texturing phase (pixel shaders). Then the same thing happened for vertex calculations, and finally you got geometry shaders too.

      Then the card manufacturers noticed that each stage in the pipeline was running quite similar programs. They introduced a unified shader model, and now cards just run a sequence of shader programs on the same execution units.

      As to how specialised they are... it's debatable. A modern GPU is a Turing-complete processor. It can implement any algorithm. Some things, however, are very fast: for example, copying data between bits of memory in specific patterns that are common for graphics.

      Modern graphics APIs are split into two parts. The shader language (GLSL or HLSL) is used to write the small programs that run on the graphics card and implement the various stages of the pipeline. The rest is responsible for things like passing data to the card (e.g. textures, geometry), setting up output buffers, and scheduling the shaders to run.
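The split described above can be caricatured in a few lines: the "API" side just feeds data through a configurable sequence of small per-element programs, the "shaders". A toy model (purely illustrative; the stage functions and pipeline here are invented for the example and bear no resemblance to real driver code):

```python
# Toy model of a programmable pipeline: each stage is a small program
# applied independently to every element, which is why GPUs can run
# them massively in parallel on unified execution units.
def vertex_shader(v):
    # trivial "transform": translate every vertex by +1 on x
    x, y = v
    return (x + 1.0, y)

def fragment_shader(v):
    # trivial "shading": brightness derived from y, clamped to [0, 1]
    _, y = v
    return min(1.0, max(0.0, y))

def run_pipeline(vertices, stages):
    # the "API" part: schedule the stages over the data in order
    data = vertices
    for stage in stages:
        data = [stage(d) for d in data]
    return data

colors = run_pipeline([(0.0, 0.5), (1.0, 2.0)], [vertex_shader, fragment_shader])
print(colors)  # [0.5, 1.0]
```

Under the unified shader model, the same generic loop (here, `run_pipeline`) runs whichever stage programs you hand it; the fixed-function era would correspond to hard-coding the stage bodies.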

      • The GeForce added transform and lighting


        Oh, how excited I was to get a GeForce 256 card and talk about T&L in hardware in my home PC... Ironically I worked with an SGI RE2 about two feet away from me at the time and couldn't get as excited about it (have you ever worked with IRIX? LOL.)


    • So basically, how much of these APIs actually deal with interfacing the graphics card and it's hardware accelerated features

      Most of these APIs. OpenGL and D3D are basically meant to be thin, portable layers encapsulating the capabilities of (some generation of) the graphics hardware.

      and how much of it is more along the lines of just a standard library that contains frequently used graphics algorithms?

      Not much of it. You can use the GLU (GL Utilities) library for some software utility functions (basically just convenience or comfort stuff, no thick API layers). Even for very basic stuff like matrix multiplication you have to use third-party libraries (if you need to do it on the CPU, rather than in a shader on the GPU). The API implementations ma
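As an example of the kind of CPU-side utility code the API leaves to you (or to a third-party math library): a plain 4x4 matrix multiply, sketched here with nothing but the standard library:

```python
def mat4_mul(a, b):
    """Multiply two 4x4 matrices given as nested lists (row-major)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(tx):
    """Row-major 4x4 translation matrix moving points by tx along x."""
    m = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    m[0][3] = tx
    return m

# composing two translations adds their offsets: 2 + 3 = 5 on x
combined = mat4_mul(translate(2.0), translate(3.0))
print(combined[0][3])  # 5.0
```

In a shader, the same multiply would be a single `mat4 * mat4` expression executed on the GPU; on the CPU side, OpenGL itself gives you nothing for it.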

  • But DirectX is no better.

    The future lies in directly programming the hardware with a classical programming language, building your own renderers in software, hopefully not limited by outdated polygon technology.

    • Wait, I really hope you are not saying VOXELS are ready for prime time over the standard polygon model?
      • by Ryvar ( 122400 )

        Either it's a troll, or loufoque is a bit detached from reality, but this does bring up an interesting point: a lot of what people are looking into these days in terms of rendering is voxels drawn using polygons. Minecraft? Basically those tiles are voxels being rendered as uniform convex hulls, which lends itself to some amazing efficiency.

        This [] is even more interesting from a technical perspective - stretching isosurfaces across voxel terrain to create a truly malleable world.


      Wait, you're serious?


      NO games programmer wants to get involved in bare hardware coding. That would require so much redundant code to be written, and testing would be an absolute nightmare. Even the vaunted Intel Larrabee design was going to have drivers and code so that it would appear to games as a regular OpenGL/DirectX card. You could write your own code, sure, but it would default to acting just like any other card (as far as the software can tell).

      • Ever heard of engines and other middleware?

        • Even engine developers don't want to do that unless necessary for some really, really cool feature (realtime ray-tracing, maybe). It's just far too much work.

  • Who owns the OpenGL spec? Previously, it was SGI, but once they threw it open, isn't it up to a SIG, or something like that? Or is Khronos the sole owner now? If not, how do they release any OpenGL spec?
  • We are already at 4.2...
    Wow, GL moves fast, but who cares, when you need to force your users to update their drivers just to have the bare minimum of "new" features.

    Seriously, what's the rate of adoption? Perhaps it was never labeled properly, but don't all default installs of GL support only something like 1.x? What drivers and cards provide support for GL 2.0+?
    And most importantly why should I bother developing in a newer version of GL, if I don't know if the user will be able to update to the right version to run a game
