
Khronos Releases OpenGL 4.2 Specification

jrepin tips news that the Khronos Group has announced the release of the OpenGL 4.2 specification. Some of the new functionality includes: "Enabling shaders with atomic counters and load/store/atomic read-modify-write operations to a single level of a texture (These capabilities can be combined, for example, to maintain a counter at each pixel in a buffer object for single-rendering-pass order-independent transparency); Capturing GPU-tessellated geometry and drawing multiple instances of the result of a transform feedback to enable complex objects to be efficiently repositioned and replicated; Modifying an arbitrary subset of a compressed texture, without having to re-download the whole texture to the GPU for significant performance improvements; Packing multiple 8 and 16 bit values into a single 32-bit value for efficient shader processing with significantly reduced memory storage and bandwidth, especially useful when transferring data between shader stages."
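
The shader-side additions can be sketched in a few lines of GLSL 4.20. The fragment shader below is a minimal, hypothetical illustration (the binding points and variable names are invented for the example) of how an atomic counter and image load/store might be combined to build per-pixel fragment lists for the single-pass order-independent transparency mentioned above:

    #version 420 core

    // Hypothetical bindings chosen for this sketch.
    layout (binding = 0, offset = 0) uniform atomic_uint nextNode;  // global allocator for fragment slots
    layout (binding = 0, r32ui) coherent uniform uimage2D headPtr;  // per-pixel head of a fragment list

    in vec4 fragColor;   // colour computed by earlier stages (assumed input)
    out vec4 outColor;

    void main()
    {
        // Atomic counters (new in 4.2) hand out a unique slot per fragment.
        uint node = atomicCounterIncrement(nextNode);

        // Image load/store (also new in 4.2) lets the shader atomically publish
        // this fragment as the new list head for its pixel.
        uint previousHead = imageAtomicExchange(headPtr, ivec2(gl_FragCoord.xy), node);

        // A full implementation would record (fragColor, depth, previousHead) in node
        // storage here and resolve the lists in a later full-screen pass.
        outColor = fragColor;
    }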

Comments:
  • Re:Are they nuts? (Score:3, Informative)

    by wazzzup ( 172351 ) <astromacNO@SPAMfastmail.fm> on Monday August 08, 2011 @05:44PM (#37027250)

    PARENT LINK IS GOATSE

  • by TheRaven64 ( 641858 ) on Monday August 08, 2011 @07:17PM (#37028100) Journal

    (Disclaimer: Simplifications follow.)

    Originally, there was OpenGL, which modelled the graphics pipeline as a set of stages where different things (depth culling, occlusion, texturing, lighting) happened in a specific order, with some configurable bits. There was a reference implementation that implemented the entire pipeline in software. Graphics card vendors would take this and replace some parts with hardware. For example, the 3dfx Voodoo card did texturing in hardware, which sped things up a lot. The GeForce added transform and lighting, and the Radeon added clipping.

    Gradually, the blocks in this pipeline stopped being fixed function units and became programmable. Initially, the texturing unit was programmable, so you could add effects by running small programs in the texturing phase (pixel shaders). Then the same thing happened for vertex calculations, and finally you got geometry shaders too.
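
    As a rough illustration (names invented for the example), a "small program" for the texturing phase is just a short GLSL fragment shader, for instance one that samples a texture and then applies a per-pixel effect the fixed-function hardware could not express:

        #version 420 core

        uniform sampler2D diffuseTex;   // texture bound by the host application (assumed)

        in vec2 uv;                     // texture coordinates from the vertex stage
        out vec4 color;

        void main()
        {
            // Sample the texture, then apply a simple per-pixel effect (greyscale).
            vec4 texel = texture(diffuseTex, uv);
            float grey = dot(texel.rgb, vec3(0.299, 0.587, 0.114));
            color = vec4(vec3(grey), texel.a);
        }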

    Then the card manufacturers noticed that each stage in the pipeline was running quite similar programs. They introduced a unified shader model, and now cards just run a sequence of shader programs on the same execution units.

    As to how specialised they are... it's debatable. A modern GPU is a Turing-complete processor: it can implement any algorithm. Some things, however, are very fast, for example copying data between regions of memory in the specific access patterns that are common in graphics.

    Modern graphics APIs are split into two parts. The shader language (GLSL or HLSL) is used to write the small programs that run on the graphics card and implement the various stages of the pipeline. The rest is responsible for things like passing data to the card (e.g. textures, geometry), setting up output buffers, and scheduling the shaders to run.
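
    To make the split concrete, here is a minimal vertex shader sketch for the first programmable stage (the attribute locations, uniform name, and output are invented for the example); the other half of the API, the host-side calls that upload buffers, compile and link the shaders, and issue the draw calls, is not shown:

        #version 420 core

        // Vertex attributes supplied by the host API from a vertex buffer (assumed layout).
        layout (location = 0) in vec3 position;
        layout (location = 1) in vec2 texCoord;

        // A combined model-view-projection matrix set by the host application.
        uniform mat4 mvp;

        // Data passed on to the next stage (e.g. the fragment shader above).
        out vec2 uv;

        void main()
        {
            uv = texCoord;
            gl_Position = mvp * vec4(position, 1.0);
        }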

"The one charm of marriage is that it makes a life of deception a neccessity." - Oscar Wilde
