OpenGL 1.5 130
Yogijalla writes "SGI and OpenGL ARB just announced
the OpenGL 1.5 specification, introducing support for a new OGL Shading Language. Also, check out the new Java bindings to OpenGL.
OGL 1.5 is a step towards the OGL 2.0, already suggested 2.0 by 3DLabs." Also worth pointing out that OpenML SDK has been released as well.
Gotta love (Score:5, Funny)
OGL 1.5 is a step towards the OGL 2.0, already suggested 2.0 by 3DLabs.
The article confuses me. (Score:5, Interesting)
Inquiring minds want to know!
Re:The article confuses me. (Score:4, Informative)
Re:The article confuses me. (Score:2)
I didn't know Java3D was on hold -- I've been trying to write a program with it for the past few months too.
I wonder what this is going to be like. I'm not sure if I would prefer a thin wrapper or a totally new API.
Re:The article confuses me. (Score:2)
Re:The article confuses me. (Score:5, Informative)
We're involved in various efforts to replicate and replace Java3D with another scene graph. If you need something immediate, take a look at the Xith3D project from Dave Yazel and the MagiCosm guys. Alternatively, if you can wait a couple of months, we're in the process of ripping our OpenGL scene graph out of the core of Xj3D [xj3d.org] to turn it into a separate project. The difference between the two is that Xith3D is focused on games (it has primitive objects for particle systems, for example) whereas our work is focused on CAVEs, powerwalls, Linux clusters, etc.
Re:The article confuses me. (Score:4, Informative)
Re:The article confuses me. (Score:2, Informative)
When everything has been cancelled, and their developers are now not answering emails on the java3d list and have been assigned to other parts of Sun, it is dead. Also, we already have been told that we'll have access to parts o
Re:The article confuses me. (Score:3, Interesting)
It'll be interesting what happens with the patents, particularly the
This might help... (Score:2)
2.0??? That's nothing!... (Score:1, Redundant)
comparison operator? (Score:2)
I'll assume slashdot ate your "less than" operator. Hey slashdot, can we please use angle brackets that aren't HTML tags?
Re:comparison operator? (Score:1)
You mean like this:
< = <?
Definition (Score:2)
A suggestion 2.0 is a suggestion you can't refuse.
Somewhat old, it's been there since Monday... (Score:5, Interesting)
I wonder when these will become standard (not just as an ARB extension but as an ARB required feature). Hopefully in 2.0? It will save a lot of calls, at the very least--just check the version number of the GL implementation, no more searching extension strings...
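The extension-string search the parent is happy to retire is also easy to get wrong: a naive substring search can match one extension's name inside another's. A minimal sketch of a whole-token check in plain C (the list here is hard-coded for illustration; in a real program it would come from glGetString(GL_EXTENSIONS)):

```c
#include <string.h>

/* Check for a whole-token match in a space-separated extension string.
 * A plain strstr() would wrongly find "GL_ARB_shadow" inside
 * "GL_ARB_shadow_ambient", so require the match to be bounded by
 * spaces or string ends. */
static int has_extension(const char *list, const char *name)
{
    size_t len = strlen(name);
    const char *p = list;
    while ((p = strstr(p, name)) != NULL) {
        int starts = (p == list) || (p[-1] == ' ');
        int ends = (p[len] == '\0') || (p[len] == ' ');
        if (starts && ends)
            return 1;
        p += len; /* keep scanning past this partial match */
    }
    return 0;
}
```

Once a feature is core rather than an extension, all of this collapses into parsing the major/minor numbers out of glGetString(GL_VERSION) once at startup.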
Re:Somewhat old, it's been there since Monday... (Score:5, Informative)
I wonder when these will become standard (not just as an ARB extension but as an ARB required feature). Hopefully in 2.0? It will save a lot of calls, at the very least--just check the version number of the GL implementation, no more searching extension strings...
You can see at
http://www.opengl.org/developers/about/arb/note
that the reason it was called 1.5 and not 2.0 is that the shading language didn't make it into the core specs, and that they plan to promote it to the core specs (and release 2.0) soon, possibly even at the next ARB meeting in September or the one after.
Here is the quote
Re:Somewhat old, it's been there since Monday... (Score:2)
I think it was a marketing error. They should have called it OpenGL 2.0 Full Speed and reserve OpenGL 2.0 Hi-Speed for the future core spec.
Sorry, couldn't resist a joke referring to a recent turmoil [slashdot.org] in USB packaging names, which have been clarified [slashdot.org] later.
Re:Somewhat old, it's been there since Monday... (Score:2)
Then why didn't you submit the story on Monday?
Re:Somewhat old, it's been there since Monday... (Score:2)
Woohoo (Score:5, Interesting)
That's _thee_ key feature Apple needed to do the fully OpenGL desktop, along with a pile of more elegant error handling of course. Glad to see it's now standard.
It also makes the modeling and artist guys much happier. Do you have any idea how hard it is to make everything out of squares?
2.0 should put the last of what we need for the next 5 years into OpenGL, then maybe people can start writing more portable games again.
Re:Woohoo (Score:5, Informative)
But there have always been tools to circumvent the power-of-two limitation. You can always use only part of a texture on a primitive (triangle, quad, etc.). There are tools to tile non-power-of-two textures on a power-of-two texture to minimize memory usage.
At least in game dev, it didn't really prove to be useful. (Except for those artist whiners who insist that they can use any size image that comes out of Photoshop. *laugh* just kidding guys... I hear ye)
I just don't think Apple would use the late introduction of non-power-of-two textures into OpenGL as a plausible excuse for not making the desktop fully OpenGL already. Besides, they can always introduce Apple-specific extensions--why didn't they do that already? Methinks they're just lazy.
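For reference, the padding workaround the grandparent describes boils down to one small computation, plus an eye on the wasted memory. A toy sketch in plain C, not any particular engine's tiler:

```c
/* Round a texture dimension up to the next power of two -- what you
 * had to do before non-power-of-two textures: pad (or tile) the image
 * into a power-of-two texture and map only the used sub-rectangle.
 * E.g. a 640x480 image needs a 1024x512 texture; without tiling the
 * padding wastes roughly 41% of that memory. */
static unsigned next_pow2(unsigned v)
{
    unsigned p = 1;
    while (p < v)
        p <<= 1;
    return p;
}
```

A good tiler packs several odd-sized images into one power-of-two texture, which is why the waste is small in practice.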
Re:Woohoo (Score:3, Informative)
Re:Woohoo (Score:1)
Re:Woohoo (Score:4, Informative)
As you point out, this is primarily useful with video textures, but any game that does animated textures can take advantage of it.
Re:Woohoo (Score:4, Informative)
And Apple has a loooong list of OpenGL extensions. They implemented this all some time ago now. You should see some of the realtime video editing that they can do.
Nothing gets into OpenGL that wasn't an extension first - that way it's all developed and debugged before we have to deal with it.
Re:Woohoo (Score:3, Informative)
You're right about the hack-nature of what I proposed though. In practice, however, you only lose very little texture memory, if you have a good tiler. Besides, I believe that non-power-of-two textures will suffer some memory
Re:Woohoo (Score:1)
>> OpenGL desktop,
>
> But there have always been tools to circumvent the
> power-of-two limitation. You can always use only part
> of a texture on a primitive (triangle, quad, etc.).
The gotcha here is that Apple needs to take a window backing store, a buffer, and apply that as a texture to the quad. Doing this by the old slice-n-dice into assorted power-of-two tiles is a pain, and an intermediate copy that just slows things do
Re:Woohoo (Score:2)
Apparently it is very difficult for developers to see wher
Re:Woohoo (Score:4, Interesting)
That's _thee_ key feature Apple needed to do the fully OpenGL desktop, along with a pile of more elegant error handling of course. Glad to see it's now standard.
I suspect it's for bind to texture, not anything that can already be done by just using part of the texture. Supposedly nVidia has been waiting for 1.5 before doing bind to texture in UNIX environments (currently only a WGL extension.) For me, on the FX, copy has actually turned out faster than bind, but that is hopefully just a driver limitation. Rectangular textures also have nice coordinates for using them in multi-layer programmable pipeline settings. (I haven't read the specs yet, just extrapolating from the nVidia extension.)
What about the postscript desktop? (Score:2)
I thought Apple's intentions were to give users a postscript desktop. OpenGL is simply a window to unused 3D hardware that could be used in place of an expensive proprietary postscript coprocessor (ie; like those old NExT cubes).
Isn't quartz extreme no more than an openGL wrapper?
Re:What about the postscript desktop? (Score:5, Informative)
Quartz Extreme, Apple's name for the Mac OS 10.2 version of the Quartz Compositor, uses OpenGL to render what you see on the screen. Note that just because it uses OpenGL doesn't mean that it uses any 3D - all it takes advantage of is 2D bitmaps, special effects like shadows, and alpha transparency.
But that's a really big deal! It means that all of the bitmaps representing your windows and other screen objects are in your graphics card's RAM, and moving a window in OS X doesn't require computing any pixels at all on the CPU. (Unfortunately, resizing is slower, because every redraw requires sending a new bitmap, of a different size, to the graphics card.) This also allows them to do the Genie effect, window scale effects, and Exposé super fast without wasting any CPU cycles. Compare that to Windows or X - when you move a window, all of the windows underneath it get repaint events, which can take a while to trigger depending on the application.
Note that the Quartz Compositor is a totally different thing than Quartz, the new 2D graphics library in OS X that is designed to replace QuickDraw.
The Quartz Compositor is what makes it possible for QuickTime movies to keep playing when you minimize them to the dock, transparent overlays to smoothly fade in and out when you hit a volume control key on the keyboard, and 10.3's fast user switching to literally "rotate" the screen in 3D to show the other user's desktop.
Quartz Extreme - i.e. using OpenGL to implement Quartz Compositor - is what makes it fast.
The great thing about Quartz Extreme is that Apple has only begun to explore the possibilities. The fast user switching effect probably only took them a day to code up, because all of the underlying technology was there.
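For the curious, the "alpha transparency" the compositor offloads to the GPU is just the standard blending operator applied per pixel. A toy sketch with floating-point channels in [0,1]; real compositors work on premultiplied 8-bit channels sitting in texture memory:

```c
/* Per-channel "over" blend the compositor effectively asks the GPU
 * to perform for every pixel of a translucent window:
 * result = src*alpha + dst*(1-alpha). */
static float blend_over(float src, float dst, float alpha)
{
    return src * alpha + dst * (1.0f - alpha);
}
```

Doing this once per pixel per window on the CPU is exactly the kind of work that made pre-Quartz-Extreme compositing slow; on the GPU it's free alongside the texture fetch.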
Re:What about the postscript desktop? (Score:5, Informative)
Actually, that's due to the fact that the bits are in your computer's RAM. Quartz double buffers all drawing so that it doesn't flash and looks smooth. Because of this no redrawing has to be done when part of a window is revealed. On the other hand, resizing means that the buffer has to be reallocated and redrawn.
Re:What about the postscript desktop? (Score:1)
Re:What about the postscript desktop? (Score:4, Interesting)
The main problem with XFree86's responsiveness is not whether it uses OpenGL or not (which ultimately makes little difference) but how it interacts with applications and how it pokes the video card. For instance, very few drivers fully accelerate RENDER (which is 2D hardware acceleration for alpha-channel blending and some other things). That means you end up doing very slow framebuffer reads, compositing in software, then uploading. I guess part of the reason for using OpenGL was to work around the reluctance of driver manufacturers to write specialist fully optimized drivers for their hardware.
Not to mention that most apps are very slow at processing Expose events. There has been talk of doing what MacOS does here and having apps directly render client-side into a compressed backing store.
Re:What about the postscript desktop? (Score:2)
Re:What about the postscript desktop? (Score:1)
Re:What about the postscript desktop? (Score:2)
The easy way (Score:2)
OpenGL and DirectX use 0.0-1.0 coordinates so it has no effect on the output (as far as the texture being mapped where it's supposed to be) as long as you don't cro
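A sketch of the point being made: because texture coordinates are normalized to 0.0-1.0 over the whole texture, an image padded into a larger power-of-two texture just uses a sub-range of [0,1] and the padding is simply never sampled. (Assumed helper for illustration, plain C:)

```c
/* Normalized texture coordinates covering the used sub-rectangle of a
 * w x h image stored in the corner of a padded tex_w x tex_h
 * power-of-two texture.  Drawing a quad with u in [0, u_max] and
 * v in [0, v_max] shows exactly the original image. */
static void used_uv(int w, int h, int tex_w, int tex_h,
                    float *u_max, float *v_max)
{
    *u_max = (float)w / (float)tex_w;
    *v_max = (float)h / (float)tex_h;
}
```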
Doom 3 (Score:5, Interesting)
Re:Doom 3 (Score:5, Insightful)
Re:Doom 3 (Score:5, Insightful)
As to whether or not it's largely based on his work, however, that's another story. Honestly, there are tons of people working on the same thing that Carmack is. He's just the most well known, with the biggest profile. The technology behind Doom III, while interesting, really is just a natural, next logical extension of the current state of 3D graphics.
Re:Doom 3 (Score:3, Insightful)
Re:Doom 3 (Score:2)
According to my fuzzy recollection, JC was saying that Microsoft is paying them large amounts of money to hold off releasing until the XBox version is ready. That must be some large pile of money.
I think he also said that id would use the time profitably, to make Doom 3 even better. It's apparently been release-ready for quite some time.
Re:Doom 3 (Score:1)
For shame!
If you can wait til quakecon.... (Score:2)
playable beta version that we'll have for FREE for a long damn time like they always do. Are you old enough to remember when Quake3 Demo was the most popular game on the net? It wasn't even a full game. Nobody paid for it. They will do the exact same thing with Doom3. Rumor has it they are releasing the first multiplayer Doom3 demo next month at quakecon. Considering they have already announced they will be having Doom3 deathmatch at quakecon, this rumor is probably clo
Re:Doom 3 (Score:3, Insightful)
Would like to try the OpenML SDK (Score:4, Informative)
http://www.khronos.org/openml/sdk/linux_requirements.html
I'd really like to try this OpenML SDK, but it seems you are required to enter your phone number - now why is that??
Re:Would like to try the OpenML SDK (Score:4, Informative)
*shrug*
SDL [libsdl.org] also offers crossplatform media functionality (and it beats the pants off of GLUT when working with OpenGL), and definitely doesn't have any such silly requirements.
Or you could just tell the OpenML people your phone number is 867-5309.
Jenny jenny (Score:2, Funny)
Re:Would like to try the OpenML SDK (Score:5, Interesting)
SDL is simply a low-level hardware abstraction layer. It doesn't even have geometric drawing code.
Re:Would like to try the OpenML SDK (Score:1)
But do developers have to give a phone number just to try it??
Re:Would like to try the OpenML SDK (Score:1)
Re:Would like to try the OpenML SDK (Score:4, Informative)
Re:Would like to try the OpenML SDK (Score:2)
Re:Would like to try the OpenML SDK (Score:1)
So: (Score:4, Interesting)
- What still remains before we can say OpenGL is back toward its original goal (you write for one standard instead of having to write for every single little card driver, something kind of ruined by the fact that for many things these days, every card uses a different OpenGL "extension" to do the exact same thing.)
- What still remains that DirectX excels at that OpenGL is lagging behind at
- What of the things in the above two lists will be fixed by OpenGL 2.0, when/if it is adopted.
Re:So: (Score:5, Informative)
> - What still remains before we can say OpenGL is back toward its original goal [...] every card uses a different OpenGL "extension" to do the exact same thing.)
Well, I must say that OpenGL never really strayed from its original goal. OpenGL follows a pseudo-Bazaar philosophy--let vendors push for features they want, and if they get a massive following then it's good enough to put into the standard. I say pseudo-Bazaar because, unlike open source, this process happens somewhat too slowly for it to be competitive with DirectX.
> - What still remains that DirectX excels at that OpenGL is lagging behind at
The only thing that DirectX seems to excel at right now is standardized support for a lot of graphics programming constructs, i.e. its D3DX library (especially those mesh/skinning functions, quaternion arithmetic, and the myriad transformation matrix builders it has by default--can anyone say D3DXMatrixShadow?)
However, we can also say that DirectX contains too many features that won't be used by many (and in fact some of them do get dropped in subsequent API releases). OpenGL, on the other hand, tries to promote a *clean* standard, not a super-bleeding-edge standard that contains a lot of cruft. That is also the reason why OpenGL lacks the utility functions I mentioned in the above paragraph; many developers have a (portable) base of utilities that works well for them, and all they want is a (portable) API to display their graphics, not something like DirectX which coerces you into the Microsoft-only stuff for almost everything (including the math, which the programmer should handle himself).
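For those wondering what a helper like D3DXMatrixShadow actually saves you: it builds the classic planar-projection matrix, dot(p,l)*I - l*p^T, which squashes geometry onto the plane p as seen from a homogeneous light position l. Any portable codebase can carry that in a few lines. A hedged sketch (row-major, column vectors; D3D and OpenGL differ on layout conventions, and this is not D3DX's actual source):

```c
/* Build a 4x4 planar-shadow matrix.  p = (a,b,c,d) describes the
 * plane a*x + b*y + c*z + d = 0; l is the homogeneous light position
 * (w=1 for a point light, w=0 for a directional light).  Transformed
 * points land on the plane after the perspective divide by w. */
static void shadow_matrix(const float p[4], const float l[4],
                          float m[4][4])
{
    float dot = p[0]*l[0] + p[1]*l[1] + p[2]*l[2] + p[3]*l[3];
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            m[i][j] = (i == j ? dot : 0.0f) - l[i] * p[j];
}
```

Sanity check: for the ground plane y=0 and a point light at (0,10,0), the point (1,2,3) should cast its shadow at (1.25, 0, 3.75), which is where the light ray through the point hits the plane.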
> - What of the things in the above two lists will be fixed by OpenGL 2.0, when/if it is adopted.
OpenGL 2 is a draft (and therefore moving) standard, and it will be largely up to the ARB to decide which widely used features are fit for the standard. It doesn't necessarily mean it will beat DirectX in terms of functionality, but it will surely be a cleaner, more general standard that vendors will be happy to adhere to.
What I always wondered (Score:5, Interesting)
What I always wondered is why the OpenGL people don't promote a two-level standard; the low-level is OpenGL as it exists now, the second level of the standard would be optional, and consist of the kinds of things that Direct3D/QuickDraw3D would have offered, higher-level things. The second-level standard would be implemented on *top* of the first-level standard, meaning it would be as portable as the base is and not provide a roadblock to changes in creating new OpenGL versions. Something like Mesa.
Is this an attractive idea, or does the present existence of third-party libraries that sit on top of OpenGL make such an idea irrelevant? Even if so, it seems a "standard" higher-level library for OpenGL could take out one of the big complaints of Direct3D programmers ("OpenGL is too much work!")
Let me know if anything I've said here is wrong; I've followed the Direct3D/OpenGL argument but have personally done nothing more complex than some simple GLUT applications. (And I didn't even get far enough into GLUT to see to what extent it functions as a higher-level 'cover' API for OpenGL.)
Re:What I always wondered (Score:5, Insightful)
Yep, Direct3D does try to bring in the programmer at the higher level, and it does limit the programmer if they're programming anything other than games. That's because Direct3D (and DirectX in general) is meant for games in the first place; other media-intensive apps are somewhat secondary, although they can be done in DirectX, and for that particular application you have DirectShow (which used to be separate from DirectX, FYI, but is now part of it. I don't know why, but Microsoft said so.)
I think Direct3D is more of that two-level standard you're trying to elaborate on. In fact, for a while Direct3D really was defined like that--it used to have a "retained mode" for high-level apps and an "immediate mode" for low-level ones. (They've since unified it into one immediate mode and did away with retained mode altogether.)
The primary users of OpenGL, on the other hand, have been in the field for ages already, which means that they probably have all of the higher-level stuff as part of their intellectual property. Using another vendor's API for what they already have is generally a dumb thing to do (lost productivity due to the fact that most of their apps are written in their old, working libraries already, and rewriting them in D3DX is tedious and error-prone). Besides, OpenGL clearly defined itself as a standard for displaying high-performance graphics, and helping the programmer with his other tasks (representing models, parameterizing the effects he can do in his engine, etc.) isn't really part of that goal.
Re:What I always wondered (Score:1)
Open Source being Open Source, I imagine there are five or ten contenders already floating around - anyone know which are best of breed for BSDish licensing and GPL licensing respectively?
Re:What I always wondered (Score:3, Insightful)
SDL could be considered one, it handles OpenGL windows.
PLIB (.sf.net) is a game library; it contains some features that assist game developers (a GUI, fonts, scene graph, utility fns).
OpenSceneGraph is rather good.
and a host more - use google to find them, or search through SourceForge, or even the OpenGL page has a set of links.
As for the licence - most of these are LGPL, which I think is the most appropriate licence for a library.
Re:What I always wondered (Score:2)
Re:What I always wondered (Score:1)
What I always wondered is why the OpenGL people don't promote a two-level standard; the low-level is OpenGL as it exists now, the second level of the standard would be optional.
That is what GLU is for, and GLUT does some job too.
Re:What I always wondered (Score:4, Insightful)
Re:What I always wondered (Score:2)
That would be silly. Why not give people a choice of what to use on the second level? Especially if it's implemented in terms of the first level, it really doesn't matter what you use.
OGL 2.0... (Score:4, Informative)
- What still remains before we can say OpenGL is back toward its original goal (you write for one standard instead of having to write for every single little card driver, something kind of ruined by the fact that for many things these days, every card uses a different OpenGL "extension" to do the exact same thing.)
- What still remains that DirectX excels at that OpenGL is lagging behind at
I don't think pt.1 has really been lost, unless you are doing really cutting edge stuff you can use OpenGL pretty happily as is. Many scientific applications are actually coded to Performer which works just fine on OpenGL 1.0. I've written lots of stuff, some just a couple years ago, that used plain immediate mode OGL 1.0, with a switch added later on for vertex arrays.
What remains is the vertex and pixel shaders; these will be in 2.0. They are already pretty much supported with the nv FX and I guess the 3DLabs card. I haven't been programming the ATI card, though many have for its speed advantage, but from what I understand it doesn't quite live up to the requirements of 2.0. Also I think 3DLabs is pressing for infinite-length programs; this can be implemented in the driver by simply compiling to multiple passes implicitly, though who knows about the performance. But the nv would handily beat the ATI if you do this because it can natively handle pretty long instruction streams. Unless this is already a driver trick, I dunno.
2.0 will almost certainly wait until ATI is ready on the hardware level at least. If you program to extensions...OpenGL is ahead of Direct X, but this means you are stuck with the vendor if you use their specific stuff, say using fp30 Cg on the FX. I think everyone pretty much does program to extensions and not the standard if they are doing cutting edge stuff, usually with a compile or run-time code switch based on the extensions present.
Re:So: (Score:1)
Agreed.
- What still remains that DirectX excels at that OpenGL is lagging behind at
Totally disagree. D3D9 is currently 'ahead' of OpenGL, with its unified shader system, effect file framework,
Re:So: (Score:2)
You realize, of course, that HLSL and Cg are the exact same language, right? Microsoft helped nVidia develop Cg, and then renamed it to HLSL for the DX9 implementation.
OpenGL + Perl (Score:4, Informative)
Shader language (Score:1)
Indeed (Score:5, Informative)
The OpenML [khronos.org] SDK is an alpha release and the final spec for it won't be out until about this time next year.
However, the Khronos Group also released the OpenGL ES [khronos.org] spec, and there are actually a couple of implementations already available. OpenGL ES is for embedded systems and mobile devices; it's essentially just a subset of OpenGL. Seems like it might be pretty nifty...
Step toward OpenGL 2.0 (Score:5, Informative)
When OpenGL 1.0 was initially proposed, it provided a standard implementation for fixed pipeline segments (with the idea that individual implementations could selectively choose which pieces of the pipe would be implemented in software, and which would be implemented in hardware). This was a very significant development, as it meant that everyone could operate with the same set of rules, and could do the same things, but those without hardware support may suffer some performance penalties (of course, with modern CPUs, some of the stages in the pipeline can have very high-perf implementations in software).
Since then, the rules have changed significantly. Hardware developers have started to suggest that the behaviour of the individual components of the pipeline could be programmable. NVidia and ATI have already responded to this, providing a wide variety of programmable shader technologies (e.g. programmable vertex and pixel shaders). I understand that OpenGL 1.5 essentially brings this level of programmability up to current levels (I think that DirectX 9.0 does the same thing, but I would love for someone to correct me on this. I haven't touched DirectX in a while, so I'm a little rusty. In fact, at the pace that hardware is evolving, I'm actually very rusty, and likely collapsing due to decay.) OpenGL 2.0 extends this idea of programmability to every stage of the pipeline. For most current video cards, this means that a lot of that programmability has to be implemented in software (which is essentially what people are doing anyway. If you want to implement programmable textures, you write software that interfaces with your video card's static texture routines.) 3DLabs is hoping to turn the computer graphics world on its ear by providing almost completely programmable graphics cards. Nearly every stage of the pipeline should be programmable in hardware. Of course, we will have to wait to see what they deliver, but I imagine that even if their cards suck ass in terms of performance (or they'll be targeted to the super high-end, so most of us will never see them), they should offer some features that will force some new developments from ATI and NVidia, which will eventually make their way down to regular consumers.
It's good that OpenGL 1.5 is out, to help keep OpenGL on the map of standards (especially since DirectX is a really inconvenient standard for those of us who don't run Windows), but I'm still pretty psyched for the release of OpenGL 2.0, w00t!
Re:Step toward OpenGL 2.0 (Score:2)
Okay, so here's one for you, not a correction but more like a clarification: DirectX has had this functionality since at least DirectX 8. For game developers, these things aren't "new and exciting" anymore; they're things you need to make a new game. So - this issue of not having a standard interface to programmable shaders in OpenGL has been a big factor in making people move to DirectX for a while now, so thi
OGL 1.5 - Legal Issues (Score:5, Interesting)
Re:OGL 1.5 - Legal Issues (Score:2)
Unfortunately, no-one on the DRI mailing list responded the last few times I asked there...
Re:OGL 1.5 - Legal Issues (Score:1)
Look here [sgi.com] for the document on GL_ARB_vertex_program and here [sgi.com] for the document on GL_ARB_fragment_program. Specifically, look under IP Status.
Hardware shaders definitely have legal entanglements.
Re:OGL 1.5 - Legal Issues (Score:2)
How will this affect Doom 3? (Score:1, Interesting)
Official Java bindings, finally (Score:3, Insightful)
Re: (Score:2)
Re:Official Java bindings, finally (Score:4, Informative)
Re:Official Java bindings, finally (Score:1)
GL4Java
SDL4Java (has an OpenGL binding)
development speed is critical (Score:5, Interesting)
Re:development speed is critical (Score:3, Informative)
One of its key features is that the Java bindings are automatically generated from the C OpenGL bindings, so it's pretty trivial to keep it up to date with the very latest changes. Compare this to GL4Java, which has really started falling out of date.
What happened to Fahrenheit? (Score:2, Interesting)
Re:What happened to Fahrenheit? (Score:3, Insightful)
Microsoft said that Fahrenheit would be the new standard 3D API and replacement for DirectX, and they managed to get SGI on board with it, which of course was the only purpose.
Remember that at this time DirectX was hopelessly behind OpenGL, and Microsoft needed to make sure that OpenGL development came to a standstill for a year or so while they were improving DirectX. After they had succeeded with th
OpenAL (Score:2, Interesting)
Do Linux game developers (or anyone at all) use OpenAL nowadays for environmental sound effects? Is it any good in its present state? It seems the website www.openal.org [openal.org] hasn't been updated since 2002. Well, most of the stuff seems to be from 2001...
Re:OpenAL (Score:1)
Remote OpenGL apps (Score:1)
I know that people have done this with SGI kit, but most commodity X drivers don't seem to support remote OpenGL and there is precious little information around.
Has anyone tried this?
- Brian
Re:Remote OpenGL apps (Score:4, Informative)
Re:Remote OpenGL apps (glx) (Score:1)
TimeZone
Official Open[GL,AL] bindings for Java (Score:3, Informative)
http://games-core.dev.java.net/
Here's a great example of using OpenGL/OpenAL under Win32/Linux written in Java.
(It uses the LWJGL - which is an OpenGL/OpenAL Java wrapper that uses nio).
http://www.puppygames.net/
Tremendously slow progress (Score:2, Interesting)
Too bad about Fahrenheit (Score:2)
Anyone remember Fahrenheit? The collaborative effort between SGI and Microsoft to redesign both APIs into a new, leaner, more capable common a
Re:Great (Score:5, Informative)
Oh, not necessarily. The OGL shading language is just a high-level version of the shading extensions that have previously existed. I'm pretty sure drivers will adapt by simply compiling the code before passing it to the card. The other extensions mentioned (like the ARB_vertex_buffer_object and ARB_occlusion_query) have been extensions to 1.4 for a while now, and my GeForce FX 5600 supports them already.
Now if the cards can accept the high-level language itself... that would be interesting (and perhaps will make for a bigger, hotter video card...)
Unless you already bought one (Score:3, Informative)
Sure, if you already HAVE a fancy-schmancy GeForceFX/Radeon 9500+ level card. For previous-generation hardware, you might get very simple shaders to work, but for more complex shaders that require looping, data-dependent branching, overbright float pixels etc, you're still gonna need new hardware :-) Even earlier hardware, well, tough - you might get vertex shaders if you're lucky.
Now if the cards can accept the high-level l
Re:Unless you already bought one (Score:4, Informative)
I think this concerns those guys who got the top-of-the-line cards just now, anyway. *Of course* they'll be concerned if the card they just bought becomes useless this soon. I'm pointing out that it's (probably) not the case with OpenGL 1.5 -- it seems that most of it can be implemented on existing 1.4 hardware. I'm not so sure though; haven't read the specs yet.
> No chance
That's because x86 uses a general instruction set. The reason why graphics cards have attained a faster rate of speedup than Moore's Law is because graphics uses very specialized instructions, with a lot of room for optimization.
We'll probably never get the syntactical form of the high-level language understood on the card anytime soon, but for the bytecode form of the high-level language (ala Java), it's a very good possibility... and it's less ambitious (more feasible) than that of the Java-chips....
Re:Great (Score:1)
> ARB_vertex_buffer_object and ARB_occlusion_query)
> have been extensions to 1.4 for a while now, and
> my GeForce FX 5600 supports them already.
ARB_vertex_buffer_object, maybe, but ARB_occlusion_query is new (though I think it is a direct promotion of NV_occlusion_query)
Re:Open GL? (Score:1, Redundant)
Re:Open GL? (Score:1)
Google [google.com] is a wonderful thing