Nvidia's Dave Kirk Explains The Point of Cg
An anonymous coward writes "This interview at ZDNet UK has Dave Kirk talking about how Nvidia's Cg programming language will bring movie-making and game-writing together. 'This is a big step towards convergence -- not films and movies and games being the same, but the way people create them being the same. Artists can use the same skills on both. Cg is almost guaranteed to be efficient in hardware, and any Renderman program can be translated to Cg, by hand or by a tool that someone's developing. Once that happens, all the moviemaking can take place in Cg.'"
Sounds great, but... (Score:1)
The interviewer could have been clearer on some technical points, but I guess it's managers they're aiming at, this being ZDNet and all. If they were interested in talking to engineers they could have done the interview with...
Re:Sounds great, but... (Score:2, Insightful)
Comments in the article [slashdot.org] that announced Cg suggest that it's more of a C-like language for programming the graphics card. The alternatives would be to use OpenGL or DirectX, or (shudder) the assembly language for the card's processor chip.
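For anyone who hasn't seen it, here's a rough idea of what that C-like code looks like: a hypothetical Cg fragment program doing per-pixel diffuse lighting. The struct and parameter names are mine, made up for illustration, not taken from the article or from NVidia's docs.

struct FragIn {
    float3 normal   : TEXCOORD0;  // surface normal, interpolated per pixel
    float3 lightDir : TEXCOORD1;  // direction from the surface to the light
};

float4 main(FragIn IN, uniform float4 diffuseColor) : COLOR
{
    // N.L lighting, clamped to [0,1]: the sort of thing you would otherwise
    // be hand-coding in the card's own assembly language.
    float NdotL = saturate(dot(normalize(IN.normal), normalize(IN.lightDir)));
    return diffuseColor * NdotL;
}

Whether a given card's fragment profile can actually run all of that is another question, but it reads like C, which is the point.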
Even though the interview mentions the magic word "Renderman", I don't think this will have much of an impact on movie-making. Well, maybe the low-budget stuff will move to it, but not the likes of Pixar or DreamWorks. Even with the speed of graphics hardware doubling every six months (which I agree is very impressive), we would still need eight or nine years of hardware improvements to render in real time a movie that currently takes two hours per frame.*
OK, it doesn't have to be real-time to be useful. If it's significantly faster than a general-purpose CPU, that could sell it. I suppose it could still be useful for preview renders, where you're trying to check that the animation flows smoothly before you start hammering the render farm. But when you come to do the rendering that your audience will see, you'll want the best quality that you can get, within the constraints of your schedule and your budget.
I've done a few (very short) animated films on the computer, and my experience is that no matter how many bogomips you have, your rendering time per frame stays constant. This is because as the hardware gets faster, I soak up the extra power not by making longer films or more films, but by adding more polygons, more complex textures, more subtle lighting effects, and so on. So the number of frames stays more or less constant, but the complexity of the image in each frame goes up.
Once the computer became fast enough to render a picture quicker than I could draw it by hand, I became the bottleneck. Making the computer faster doesn't let me write the script faster, or animate a character faster, or even move the mouse faster.
Perhaps what's more likely to happen with Cg is that NVidia goes to Pixar in a few years and says "we have a chip here that can render a movie like Monsters Inc in real time". Pixar then buys 300 of them and throws enough polygons and lighting effects at them that each frame still takes two hours to render...
* Real time for film = 24 fps, two hours = 7200 seconds, so 24 * 7200 means that two hours per frame is 172,800 times slower than real time. 172,800 is partway between 2^17 and 2^18, so you need 17 or 18 doublings to accelerate this much. A doubling every six months means 8 or 9 years have to pass.
OpenGL or not? (Score:1)
They say the Cg compiler will "output either DirectX or OpenGL" and that "any place where those two run, Cg will run."
Does this mean that it will work anywhere either DirectX or OpenGL works, or only where both of them run? I would naturally assume the former, but then there's the part that says:
"What happens is the compiler reads the specification of the hardware from DirectX, works out what the capabilities are and creates code that runs well on that hardware."
It sounds to me like this makes Cg dependent on DirectX, as opposed to simply supporting it.
They want "to be open and flexible and take away all the reasons not to go with Cg", but I would imagine that a dependence on DirectX would constitute a possible reason not to go with Cg.
So is DirectX necessary just for hardware detection, or is there a service in OpenGL that Cg will use for the same purpose?
And now to conjecture wildly.
It sounded a bit like nVidia has the right intentions, but is afraid that Microsoft will cause trouble if it looks like OpenGL is given equal support in Cg.
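To make the question concrete, here's a hypothetical Cg vertex program (the names are mine, invented for illustration). As I understand the article, the same source would be fed through a back end targeting either API; if I've got that right, only the target profile changes, something like a DirectX vertex shader profile on one side and an OpenGL vertex program extension on the other, while the Cg source stays the same.

struct VertIn {
    float4 position : POSITION;
    float4 color    : COLOR0;
};

struct VertOut {
    float4 position : POSITION;
    float4 color    : COLOR0;
};

VertOut main(VertIn IN, uniform float4x4 modelViewProj)
{
    VertOut OUT;
    // Transform the vertex into clip space and pass the colour straight through.
    // Nothing here is specific to DirectX or OpenGL; that choice is made when
    // the compiler picks a target profile for whatever hardware it finds.
    OUT.position = mul(modelViewProj, IN.position);
    OUT.color    = IN.color;
    return OUT;
}

So the interesting part isn't the language itself, it's where the "works out what the capabilities are" step lives.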
Re:OpenGL or not? (Score:1)
OpenGL has to support every feature in the spec, so if the hardware doesn't support something, the driver falls back to doing it in software. DirectX is different: the hardware doesn't have to support every feature, and software 'emulation' isn't required.
Should we expect non-compliance then? (Score:1)
And we find that Microsoft's Visual C++ does not natively support some common parts of the C++ standard. Should we expect Microsoft's Cg variant to be non-compliant with nVidia's Cg standard too?
Ain't gonna happen (Score:4, Informative)
On the desktop/video game side, it doesn't seem like Cg has a great chance to survive. OpenGL 2.0 is a much more general language, and with Doom 3 having an OpenGL 2.0 rendering pipeline, Cg becomes a little less ubiquitous as well. There will be tons of games built on top of the Doom 3 engine, just as there were on top of the Quake engines.
Cg also needs the other IHVs, such as ATI, Matrox and 3DLabs, to write back ends for the Cg compiler. That's probably not gonna happen.
ATI is behind nVidia in driver development, and it doesn't look like they have the manpower to devote to a Cg back end. Plus there are rumours that they are following nVidia and moving to a Unified Driver Architecture (hopefully this means good Linux drivers from ATI). I don't see ATI having the manpower to undertake both projects.
3DLabs is pretty devoted to OpenGL2.0. They need this to survive as a company. Their cards are used quite often on *nix workstations. They can't afford to have OpenGL die.
And Matrox... who really cares about Matrox? They haven't done sh1t in a while. Sure, they had the dual-head cards, but now you can get dual-head cards from other IHVs. And they still haven't put out a card with a programmable pipeline.
So all I can see for Cg at the moment is that it will replace NVParse. It is kinda nice to write one shader and then translate it to D3D and OpenGL. Cg is a good short-term fix, but not a good long-term vision. OpenGL 1.0 was forward-thinking and turned out to be a good, stable API for 10 years, unlike some other APIs, *cough*D3D*cough*. Hopefully OpenGL 2.0 will have the same staying power.
On a related note... (Score:1)
You can read about it at The Reg [theregister.co.uk] or straight from John [bluesnews.com].
Missing the point (Score:3, Interesting)
First of all, there are millions and millions of lines of code that generate or modify RIB and SL. Entire toolchains are built around them. In other words... Renderman is there, proven and established. And when I say proven, I mean proven in the production environment of a motion picture.
Second, as seen in several posts both here (in previous topics) and on Usenet (search comp.graphics.rendering.renderman), real-time rendering is currently not an option, and will probably not be an option for the foreseeable future. The reason for this is simple... if the hardware or software gets more powerful, then the director's desire to use that power to make things even more lifelike increases along with it. Just look at Toy Story 2 vs. Toy Story... the scenes and textures are immensely more complex, resulting in a production time that wasn't significantly shorter, even though the two films are quite some years apart.
Third, Renderman is a VERY flexible tool to work with. You can do just about anything you want when it comes to geometry, and when it comes to texturing, you can write almost any texture you can imagine in the Shading Language. You have to have worked with the SL to fully appreciate its power and flexibility.
All moviemaking in Cg? (Score:2)