Graphics Software

Nvidia's Dave Kirk Explains The Point of Cg

An anonymous coward writes "This interview at ZDNet UK has Dave Kirk talking about how Nvidia's Cg programming language will bring movie-making and game-writing together. 'This is a big step towards convergence -- not films and movies and games being the same, but the way people create them being the same. Artists can use the same skills on both. Cg is almost guaranteed to be efficient in hardware, and any Renderman program can be translated to Cg, by hand or by a tool that someone's developing. Once that happens, all the moviemaking can take place in Cg.'"


  • Will the programs that incorporate Cg have to be completely written in Cg? Can Cg be used as a library or will it have to be the base language of the program?

    The interviewer could have been clearer on some technical points, but I guess it's managers they're aiming at, this being ZDNet and all. If they were interested in talking to engineers, they could have done the interview with /.
    • Will the programs that incorporate Cg have to be completely written in Cg?

      Comments in the article [slashdot.org] that announced Cg suggest that it's more of a C-like language for programming the graphics card. Alternatives would be to use OpenGL or DirectX or (shudder) the assembly language for the card's processor chip.

      Even though the interview mentions the magic word "Renderman", I don't think this will have much of an impact on movie-making. Well, maybe the low-budget stuff will move to it, but not the likes of Pixar or Dreamworks. Even with the speed of graphics hardware doubling every six months (which I agree is very impressive), we would still need eight or nine years of hardware improvements to render in real time a movie that currently takes two hours per frame.*

      OK, it doesn't have to be real-time to be useful. If it's significantly faster than a general-purpose CPU, that could sell it. I suppose it could still be useful for preview renders, where you're trying to check that the animation flows smoothly before you start hammering the render farm. But when you come to do the rendering that your audience will see, you'll want the best quality that you can get, within the constraints of your schedule and your budget.

      I've done a few (very short) animated films on the computer, and my experience is that no matter how many bogomips you have, your rendering time per frame stays constant. This is because as the hardware gets faster, I soak up the extra power not by making longer films or more films, but by adding more polygons, more complex textures, more subtle lighting effects, and so on. So the number of frames stays more or less constant, but the complexity of the image in each frame goes up.

      Once the computer became fast enough to render a picture quicker than I could draw it by hand, I became the bottleneck. Making the computer faster doesn't let me write the script faster, or animate a character faster, or even move the mouse faster.

      Perhaps what's more likely to happen with Cg is that NVidia goes to Pixar in a few years and says "we have a chip here that can render a movie like Monsters Inc in real time". Pixar then buys 300 of them and throws enough polygons and lighting effects at them that each frame still takes two hours to render...


      * Real time for film = 24 fps, two hours = 7200 seconds, so 24 * 7200 means that two hours per frame is 172,800 times slower than real time. 172,800 is partway between 2^17 and 2^18, so you need 17 or 18 doublings to accelerate this much. A doubling every six months means 8 or 9 years have to pass.
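
      A quick sanity check of that arithmetic, as a throwaway C snippet (the numbers are just the ones from this footnote, nothing more):

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double fps = 24.0;                       /* film frame rate */
            double secs_per_frame = 2.0 * 3600.0;    /* two hours per frame today */
            double slowdown = fps * secs_per_frame;  /* how many times slower than real time */
            double doublings = log(slowdown) / log(2.0);
            printf("%.0fx slowdown, %.1f doublings, ~%.1f years at one doubling per six months\n",
                   slowdown, doublings, doublings / 2.0);
            return 0;
        }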

  • Okay, I think I'm a bit confused here.

    They say the Cg compiler will "output either DirectX or OpenGL" and that "any place where those two run, Cg will run."

    Does this mean that it will work anywhere either DirectX or OpenGL works, or only where both of them run? I would naturally assume the former, but then there's the part that says:

    "What happens is the compiler reads the specification of the hardware from DirectX, works out what the capabilities are and creates code that runs well on that hardware."

    It sounds to me like this makes Cg dependent on DirectX, as opposed to simply supporting it.

    They want "to be open and flexible and take away all the reasons not to go with Cg", but I would imagine a dependence on DirectX would constitute a possible reason not to go with Cg.

    So is DirectX necessary just for hardware detection, or is there a service in OpenGL that Cg will use for the same purpose?
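
    If I had to guess at how that detection step works, it's something like the sketch below. To be clear, the function and profile names here are made up for illustration; this is not the actual Cg runtime API.

      #include <stdio.h>

      /* Hypothetical sketch: ask whichever 3D API is present (DirectX or
         OpenGL) what the card can do, then pick a compiler back end that
         matches.  None of these names are real Cg calls. */
      typedef enum { PROFILE_SOFTWARE, PROFILE_DX8_VSHADER, PROFILE_ARB_VPROG } profile_t;

      /* Stub: a real implementation would read DirectX caps or the OpenGL
         extension string here. */
      static profile_t query_latest_profile(void) { return PROFILE_ARB_VPROG; }

      static void compile_for(const char *cg_source, profile_t p)
      {
          /* Stub: a real compiler would emit DirectX or OpenGL shader code. */
          printf("compiling Cg source (%s) for profile %d\n", cg_source, (int)p);
      }

      int main(void)
      {
          compile_for("/* some Cg program */", query_latest_profile());
          return 0;
      }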

    And now to conjecture wildly.
    It sounded a bit like nVidia has the right intentions, but is afraid that Microsoft will cause trouble if it looks like OpenGL is given equal support in Cg.
    • AFAIK:

      OpenGL has to support every feature in the spec; if the hardware doesn't support something, the driver does it in software. DX is different: the hardware doesn't have to support every feature, and software 'emulation' isn't required.
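
      For example, on the OpenGL side an optional feature just shows up in the extension string (and anything in the core spec always works, even if the driver falls back to software). Something like this rough sketch, which assumes a GL context is already current:

        #include <string.h>
        #include <GL/gl.h>

        /* Returns nonzero if the named extension (e.g. "GL_ARB_vertex_program")
           is advertised by the driver.  Under D3D you would instead fill in a
           caps structure, and the feature may simply not be there at all. */
        int has_gl_extension(const char *name)
        {
            const char *exts = (const char *)glGetString(GL_EXTENSIONS);
            return exts != NULL && strstr(exts, name) != NULL;
        }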
  • "Not the same code base, but it's the same language specification, as C is C then Cg is Cg."

    And yet we find that Microsoft's Visual C++ does not natively support some common parts of the C++ standard. Should we expect Microsoft's Cg variant to be non-compliant with nVidia's Cg standard too?
  • Ain't gonna happen (Score:4, Informative)

    by Screaming Lunatic ( 526975 ) on Wednesday July 03, 2002 @12:52PM (#3815177) Homepage
    Don't expect Cg to be picked up by the movie industry. Renderman is a much more general shading language and everybody is already used to it.

    On the desktop/video game side, it doesn't seem like it has a great chance to survive. OpenGL2.0 is a much more general language. With Doom3 having an OpenGL2.0 rendering pipeline, it makes Cg a little less ubiquitous as well. There will be tons of games built on top of the Doom3 engine, just as there were on top of the Quake engines.

    Cg also needs the other IHVs, such as ATI, Matrox and 3DLabs, to write back ends for the Cg compiler. That's probably not gonna happen.

    ATI is behind nVidia in driver development, and it doesn't look like they have the manpower to devote to a Cg back end. Plus there are rumours that they are following nVidia and moving to a Unified Driver Architecture. (Hopefully this means good Linux drivers from ATI.) I don't see ATI having the manpower to undertake both projects.

    3DLabs is pretty devoted to OpenGL2.0. They need this to survive as a company. Their cards are used quite often on *nix workstations. They can't afford to have OpenGL die.

    And Matrox... who really cares about Matrox. They haven't done sh1t in a while. Sure, they had the dual-head cards, but now you can get dual-head cards from other IHVs. And they still haven't put out a card with a programmable pipeline.

    So, all I can see for Cg at the moment is that it will replace NVParse. It is kinda nice to write one shader and then translate it to D3D and OpenGL. Cg is a good short-term fix, but not a good long-term vision. OpenGL1.0 was forward-thinking, and it turned out to be a good, stable API for 10 years, unlike some other APIs, *cough*D3D*cough*. Hopefully, OpenGL2.0 will have the same staying power.

  • John Carmack has decided to go with OpenGL 2.0 over Cg for the backend of Doom 3.0, citing vendor neutrality.

    You can read about it at The Reg [theregister.co.uk] or straight from John [bluesnews.com]
  • Missing the point (Score:3, Interesting)

    by winchester ( 265873 ) on Wednesday July 03, 2002 @01:54PM (#3815823)
    It seems to me NVidia misses the point when it comes to Renderman. There are very good reasons why Renderman has been the industry standard for as long as computer-generated special effects have been used in moviemaking.

    First of all, there are millions and millions of lines of code that generate or modify RIB and SL. Entire toolchains are built around it. In other words... Renderman is there, proven and established. And when I say proven, I mean proven in the production environment of a motion picture.

    Second, as seen in several posts both here (in previous topics) and on Usenet (search comp.graphics.rendering.renderman), real-time rendering is currently not an option, and will probably not be an option for the foreseeable future. The reason for this is simple... if the hardware or software gets more powerful, the director's desire to use that power to make things even more lifelike will also increase. Just look at Toy Story 2 vs. Toy Story... the scenes and textures are immensely more complex, resulting in a production time that wasn't significantly shorter, even though the two motion pictures are quite a few years apart.

    Third, Renderman is a VERY flexible tool to work with. You can do just about everything you want when it comes to geometry, and when it comes to texturing, you can write almost any texture you can imagine in the Shading Language. You have to have worked with the SL to fully appreciate its power and flexibility.

  • Cg is almost guaranteed to be efficient in hardware, and any Renderman program can be translated to Cg, by hand or by a tool that someone's developing. Once that happens, all the moviemaking can take place in Cg.
    Let's try that another way, shall we?
    Machine code is almost guaranteed to be efficient in hardware, and any C program can be translated to machine code, by hand or by a tool that someone's developing. Once that happens, all the development can take place in machine code.
    I don't think so. I don't know nothin' 'bout graphics in the movie industry, but why wouldn't everyone continue to code in Renderman, and then translate it automatically to Cg at the end?
