Nvidia Releases Hardware-Accelerated Film Renderer

snowtigger writes "The day we'll be doing movie rendering in hardware has come: Nvidia today released Gelato, a hardware rendering solution for movie production with some advanced features: displacement, motion blur, raytracing, flexible shading and lighting, and a C++ interface for plugins and integration, plus lots of other goodies used in television and film production. It will be interesting to see how this competes against the software rendering solutions used today. And it runs under Linux too, so we might be seeing more Linux rendering clusters in the future =)" Gelato is proprietary (and pricey), which makes me wonder: is there any Free software capable of exploiting the general computing power of modern video cards?
  • by Anonymous Coward on Tuesday April 20, 2004 @05:45AM (#8914536)
    Sadly, the hardware acceleration that consumer 3D graphics cards provide isn't useful for the high-quality rendering needed for film and television. The needs of games are just different, partially because of the need to render in real time. So I doubt there's much scope for free software to make use of them for that purpose...
  • by Anonymous Coward on Tuesday April 20, 2004 @05:45AM (#8914538)
    The rumor on the street is that a Soho based SFX house tried this when they had a deadline that standard software rendering couldn't meet.

    So they wrote an engine to do renderman->OpenGL and ran it across many boxes.

    The problem was that they got random rendering artefacts when rendering on different cards - different colors etc. - and couldn't figure out why.

    When working on one box they got consistent results, but only had the power of one renderer.
  • Re:I like this... (Score:2, Interesting)

    by Mister Coffee ( 771513 ) on Tuesday April 20, 2004 @05:47AM (#8914544)
    It does not happen often that a hardware manufacturer has Linux support before it has Windows support. At least I have never seen it before.
  • by mcbridematt ( 544099 ) on Tuesday April 20, 2004 @06:00AM (#8914591) Homepage Journal
    But NVIDIA's Quadro lineup *IS* just PCB-hacked consumer cards. Some PCI ID (or BIOS, for the NV3x cards) hacking can get you a Quadro out of a GeForce easily, minus the extra video memory present on the Quadros. I've done this heaps of times with my GeForce4 Ti 4200 8x (to a Quadro 780 XGL and even a 980 XGL), and I believe people have done it with the NV3x/FX cards as well.

    This film renderer is different. It uses the GPU and CPU together as powerful floating-point processors (not sure if Gelato does anything more than that).
  • by WARM3CH ( 662028 ) on Tuesday April 20, 2004 @06:04AM (#8914598)
    Actually, there have been reports of using such hardware to produce results similar to the high-end, software-based methods used in films. The trick is to break the job (typically the complex RenderMan shaders) into many passes and feed them to the graphics card to process. By many passes, I mean 100~200 passes. The outcome is a frame rendered in a few seconds (we're not talking about real-time rendering here), which is MUCH faster than the software-based approaches. The limit in the past was that the color representation inside the GPUs used a small number of bits per channel, and with lots of passes over the data, round-off errors would degrade the quality of the results. But now nVidia supports 32-bit floating point representation for each color channel (i.e. 128 bits per pixel for RGBA!), and this brings back the idea of using the GPU with many passes to complete the job. Please note that in the film and TV business we're talking about large clusters of machines and weeks of rendering, and bringing that down to days with a smaller number of machines is very big progress.
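A rough sketch of the round-off issue described above: quantising each color channel to 8 bits after every pass loses far more precision than keeping 32-bit floats. The per-pass "shader" here is just a random gain, and the pass count is made up; nothing in this snippet comes from Gelato or the comment itself.

```python
import numpy as np

PASSES = 150                      # roughly the 100~200 passes mentioned above
np.random.seed(0)
gains = np.random.uniform(0.98, 1.02, PASSES)   # stand-in for per-pass shader math

value_fp32 = np.float32(0.5)      # channel kept in 32-bit floating point
value_8bit = 0.5                  # channel quantised to 8 bits after every pass

for g in gains:
    value_fp32 = np.float32(value_fp32 * g)
    value_8bit = round(value_8bit * g * 255) / 255   # old 8-bit framebuffer behaviour

exact = 0.5 * np.prod(gains.astype(np.float64))
print(f"exact: {exact:.6f}")
print(f"fp32:  {value_fp32:.6f}  (error {abs(value_fp32 - exact):.2e})")
print(f"8-bit: {value_8bit:.6f}  (error {abs(value_8bit - exact):.2e})")
```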
  • by XMunkki ( 533952 ) on Tuesday April 20, 2004 @06:13AM (#8914630) Homepage
    Problem was that they got random rendering artefacts by rendering on different cards - different colors etc, and couldn't figure out why.

    I have seen this problem in software renderers as well. The problem seemed to be that part of the rendering farm was running on different processors (some were Intel, some AMD, and many different speeds and revisions), and one of them supposedly had a little difficulty with high-precision floating point and computed the images with a greenish tone. Took over a week to figure this one out.
  • ExLuna, take 2 :) (Score:2, Interesting)

    by Anonymous Coward on Tuesday April 20, 2004 @06:20AM (#8914653)
    Seems like the ex-Exluna staff (bought by NVidia) are going to kick PRMan's a$$ at the hardware level: they tried it at the software level with Entropy but got sued into oblivion by Pixar. Now it's time for revenge?
  • M4 open GL VJtool. (Score:3, Interesting)

    by kop ( 122772 ) on Tuesday April 20, 2004 @06:21AM (#8914660)
    M4 [captainvideo.nl] is a free-as-in-beer movie player/VJ tool that uses the power of OpenGL to manipulate movies, images and text.
  • by CdBee ( 742846 ) on Tuesday April 20, 2004 @06:38AM (#8914709)
    is there any Free software capable of exploiting the general computing power of modern video cards?

    I expect that once it suddenly becomes clear that the GPU in a modern video card has serious processing power, that someone will release a version of the SETI@Home [berkeley.edu] client which can use the rendering engine as a processor. Bearing in mind that most computers use their GPU's for a very small percentage of their logged-in life, I suspect there is real potential for using it for analysing on distributed computing projects.
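For what it's worth, the reason a SETI@Home-style client would map well onto a GPU is that the workload is the same arithmetic applied to many independent chunks of signal. The sketch below shows that shape of work on the CPU with NumPy; the chunk sizes and the FFT-power-spectrum step are only illustrative, not SETI@Home's actual pipeline.

```python
import numpy as np

# A power spectrum per chunk is embarrassingly parallel: the same math runs on
# every chunk, which is exactly the kind of work a GPU's shader pipeline does.
rng = np.random.default_rng(42)
signal = rng.standard_normal(1024 * 256)          # fake recorded signal
chunks = signal.reshape(256, 1024)                # 256 independent work items

spectra = np.abs(np.fft.rfft(chunks, axis=1)) ** 2   # same op applied per chunk
peaks = spectra.max(axis=1)                          # strongest bin per chunk
print(peaks.shape, float(peaks.max()))
```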
  • Re:'pricey' (Score:3, Interesting)

    by RupW ( 515653 ) * on Tuesday April 20, 2004 @06:56AM (#8914765)
    Professional grade equipment is all expensive.

    No, you can get raytracing hardware for less than the software and a Quadro FX would cost you.

    For example, there's the ART PURE P1800 card [artvps.com], which is a purpose-built raytracing accelerator. It's a mature product with an excellent feature set, speaks RenderMan and has good integration with all the usual 3D packages. It's generally acknowledged as a very fast piece of kit with excellent image quality and plenty of quality/speed trade-off options. And if you've a deeper wallet, they do much bigger network-appliance versions.
  • by DrSkwid ( 118965 ) on Tuesday April 20, 2004 @07:01AM (#8914777) Journal
    I used to see that with 3ds4 as well when I was rendering this [couk.com]. One was a Pentium and one was a Pentium Pro.

    Ah, those were the days. We were on a deadline and rendered it over Christmas. After four hours the disks would be full and it would be time to transfer it to the DPS-PVR. I spent six days where I couldn't leave the room for more than four hours, sleep included. Was pretty wild!

    VH1 viewers voted it second best CGI video of all time, behind Gabriel's Sledgehammer, so I guess it was worth it!

  • by agby ( 303294 ) on Tuesday April 20, 2004 @07:27AM (#8914849)
    I was under the impression that it's hard to use a video card for general computing tasks because of the way AGP is designed. It's really good at shunting massive amounts of data into the card (textures, geometry, lighting, etc.) but terrible at getting a good data rate back into the computer. They're designed to take a load of data, process it and push the output back to the screen, not the processor. This is the major reason, IMHO.
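A back-of-the-envelope look at the asymmetry described above, using assumed figures: AGP 8x peak downstream is roughly 2.1 GB/s, while the readback rates achieved in practice were commonly far lower; the 200 MB/s used here is only a guess for illustration.

```python
# A 2048x1556 film-resolution frame at 128-bit float RGBA.
frame_bytes = 2048 * 1556 * 16

upload_rate = 2.1e9      # assumed AGP 8x peak downstream, bytes/s
readback_rate = 0.2e9    # assumed practical readback rate, bytes/s (illustrative)

print(f"frame size: {frame_bytes / 1e6:.1f} MB")
print(f"upload:     {frame_bytes / upload_rate * 1e3:.0f} ms")
print(f"readback:   {frame_bytes / readback_rate * 1e3:.0f} ms")
```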
  • by Anonymous Coward on Tuesday April 20, 2004 @07:59AM (#8914984)
    That was true a couple of years ago; however, the GeForce 6800 and Quadro FX 4000 have support for:

    - 32-bit per-component precision in the pixel shader
    - dynamic branches in the pixel and vertex shaders
    - unlimited instruction count for pixel and vertex shader programs

    These features are very useful: they make it possible to render frames on the GPU at full quality. Considering that GPUs have a HUGE amount of processing power, this will make rendering much faster.

    For example, the above-mentioned new GPUs from NVidia have 32 FPUs in the pixel pipeline alone. Each of those FPUs can perform 4 FP operations at 32-bit precision per clock. That's 128 operations per clock, or 51.2 GFLOPS in the pixel shader alone at a 400 MHz core clock.
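The arithmetic behind that figure, for clarity; the 400 MHz core clock is the GeForce 6800 Ultra's launch spec, which the comment's numbers imply but don't state.

```python
fpus = 32                  # FPUs in the pixel pipeline, per the comment
ops_per_fpu_per_clock = 4  # 4 FP operations at 32-bit precision per clock
core_clock_hz = 400e6      # assumed 400 MHz core clock

ops_per_clock = fpus * ops_per_fpu_per_clock
print(ops_per_clock, "ops/clock")                     # 128
print(ops_per_clock * core_clock_hz / 1e9, "GFLOPS")  # 51.2
```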
  • by invispace ( 159213 ) on Tuesday April 20, 2004 @08:22AM (#8915072)
    If anyone's interested, the dino on the http://film.nvidia.com/page/gelato.html page was one of Entropy's flagship images. Entropy was a pay-to-play renderer written to the RenderMan spec by the guy who wrote BMRT. Pixar sued the company that made both of those RenderMan-compliant renderers and basically forced them into business with Nvidia, who quickly snatched up the company and paid off Pixar. Nvidia had been trying to come up with a hardware shader language like RenderMan's, and thus came out with the shoddy and less-than-capable Cg shader language.

    Unfortunately, no matter how good that card looks on screen, it's still only going to be a preview render. Straight 35mm film is rendered out at 2048x1556, and you wouldn't believe how tedious CG work is with every single person above you telling you to correct every little thing. The one thing this will do is help look-dev folk and shader writers out. They get paid enough as is, though.

    Oh... you might be interested to know that most of the renderfarms are now at least 1/2-2/3 x86 machines running Linux, and they have been for the past 3 years. No large studios are using SGIs anymore, but surprisingly a lot of the boutiques are using OSX. I guess that's what happens when Apple takes a hint from MS and buys what it wants (Shake) instead of making it.
  • Re:Teh horror !!! (Score:3, Interesting)

    by Hast ( 24833 ) on Tuesday April 20, 2004 @08:44AM (#8915236)
    Probably something you can find on the General Purpose GPU [gpgpu.org] site.

    I've toyed with shaders some and implemented a system for image processing on GPUs. Quite a lot of fun really, though we didn't do any comparisons with the CPU to see how much faster it was. (That project isn't published anywhere, though.)
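For readers wondering what "image processing on GPUs" looks like in practice: a fragment shader runs the same small kernel for every pixel. The sketch below is not from Hast's project; it just shows an equivalent per-pixel brightness/contrast kernel written on the CPU with NumPy.

```python
import numpy as np

def brightness_contrast(image, brightness=0.1, contrast=1.2):
    """Per-pixel math of the sort a fragment shader would run, applied to the
    whole image at once. `image` is float RGB in [0, 1], shape (H, W, 3)."""
    out = (image - 0.5) * contrast + 0.5 + brightness
    return np.clip(out, 0.0, 1.0)

frame = np.random.rand(480, 640, 3).astype(np.float32)   # toy input frame
result = brightness_contrast(frame)
print(result.shape, result.dtype, float(result.min()), float(result.max()))
```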

  • Re:BURN!!!!!! (Score:3, Interesting)

    by Hast ( 24833 ) on Tuesday April 20, 2004 @08:47AM (#8915257)
    Play around with pixel/vertex shaders some; they are quite easy to get the hang of and plenty powerful. (Even if you don't have the latest and greatest gfx cards.)

    Could make a nice addition to GIMP (if there isn't one already).
  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Tuesday April 20, 2004 @08:48AM (#8915266)
    Comment removed based on user account deletion
  • by pVoid ( 607584 ) on Tuesday April 20, 2004 @09:03AM (#8915371)
    Who are you?

    How do you know ISVs are getting annoyed? Do you go to lunch with ISVs every other day?

    Not only are you crudely generalizing, but I think your point is actually not sound at all. You think Adobe cares about Microsoft dominating?

    The much more plausible explanation is that they (nVidia) already had the drivers/software for the architecture on Linux (read previous posts, they bought the card).

    Yet another plausible explanation is that drivers are more difficult to implement in Windows (because of kernel-mode constraints).

    And before people start flaming, I'll tell you right away:

    I've written drivers for both. A seg fault in a Linux kernel driver will generate a message saying "oops, you segfaulted your kernel", whereas a seg fault, or even an invalid heap memory tag, in Windows will instantly halt the system with a blue screen. Don't even get me started about IRQLs in Windows - they are essential for proper SMP multithreading, but really are very difficult to work with.

  • by saha ( 615847 ) on Tuesday April 20, 2004 @12:43PM (#8918274)
    First, I'd like to see a greater selection of graphics hardware on the G5 machines. The Radeon 9800 is good, but a Quadro FX 4000 or 3000 would make the selection better. I don't think Apple should try to get 3DLabs Wildcat boards on the G5s. Applications like Gelato running on OSX would give the platform a boost as well.

    For the kind of work I do, a Nvidia Quadro FX 3000G would be best for driving large displays.

  • by forkazoo ( 138186 ) <<wrosecrans> <at> <gmail.com>> on Tuesday April 20, 2004 @01:32PM (#8918977) Homepage
    I have mod points, and I really want to do a little bit of smacking down, but I'll just go for correcting instead. I phear the metamods, yo!

    ASHLI is *not* a renderer. It isn't anywhere near doing what Gelato does. Gelato takes a scene file and gives you a picture. It does it very nicely, using motion blur, programmable shading, and all sorts of fun stuff like that. It is written by the ex-Exluna boys (Larry Gritz, Matt Pharr, Craig Kolb -- three mofos who know their shizzle).

    ASHLI takes a RenderMan shader in RSL and gives you a compiled shader for OpenGL or DirectX. It's then up to you to write the whole renderer.

    It's cool, but if you are seriously writing lots of RenderMan shaders, you can probably just as well write them in GL Slang for your in-house customised OpenGL renderer. ASHLI's utility is limited. Frankly, it is pretty much a complete non sequitur in a discussion about a renderer.
