Nvidia Releases Hardware-Accelerated Film Renderer 251
snowtigger writes "The day we'll be doing movie rendering in hardware has come: Nvidia today released Gelato, a hardware rendering solution for movie production with some advanced rendering features: displacement, motion blur, raytracing, flexible shading and lighting, a C++ interface for plugins and integration, plus lots of other goodies used in television and movie production. It will be nice to see how this will compete against the software rendering solutions used today. And it runs under Linux too, so we might be seeing more Linux rendering clusters in the future =)" Gelato is proprietary (and pricey), which makes me wonder: is there any Free software capable of exploiting the general computing power of modern video cards?
Spelling... (Score:4, Funny)
Gelato seems to be correct...
Nice to see some good out of BMRT/Exluna. (Score:5, Informative)
For those who don't remember, BMRT was a really cool RenderMan-based renderer that Pixar had some sort of love/hate relationship with. IIRC, they used it, yet they sued the company. In the end nVidia bought them, though it wasn't clear why at the time.
Re:Nice to see some good out of BMRT/Exluna. (Score:5, Informative)
Interestingly, BMRT was free as in $$$ but not as in Free Software. This was one of the first software packages that made me recognize how big this distinction is. (A free-as-in-Free-Software program would probably have continued on, as people might have coded around some of the disputed intellectual property; a free-as-in-$$$ program could be killed with the carrot and stick of a lawsuit and a buyout opportunity.)
Is a beowulf cluster of blue moon ray tracers... (Score:2)
Seems to be Open now? (Score:4, Informative)
I like this... (Score:5, Funny)
Operating System
* RedHat Linux 7.2 or higher
* Windows XP (coming soon)
Re:I like this... (Score:2, Interesting)
I think it'll start happening a lot more (Score:4, Insightful)
Re:I think it'll start happening a lot more (Score:3, Interesting)
How do you know ISVs are getting annoyed? Do you go to lunch with ISVs every other day?
Not only are you crudely generalizing, I think your point is actually not sound at all. You think Adobe cares about Microsoft dominating?
The much more plausible explanation is that they (nVidia) already had the drivers/software for the architecture on Linux (read previous posts, they bought the card).
Yet another plausible explanation is that drivers are more difficult to implement in Windows (becau
Re:I think it'll start happening a lot more (Score:4, Insightful)
How do you know ISVs are getting annoyed? Do you go to lunch with ISVs every other day?
No, but working for a medium-sized ISV that deals with Microsoft (we buy bulk embedded XP licenses for use in custom gaming machines), I can tell you a few things about how Microsoft deals with customers. They have actually tried to offer us better deals if we discontinued our Linux solutions and marginalized our dealings with our Russian partners, who produce hardware and software for use with Mandrake Linux 9.x in gaming solutions. (Sounds impossible? Think again.) I can only imagine how much more underhanded Microsoft are when dealing with bigger ISVs.
Not only are you crudely generalizing, I think your point is actually not sound at all. You think Adobe cares about Microsoft dominating?
I'm sure Netscape and Sun didn't care either, until Microsoft took them out of the market. You are really insulting the intelligence of the Adobe executives if you think that they haven't considered this possibility or what they could do to avoid something similar happening.
Re:I like this... (Score:3, Informative)
Look at the AMD 64 [devx.com] ("Opteron", etc) CPU. Linux support is here, but a native version of Microsoft Windows has yet to be released.
New Headline: (Score:3, Funny)
Re:I like this... (Score:2)
Re:I like this... (Score:2, Funny)
Re:I like this... (Score:3, Insightful)
3D graphics cards aren't relevant (Score:2, Interesting)
Re:3D graphics cards aren't relevant (Score:3, Insightful)
Re:3D graphics cards aren't relevant (Score:5, Interesting)
This film renderer is different. It uses the GPU and CPU together as powerful floating-point processors (not sure if Gelato does anything more than that).
Re:3D graphics cards aren't relevant (Score:3, Informative)
A couple of years ago I got a GeForce4 4800 and a Quadro4 900 XGL. I performed the required resistor mod and flashed the GeForce4 with the Quadro4's BIOS.
Sure, the GeForce4 got recognised as a Quadro4 900 XGL in the Windows display control panel, but when I ran benchmarks like SPECviewperf it was obvious the modded GeForce4 did NOT perform like a real Quadro4 900 XGL. Capabilities like the HW-accelerated clip planes did not see
Re:3D graphics cards aren't relevant (Score:4, Informative)
Re:3D graphics cards aren't relevant (Score:5, Interesting)
Re:3D graphics cards aren't relevant (Score:2)
Re:3D graphics cards aren't relevant (Score:3, Insightful)
Besides, you don't need killer bus bandwidth with this because you're not trying to pump out 100fps using a couple of hundred megs of geometry and textures on a card with only a hundred or so megs of memory. (That means you have to send loads of data over the bus 100 times each second.)
The powe
Re:3D graphics cards aren't relevant (Score:2, Informative)
http://www.pinnaclesys.com/ProductPage_n.asp?Prod
for details
Re:3D graphics cards aren't relevant (Score:2, Interesting)
- 32 bit per component precision in the pixel shader
- dynamic branches in the pixel and vertex shaders
- unlimited instruction count for pixel and vertex shader programs
These features are very useful: they make it possible to render frames on the GPU at full quality. Considering that GPUs have a HUGE amount of processing power, this will make rendering much faster.
For example, the above-mentioned new GPUs from NVid
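To make the precision point concrete, here's a toy C++ sketch (my own illustration, not from the post or from Gelato): accumulate many small lighting contributions in an 8-bit fixed-point buffer and in a 32-bit float buffer. The 8-bit path rounds every contribution away; the float path keeps them all.

    // Toy example: why 32-bit-per-component precision matters.
    #include <cstdio>

    int main() {
        const int kSamples = 256;
        const float kContribution = 0.0015f; // each pass adds a tiny amount of light

        unsigned char fixed8 = 0; // 8-bit framebuffer: quantized to 1/255 steps
        float fp32 = 0.0f;        // float framebuffer
        for (int i = 0; i < kSamples; ++i) {
            int step = (int)(kContribution * 255.0f + 0.5f); // rounds to 0!
            fixed8 = (unsigned char)(fixed8 + step);
            fp32 += kContribution;
        }
        printf("8-bit: %d/255   float: %f\n", fixed8, fp32);
        // 8-bit: 0/255 (every contribution lost), float: ~0.384
        return 0;
    }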
Rendering artefacts between cards? (Score:5, Interesting)
So they wrote an engine to do RenderMan->OpenGL and ran it across many boxes.
The problem was that they got random rendering artefacts when rendering on different cards - different colors etc. - and couldn't figure out why.
When working on one box they got consistent results, but only had the power of one renderer.
Re:Rendering artefacts between cards? (Score:4, Interesting)
I have seen this problem in software renderers as well. The problem seemed to be that part of the rendering farm was running on different processors (some were Intel, some AMD, and many different speeds and revs), and one of them supposedly had a little difficulty with high-precision floating point, so it computed the images with a greenish tone. It took over a week to figure this one out.
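For anyone wondering how different processors can legally disagree: float arithmetic isn't associative, and different compilers/FPUs may reorder operations or use different intermediate precision (x87 80-bit vs. SSE 32-bit, for instance). A minimal C++ demonstration (my own example, not the poster's renderer):

    #include <cstdio>

    int main() {
        // Same three numbers, two summation orders:
        float a = 1e8f, b = -1e8f, c = 1.0f;
        printf("(a+b)+c = %f\n", (a + b) + c); // 1.000000
        printf("a+(b+c) = %f\n", a + (b + c)); // 0.000000 - c is absorbed by b
        return 0;
    }

Scale that kind of divergence across millions of shading calculations and a steep colour transform, and one slice of the farm can plausibly drift toward green.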
Re:Rendering artefacts between cards? (Score:5, Interesting)
Ah, those were the days. We were on a deadline and rendered it over Christmas. After four hours the disks would be full and it would be time to transfer it to the DPS-PVR. I spent six days where I couldn't leave the room for more than four hours, sleep included. It was pretty wild!
VH1 viewers voted it second-best CGI video of all time, behind Gabriel's Sledgehammer, so I guess it was worth it!
Solution: Use one model of card, one driver rev (Score:2)
Re:Solution: Use one model of card, one driver rev (Score:2)
Yes, but it's not a good idea to tie one's business to a particular hardware company, let alone one product and one driver.
Also, many of the people who make the money decisions will balk at making particular changes
Re:Solution: Use one model of card, one driver rev (Score:2)
Re:Rendering artefacts between cards? (Score:2)
Fab for machinima (Score:5, Informative)
Re:Fab for machinima (Score:3, Insightful)
The opening quote from the article poster is ignorant. Movie rendering has been done in hardware forever. He seems to be mixing up doing rendering in hardware with rendering on the fly in a video card.
What we have here is a slight mix of the two, but by no means anything new on the market. It's only letting you use your Quadro, if you already have one, for movie rendering acceleration. I certainly would not buy one for this purpose. I imagine it's still in
Re:Fab for machinima (Score:3, Flamebait)
You're the ignorant one. Movie rendering has been done on the CPU forever. This, for the first time, is doing final movie rendering on the GPU.
This is definitely something new on the market. Point me to another product that does final movie rendering with hardware acceleration provided by the GPU, and I'll eat my hat.
I imagine it's still incredibly more profitable to use a CPU than GPU.
Why? Because it's faster? Bzzzt. That's the whole point. Take a look at the transistor counts for the latest
Eat some gelato (Score:2, Informative)
Quick question... (Score:4, Funny)
Well, since they released "a C++ interface for plugins and integration" for Gelato (Italian for ice cream, btw), this probably means that free software can (and, eventually, will) support all these high-end functions... or am I completely wrong?
For instance, just imagine Blender with a Gelato plug-in for rendering... hmmmm... Now I understand why they named it "Gelato"...
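For the sceptics: a plugin hookup could be as simple as an abstract interface that a free front end drives. The sketch below is purely hypothetical - these names are invented for illustration and are NOT Gelato's actual API:

    // Hypothetical renderer interface (invented names, not Gelato's API).
    class Renderer {
    public:
        virtual ~Renderer() {}
        virtual void camera(float fov, int xres, int yres) = 0;
        virtual void mesh(int nverts, const float* positions) = 0;
        virtual void shader(const char* name) = 0;
        virtual void render(const char* outputFile) = 0;
    };

    // A Blender-style exporter would only talk to the interface; the
    // GPU-accelerated implementation lives behind a dlopen()ed plugin.
    extern "C" Renderer* createRenderer();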
Don't you mean.. (Score:2)
the problem is in the Bus (Score:5, Insightful)
Re:the problem is in the Bus (Score:2)
Re:the problem is in the Bus (Score:5, Insightful)
The purpose might mostly be to show people why they need to run out and get PCI Express hardware; it completely addresses the asymmetry issue.
I'm guessing the main reason Gelato is spec'd to work on their current AGP kit is to encourage the appearance of really impressive benchmarks showing how much better performance is with PCI Express. They have a good idea, and they're rolling it out at a good time, I think.
Some folks were trying to do stuff like this with PS2 game consoles, but I guess now they'll have more accessible toys to play with.
Re:the problem is in the Bus (Score:2, Insightful)
Only the second-generation PCI Express cards from nVidia will be native solutions, and those will use the bridge the other way around (to use a PCI Express chip inside an AGP system).
Re:the problem is in the Bus (Score:2)
Re:the problem is in the Bus (Score:2)
However, if the GPU can be left to crunch for most of the time and return, say, a row of pixels at a time, I doubt the 1/10th speed of the AGP bus downstream would be a big problem. For complex scenes the input (textures, geometry, shaders) may well exceed the output (pixels) in terms of data.
Re:the problem is in the Bus (Score:2)
Not an issue, esp. for non-RT rendering (Score:5, Insightful)
For 3D rendering, especially non-realtime cinematic rendering, you have large source datasets - LOTS of geometry, huge textures, complex shaders - but a relatively small result. You also generally take long enough to render (seconds or even minutes, rather than fractions of a second) that the readback speed is not so much an issue.
Upload to the card is plenty fast enough (theoretical 2 GB/s, but achieved bandwidth is usually a lot less) to feed it the source data, if you're doing something intensive like global illumination (which will take a lot more time to render than the upload time). Readback speed (around 150 MB/s) is indeed a lot slower, but when your result is only e.g. 2048x1536x64 (FP16 OpenEXR format, 24 MB per image), you can typically read that back in 1/6 of a second. Not to say PCIe won't help, of course, in both cases.
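The arithmetic, as a quick C++ sanity check (a sketch using the figures quoted above, not measured numbers):

    #include <cstdio>

    int main() {
        const double pixels = 2048.0 * 1536.0;
        const double bytesPerPixel = 64.0 / 8.0;  // FP16 RGBA = 64 bits/pixel
        const double imageMB = pixels * bytesPerPixel / (1024 * 1024);
        const double readbackMBps = 150.0;        // typical AGP readback rate
        printf("image: %.0f MB, readback: %.2f s\n",
               imageMB, imageMB / readbackMBps);
        // -> image: 24 MB, readback: 0.16 s (about 1/6 of a second)
        return 0;
    }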
Readback is more of an issue if you can't do a required processing stage on the GPU and have to retrieve the partially-complete image, work on it, then send it back for more GPU processing, etc. But with fairly generalised 32-bit float processing you can usually get away with just using a different algorithm, even if it's less efficient, and keep it on the card.
Another issue might be running out of onboard RAM, but in most cases you can just dump source data instead & upload it again later.
Re:the problem is in the Bus (Score:2)
Fast enough for what exactly? What purpose does that defeat?
Compared to the cost of (eg) sending a frame across a LAN, the time taken to pull a frame across the bus would be utterly, utterly insignificant.
Teh horror !!! (Score:5, Insightful)
A company that wants to be paid for their work - weird!
You will see more, a lot more, of this for the Linux platform in the near future.
Software may be released with source code, but there's no way it will be released under the GPL; most ISVs can't make a living releasing their work under the GPL.
And please, the "but you can provide consulting services" argument is not valid; it doesn't work that way in the real world.
Re:Teh horror !!! (Score:2, Insightful)
That just saved me the trouble of clicking the link and checking it out.
PS: I recently visited a project trying to "harness the power of GPUs". I think that project was something like seti/folding/ud/... but it tried to have all the calculations done by the GPU of your 3D card.
If someone knows what I'm talking about, please post it; I can't find it anymore.
Re:Teh horror !!! (Score:3, Interesting)
I've toyed with shaders some and implemented a system for image processing on GPUs. Quite a lot of fun really, though we didn't do any comparisons with the CPU to see how much faster it was. (That project isn't published anywhere, though.)
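The reason image processing maps so well: each output pixel depends only on its own neighbourhood, which is exactly the per-fragment model GPUs execute in parallel. A CPU reference in C++ (my own toy, not the system mentioned above) - a fragment program would compute the body of the inner loop once per pixel:

    #include <vector>

    // 3x3 box blur: every dst pixel is independent -> trivially parallel.
    std::vector<float> blur3x3(const std::vector<float>& src, int w, int h) {
        std::vector<float> dst(w * h, 0.0f);
        for (int y = 1; y < h - 1; ++y)
            for (int x = 1; x < w - 1; ++x) {
                float sum = 0.0f;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                        sum += src[(y + dy) * w + (x + dx)];
                dst[y * w + x] = sum / 9.0f; // one "fragment" result
            }
        return dst;
    }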
Re:Teh horror !!! (Score:2)
Have you donated to SlashDot? (Score:2)
If you haven't contributed financially to this blog then you're a serious hypocrite - aren't you? (-:<
Free-as-in-beer isn't absolutely necessary, but it does solve an awful lot of "parking meter change" style problems, and it's a highly viable services leader.
Linus gave Linux away, Nvidia benefitted. (Score:3, Insightful)
So why should Nvidia benefit from Linux without some reciprocal giving? Hardware programming specs would be enough of a gift.
BURN!!!!!! (Score:2)
It would be really cool to have a hardware solution with Combustion-like features for the price of After Effects.
Re:BURN!!!!!! (Score:3, Interesting)
Could make a nice addition to GIMP (if there isn't one already).
'pricey' (Score:5, Insightful)
I bet the type of people who buy this are big-time architects who have a few machines set up to do renders for clients, want to do some additional effects for promo/confidence value, and likely already have people running that type of hardware.
Then again, all those Quadro users could be CAD people, and then they've got no audience. =)
Re:'pricey' (Score:2)
This product is competing against other rendering engines like MentalRay, Vray, Renderman, etc. And at $2750 per license it's definitely not for smaller developers or architects. There are plenty of other rendering engines out there that are significantly cheaper and don't require a video card that costs as much as an entire x86 render node.
Re:'pricey' - but worth it? (Score:3, Informative)
I think the point is not that it can render just like other engines, but that it can do so at a far greater speed (with a lot more flexibility and features than the PURE card). That would indeed be worth the money to all but the smallest studios - much faster feedback at full quality is an artist's dream, quite apart from the (more expensive) option of using it to accelerat
Re:'pricey' - but worth it? (Score:2)
Then you naturally have to build a Beowulf cluster of those.
You mean PCI Express? (Score:2)
PCI Express is a possibility, if you can find a motherboard with more than one x16 slot (or a GPU that fits an ordinary x1 slot). Doubt you will for some time, though.
And you wouldn't have to use top-of-the-line cards, either. Something mid-range, with more bang for the buck would do fine.
But yeah, one day. Great to see how the demand for better games has resulted in cool hw/sw like this flowing on to my own industry :-) Now we just need to figure out how th
Re:'pricey' (Score:3, Interesting)
No, you can get raytracing hardware for less than the software and a Quadro FX would cost you.
For example, there's the ART PURE P1800 card [artvps.com], which is a purpose-built raytracing accelerator. It's a mature product with an excellent feature set, speaks RenderMan and has good integration into all the usual 3D packages. It's generally acknowledged as a very fast piece of kit with excellent image quality, and plenty of quality/speed trade-off options. And if you've a
Re:'pricey' (Score:2)
Not just CAD - I do server-side Java programming, and we've all recently been bought new PCs. The spec we went for included a Quadro FX 500; don't ask me why, it just did... (it was that, or a similar machine with a GeForce - I didn't make the choice)
General computing on graphics hardware (Score:3, Informative)
This might also be interesting: GPGPU [gpgpu.org] /Arvid
Re:General computing on graphics hardware (Score:2)
Linux software (Score:5, Informative)
Take a look at the Jahshaka [jahshaka.com] project. It is a real-time video editing suite, and the designers have been working with, and have supposedly been getting support from, Nvidia - so they may already have had access, and I would imagine certainly will have access, to these video cards. I can't imagine them not taking advantage of this technology.
The other nice thing is that, if memory serves me correctly, this program is being designed to work on Windows, Linux and OS X, so good news all around.
Using GPU for signal processing (Score:5, Informative)
A quick Googling revealed the following:
- BrookGPU [stanford.edu]
- GPGPU [gpgpu.org]
Re:Using GPU for signal processing (Score:2)
ExLuna, take 2 :) (Score:2, Interesting)
M4 open GL VJtool. (Score:3, Interesting)
Free software is a product of a lifecycle (Score:4, Insightful)
"Moore's Law" is simply the application of this general law to hardware. But it applies also to software.
Free software is an expression of this cycle: at the point where the individual price paid by a group of developers to collaborate on a project falls below some amount (which is some function of a commercial license cost), they will naturally tend to produce a free version.
This is my theory, anyhow.
We can use this theory to predict where and how free software will be developed: there must be a market (i.e. enough developers who need it to also make it) and the technology required to build it must be itself very cheap (what I'd call 'zero-price').
History is full of examples of this: every large scale free software domain is backed by technologies and tools that themselves have fallen into the zero-price domain.
Thus we can ask: what technology is needed to build products like Gelato, and how close is this coming to the zero-price domain?
Incidentally, a corollary of this theory is that _all_ software domains will eventually fall into the zero-price domain.
And a second corollary is that this process can be mapped and predicted to some extent.
Re:Free software is a product of a lifecycle (Score:2)
Since these underlying technologies have been zero-priced since the 1980s (mainly thanks to Unix), the OS as a technology has indeed fallen into the zero-price domain as well.
In other words: a small team can today build a product that competes fairly well with Windows, using off-the
GPU as 2nd processor (slightly offtopic) (Score:3, Interesting)
I expect that once it suddenly becomes clear that the GPU in a modern video card has serious processing power, someone will release a version of the SETI@Home [berkeley.edu] client that can use the rendering engine as a processor. Bearing in mind that most computers use their GPUs for a very small percentage of their logged-in life, I suspect there is real potential for using them for analysis in distributed computing projects.
Absolutely (Score:2, Informative)
Check out www.jahshaka.com. It's an open source video compositing / FX package that leverages the 3D accelerator chip on your graphics card to do incredible things. This is one to watch, it's definitely going places.
You can download binaries for Linux and Windows (and Mac), and source tarballs are available for the savvy.
I know, it's not strictly a "renderer", but it employs many of the functions of a renderer to create realtime effects and transitions.
Little value... (Score:3, Insightful)
Renderman is proven technology and has been so since the early '90s. Renderman is well known, its results are predictable and it is a fast renderer. Also, current production pipelines are optimised for Renderman.
UNIX and Linux are quite good when it comes to distributed environments (can anyone say Render Farm?) and handle large file sizes well (think a 2K by 2K image file, large RIB files).
And last but not least, renderman is available with a source code license.
Hardware-accelerated film rendering is in essence nothing but processor operations, some memory to hold objects and some I/O stuff to get the source files and output the film images. Please explain to me why a dedicated rendering device from NVidia would be any better than your average UNIX or Linux machine? Correct: there aren't any advantages, only disadvantages (more expensive, proprietary hardware, unproven, etc.).
Re:Little value... (Score:2)
Why do you think 3D hardware exists at all, when all it's doing is a load of integer maths? Surely a Linux machine is capable of adding numbers together, right? Obviously, dedicated hardware is faster, sh*tloads faster. Durr. The benefits to the artist's equivalent of the compile-edit-debug cycle are fairly obvious here, and worth rejigging the production pipeline to accommodate.
You're kidding, right? (Score:4, Informative)
PRMan is a fine product, but it has its limitations, as well as its price. There are numerous competitors, many of which use the same Renderman interface but offer more speed and/or more features at a lower price (BMRT and Entropy are[were] notable, and relevant, until Pixar squashed them with the threat of an expensive court case). Brazil, AIR, etc - these RIB-based renderers drop into the same place in the workflow.
Please explain to me why a dedicated rendering device from NVidia would be any better than your average UNIX or Linux machine?
Only if you explain why your average UNIX or Linux machine is better than a Commodore 64 or a PDA, which is also "in essence nothing but processor operations" etc :-) If you listed SPEED in there, you're on the right track.
A modern GPU has far more floating-point hardware than any general-purpose CPU, and it's all geared towards the process of rendering pixels. For certain tasks, one of those expensive dedicated rendering devices from nVidia could be better than FIFTY of your "average" UNIX or Linux machines! Is that enough of an advantage to consider?
Dang, I went and fed the troll, didn't I...
Re:Little value... (Score:4, Informative)
And, apparently, orders of magnitude faster.
Personally, I'd put that rather firmly into the advantage column, and for a number of reasons. You could either render your movie with a smaller farm (always a plus) or render even more complex scenes in the same time period - which is probably what most people would use this technology for. On the commentary track of Monsters Inc, the guys from Pixar note that despite having MUCH faster hardware (and a lot more of it), the average time to render a single frame of Monsters Inc was just as long as for a single frame of Toy Story. Why? Because the frames were FAR more complex.
I think this is a Good Thing(tm) at least for the people who have the imagination to use it.
Re:Little value... (Score:2)
2) Renderman is by no means alone in the FX world; MentalRay and Brazil most immediately come to mind. 3) As to UNIX and Linux being quite good in a distributed environment... well, duh... so are Windows and Mac boxes. As to file sizes... with the limit of 4GB on 32-bit boxes, that's a pretty damned big file, and that's the active-in-memory size limit. As for files on disk, the limit is more in terabytes. As to render farms, almost every renderer under the sun handles this the same
Re:Little value... (Score:2)
Free software ready indeed! (Score:5, Informative)
Re: (Score:3, Interesting)
Re:Free software ready indeed! (Score:2)
And the major studios want: speed, quality and a good clean API (so they can add their own stuff too).
Re:Free software ready indeed! (Score:2)
In general, you're right - nobody does ray tracing for final renders, but Blue Sky is the exception that proves the rule.
math coprocessor (Score:3, Insightful)
Does this mean that I could eventually use my GeForce to do things like matrix inversion for me?
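In principle, yes - anything where every output element can be computed independently maps well. Full matrix inversion is awkward, but iterative linear algebra fits nicely. A hedged sketch (CPU reference code, my own example, not anything shipping for GeForce today): one Jacobi step for solving Ax = b, where each x[i] update is independent and so could be one texel of a full-screen render pass:

    #include <vector>

    // One Jacobi iteration for Ax = b. On a GPU, x would live in a texture
    // and this whole loop becomes a single fragment-program pass.
    void jacobiStep(const std::vector<std::vector<float> >& A,
                    const std::vector<float>& b,
                    const std::vector<float>& xOld,
                    std::vector<float>& xNew) {
        const int n = (int)b.size();
        for (int i = 0; i < n; ++i) {    // each i is independent -> parallel
            float s = b[i];
            for (int j = 0; j < n; ++j)
                if (j != i) s -= A[i][j] * xOld[j];
            xNew[i] = s / A[i][i];
        }
    }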
Re: (Score:3, Informative)
round and round we go (Score:2, Informative)
Video Cards as Renderers (Score:4, Interesting)
Re:Video Cards as Renderers (Score:2, Informative)
Yes, but it's not a problem (Score:2)
General computing tasks, yes, AGP and GPUs are not so useful. For rendering though, they're near-ideal, even with AGP (though PCI Express would certainly be better).
Heeeey.. that dino looks familiar... (Score:2, Interesting)
But ATI's solution is free (Score:2, Informative)
Re:But ATI's solution is free (Score:3, Interesting)
ASHLI is *not* a renderer. It isn't anywhere near doing what Gelato does. Gelato takes a scene file and gives you a picture. It does it very nicely, using motion blur, programmable shading, and all sorts of fun stuff like that. It is written by the ex-Exluna boys (Larry Gritz, Matt Pharr, Craig Kolb -- three mofos who know their shizzle).
ASHLI takes a rende
Gelato is Software (Score:2)
Wow, it's only 2004 (Score:2)
Crypto Tool? (Score:2)
Key to this doctrine of no compromises is the nature of how Gelato uses the NVIDIA Quadro FX GPU. Instead of just using the native 3D engine in the GPU, as done in games, Gelato also uses the hardware as a second floating point processor.
WOW. That would be a fast FPU, I'm supposing. How fast can it sieve?
Pan
Sounds like an old (Lucasfilm?) Siggraph paper. (Score:3, Insightful)
That sounds like an old Siggraph presentation I saw a decade or two ago, when I used to go to Siggraph. Lucasfilm, I think. (The fine sample picture in the article showed a motion-blurred image of a set of pool balls in motion.)
When rendering an image using raytracing, there are several effects that are achieved by similar over-rendering processes. I.e. you ray-trace several times, varying a parameter:
- Depth-of-field (use different points on the iris of the "camera", blurring things at different distances from the "focal plane".)
- Diffuse shadows (use different points on the diffuse light source(s) when computing the illumination of a point.)
- Motion blur (use different positions for the objects and "camera", evenly {or randomly} distributed along their paths during the "exposure" - ideally pick the positions of the whole set of objects by picking several intermediate times, rather than picking the position of each object separately, to avoid artifacts of improper position combinations.)
- Anti-aliased edges. (Pick different points in the pixel when computing whether you hit or missed the object or which color patch of its texture you hit.)
As I recall there were about five effects that worked similarly, but I don't recall the other(s?) just now.
To do any one of them requires rendering the frame N times {for some N} with the parameter varied, then averaging the frames. (Eight times might be typical.) Naively, to do them all would require N**5 renderings - 32,768 raytracings of the frame to do all five.
The insight was to realize that the effects could be computed SIMULTANEOUSLY. Pseudorandomly pick one of the N from each effect's set for each frame and only render N frames, rather than N**5. Eight is a LOT smaller than 32K. B-)
Sounds like Nvidia ported this hack to the firmware for their accelerator.
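The trick in C++ (a sketch of the idea described above, with invented names - see the distributed ray tracing literature for the real thing): draw one jittered value per effect per sample, so N samples cover all the effects at once instead of N renders per effect.

    #include <cstdlib>

    float rnd() { return rand() / (float)RAND_MAX; } // stand-in jitter source

    // Stub: a full ray trace using the given stochastic parameters.
    float traceRay(float px, float py,  // subpixel position (anti-aliasing)
                   float lx, float ly,  // lens position (depth of field)
                   float sx, float sy,  // light sample (diffuse shadows)
                   float time) {        // shutter time (motion blur)
        return 0.5f;                    // a real tracer goes here
    }

    float renderPixel(int x, int y, int nSamples) {
        float sum = 0.0f;
        for (int s = 0; s < nSamples; ++s)
            // All the effects vary SIMULTANEOUSLY in one sample:
            sum += traceRay(x + rnd(), y + rnd(),
                            rnd(), rnd(),
                            rnd(), rnd(),
                            rnd());
        return sum / nSamples;          // N samples, not N**5 renders
    }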
Re:This would be more useful (Score:5, Informative)
Re:This would be more useful (Score:4, Informative)
Maya, Houdini and XSI are all available for Linux, and they work well.
Re:This would be more useful (Score:2)
Re:This would be more useful (Score:2, Informative)