Rendering Shrek@Home?

JimCricket writes "There's an interesting piece at Download Aborted about using distributed computing (a la SETI@Home, Grid.org, etc.) in the film industry. With the recent release of Shrek 2, which required a massive amount of CPU time to complete, one must wonder why the film industry doesn't solicit help from their fans. I'd gladly trade some spare CPU time in exchange for the coolness of seeing a few frames of Shrek 3 rendered on my screensaver!"
This discussion has been archived. No new comments can be posted.

  • by jbellis ( 142590 ) * <jonathan@carDEBI ... com minus distro> on Wednesday May 26, 2004 @01:13PM (#9260124) Homepage
    Easy: Pixar and DreamWorks have both developed highly proprietary rendering technology. They're not about to just give copies to everyone who wants one. Even if the renderer itself weren't reverse-engineered, which isn't beyond the realm of possibility, it would likely be far easier to decipher the protocol used, and voila, a functioning copy of [Pixar|DreamWorks]'s renderer.

    Lobotomizing it to the point where this wouldn't be useful would probably make it useless for distributing the workload as well.
  • by Picass0 ( 147474 ) on Wednesday May 26, 2004 @01:23PM (#9260253) Homepage Journal

    Both studios are using RenderMan-compliant renderers, so that's not the issue.

    And there's no reason that any one machine has to render an entire image file. You could have any node render N scanlines and send the packet back home.
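
    The work-unit bookkeeping could be as simple as this rough Python sketch (hypothetical function names, purely illustrative -- not anyone's actual protocol):

      # Hypothetical scanline-based work units; not any studio's real protocol.
      def scanline_chunks(frame_height, lines_per_unit):
          """Yield (first_line, last_line) bands that together cover the frame."""
          for start in range(0, frame_height, lines_per_unit):
              yield start, min(start + lines_per_unit, frame_height) - 1

      def assemble(frame_height, results):
          """results: dict mapping (first, last) -> list of rendered rows."""
          frame = [None] * frame_height
          for (first, last), rows in results.items():
              frame[first:last + 1] = rows
          assert all(row is not None for row in frame), "missing work units"
          return frame

      # e.g. a 1080-line frame split into 27 bands of 40 scanlines each
      units = list(scanline_chunks(1080, 40))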

    The risk would be someone running a port monitor on the return address, and re-assembling digital image files.
  • by vasqzr ( 619165 ) <vasqzr@noSpaM.netscape.net> on Wednesday May 26, 2004 @01:24PM (#9260267)

    There'd be no sound.

    I'm sure people would sit through it anyway, though.
  • by Obasan ( 28761 ) on Wednesday May 26, 2004 @01:28PM (#9260315)
    Only the most high-end machines could even consider attempting to render even one layer of a frame for this kind of animation. We're talking systems with 2-4 GB of RAM as a minimum (preferably 4+), and the scene files/textures that must be downloaded for each scene would weigh in at tens to thousands of megabytes. Think uncompressed TIFF or TARGA texture files that might be 5000x5000 at 40 bits/pixel.
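
    Just the texture math alone makes the point (a rough Python estimate, assuming the 40 bits/pixel figure and no compression):

      # Back-of-the-envelope size of one uncompressed 5000x5000 texture at 40 bits/pixel.
      pixels = 5000 * 5000
      bits_per_pixel = 40
      megabytes = pixels * bits_per_pixel / 8 / 1e6
      print(f"~{megabytes:.0f} MB per texture")  # ~125 MB, and a scene can reference dozens of these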

    Even on high end machines they often do not render a full frame, but a layer of a frame which is then composited with other layers into the full frame. Why? Many reasons but one of them is that even the high end machines don't have enough RAM and the render would take too long (the machine would need to swap).

    So aside from the issues of fans returning bogus data, or extracting highly proprietary information out of the client as other threads have mentioned, this would be a real show stopper. Breaking the problem into small enough pieces to be handled by joe-blow's computer would be prohibitive and require tons of calculations to figure out which pieces of textures are actually required for a given piece of rendering etc. It would probably require a compute farm just to manage it!

    Rendering is also a lot more complex than you might think. There are render wranglers who manage the rendering queues and look at the outputs... many renders may require specific versions of the rendering software, so a frame that rendered fine with 2.7.2.1 won't render without errors under 2.7.2.2... so many copies of the software are managed in parallel, with the wranglers helping to clean up the errors. How would you manage this in a distributed client environment?

    Furthermore, most of the proprietary rendering apps are certified against VERY specific platforms, e.g. one specific kernel version and build level, specific versions of shared libraries, etc.

    Long and short is there's a reason why movies cost millions. :)
  • by unteins ( 778119 ) on Wednesday May 26, 2004 @01:28PM (#9260320)
    Actually, from what I know of Pixar's renderer, it wouldn't be that difficult to do something like this. For starters, RenderMan can be purchased. Secondly, it uses technology that was formerly open source (Blue Moon Rendering Tools), so it isn't like it is totally proprietary. RIB files are pretty big, though, so the data would become a problem. I think you could send just deltas from the last frame rendered if you were tracking that; RIB is just a text file, so it wouldn't even be too hard.

    RenderMan, if I recall correctly, is a bucket renderer, which means that each frame is subdivided into many subframes which are rendered and then assembled. It would be possible to send only the subframes to the distributed network and do the frame assembly back at the studio. This would mean your machine might render Buzz Lightyear's elbow, but you're not going to get to see a whole lot of the scene. Trying to hunt down all the little chunks of one frame and then assemble the frames into movies would be even more difficult for a pirate.

    Now, the shaders that Pixar uses might be a bit of a problem for them to release, but then again, by the time the movie goes into final rendering, the technology in the film is a few years old, so it isn't like they'd lose a lot of ground. Besides, a lot of these techniques are either implementations of SIGGRAPH papers or are presented in papers at SIGGRAPH after they are created.

    I think the only MAJOR concern is tampering with the output. I don't think there is any way to safeguard that (you could encrypt it, but that still leaves plenty of holes in the system; you could always hack the output buffer in memory, etc.). The main problem, of course, being that the only way to see whether a frame wasn't tampered with would be to compare it to a render of the frame... and well, then what is the point...
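
    On the delta idea: since RIB is plain text, the transfer could be as simple as shipping a unified diff between consecutive frames (a sketch with hypothetical file names; real frame archives would obviously be far larger):

      import difflib

      # Hypothetical per-frame RIB files; RIB is plain text, so a standard text diff works.
      with open("frame_0100.rib") as f:
          prev_frame = f.readlines()
      with open("frame_0101.rib") as f:
          next_frame = f.readlines()

      # Only the lines that changed between frames would need to go over the wire.
      delta = list(difflib.unified_diff(prev_frame, next_frame,
                                        fromfile="frame_0100.rib",
                                        tofile="frame_0101.rib"))
      print(f"{len(delta)} diff lines instead of {len(next_frame)} total lines")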
  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Wednesday May 26, 2004 @01:32PM (#9260359) Homepage Journal
    At Pixar, distributed rendering, even within the same building, was sometimes I/O bound rather than compute-bound. The problem is that high-quality rendering requires a lot of textures, some of which are algorithmic (a function in shading language) and some photographic (a big image file, probably not compressed because compression would add artifacts). So, you have to ship a lot of files around to render a scene. Photographic textures may be large because of the scale problem. They may have to look right through a 100:1 zoom, as objects move and the camera viewpoint changes.

    This is not just tracing all of the rays in the scene.

    Bruce

  • by Anonymous Coward on Wednesday May 26, 2004 @01:41PM (#9260443)
    I believe you should take a look at the real figures before you compare computing resources.

    http://setiathome2.ssl.berkeley.edu/totals.html

    SETI@Home averaged 72.27 TeraFLOPs/sec over the last 24 hours. I check often, and a 55-70 range is normal.

    And these are the top 5 'supercomputers' in the world:

    http://www.top500.org/

    At:

    1) 35.87 TeraFlops
    2) 13.88
    3) 10.28
    4) 9.819
    5) 8.633

    Now SETI = 72 TFlops/sec

    For raw power, there isn't a server farm out there that can rival 5M+ users.
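
    The arithmetic, using the figures quoted above:

      # SETI@Home's quoted 24-hour average vs. the top five Top500 machines combined.
      seti_tflops = 72.27
      top5_tflops = [35.87, 13.88, 10.28, 9.819, 8.633]
      print(f"Top 5 combined: {sum(top5_tflops):.2f} TFlops")  # ~78.5 TFlops
      print(f"SETI@Home:      {seti_tflops:.2f} TFlops")       # roughly on par with all five together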
  • by 1984 ( 56406 ) on Wednesday May 26, 2004 @01:44PM (#9260467)
    They aren't "blow your mind" servers. Think PC-based hardware. A lot of servers, yes, but no special rocket science. The only high-end (ish) thing about the render clients is that they usually have plenty of RAM, from 1.5GB to 4GB each.

    The network, too, isn't going to be anything as exotic as 10Gb/s. In fact, the only single component that's really high-end is the storage -- a lot of data, and hundreds of clients accessing it simultaneously.

    I work at an effects shop not a million miles from Pixar, and our rendering is done on a few hundred Athlons, some dedicated and some user workstations. Pixar is much bigger, and they have much more horsepower, but it's not orders of magnitude stuff.

    I think SETI@Home is probably a long way ahead in raw aggregate CPU performance. Probably less far ahead in memory surface (but still ahead). But you couldn't use SETI@Home for a reason mentioned by another poster in this thread: bandwidth to storage. The render pipeline has a lot of I/O in it, and your distributed clients would be forever waiting to read or write from network-distant storage. Efficiency would suck, and reliability, too.

    Even if you could do it, you wouldn't for issues of information security (which someone else mentioned here, too.)
  • by NanoGator ( 522640 ) on Wednesday May 26, 2004 @01:47PM (#9260491) Homepage Journal
    "what's proposed here is just does not seem possible under low bandwidth conditions. it's not like you can just run off to computer #2,398 and say "go render frame 1,503" -- there are textures and models and state information that probably total somewhere on the order of gigabytes (give or take a factor of ten) in order to render that frame."

    I can give you a little data here. Take a look at this image [reflectionsoldiers.com] I made. The scene is roughly 1.5 million polygons, and virtually everything is textured. The folder containing the bare bones version of this scene is roughly 600 megabytes. I could probably cut that size in half via JPEG etc, but we're still looking at a massive amount of data to send to one person to render a frame. I know this because I seriously discussed sharing the rendering with a friend of mine on the east coast. We both felt it'd take longer to ship than it would to render.
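
    To put rough numbers on the shipping-vs-rendering trade-off (assuming a generous 1 Mbit/s home downlink, which was optimistic in 2004):

      # Rough transfer time for the ~600 MB scene described above.
      scene_megabytes = 600
      downlink_mbit_per_s = 1.0
      hours = scene_megabytes * 8 / downlink_mbit_per_s / 3600
      print(f"~{hours:.1f} hours just to download one scene")  # ~1.3 hours before rendering a single pixel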

    I doubt this scene is anything close to what they were doing in Shrek 2, let alone whatever will happen with 3.
  • Rendering time (Score:5, Informative)

    by Gitcho ( 761501 ) on Wednesday May 26, 2004 @01:56PM (#9260579)
    How would you ever reproduce this on a distributed network of limited-bandwidth home PCs? Here are some LOTR rendering stats from Wired.com - [http://www.wired.com/wired/archive/11.12/play.html?pg=2]

    ... The Return of the King, which opens in theaters December 17, will feature almost 50 percent more f/x shots than The Two Towers and will be composed of more data than the first two movies combined. Churning out scenes like the destruction of Barad-dûr and the Battle of Pelennor Fields (with thousands of bloodthirsty CG Orcs) took 3,200 processors running at teraflop speeds through 10-gig pipes - that's one epic renderwall. What else went into making Frodo's quest look so good? By Weta's account, more than you might think.

    WETA BY THE NUMBERS

    HUMANPOWER
    IT staff: 35
    Visual f/x staff: 420

    HARDWARE
    Equipment rooms: 5
    Desktop computers: 600
    Servers in renderwall: 1,600
    Processors (total): 3,200
    Processors added 10 weeks before movie wrapped: 1,000
    Time it took to get additional processors up and running: 2 weeks
    Network switches: 10
    Speed of network: 10 gigabits (100 times faster than most)
    Temperature of equipment rooms: 76 degrees Fahrenheit
    Weight of air conditioners needed to maintain that temperature: 1/2 ton

    STORAGE
    Disk: 60 terabytes
    Near online: 72 terabytes
    Digital backup tape: 0.5 petabyte (equal to 50,000 DVDs)

    OUTPUT
    Number of f/x shots: 1,400
    Minimum number of frames per shot: 240
    Average time to render one frame: 2 hours
    Longest time: 2 days
    Total screen time of f/x shots: 2 hours
    Total length of film: Rumored to be 3.5 hours
    Production time: 9 months
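
    A quick sanity check on what those numbers imply (a lower bound, since 240 is the minimum frames per shot):

      # Lower-bound render load implied by the Wired figures above.
      shots = 1400
      frames_per_shot = 240      # stated minimum
      hours_per_frame = 2        # stated average
      processors = 3200
      cpu_hours = shots * frames_per_shot * hours_per_frame
      print(f"{cpu_hours:,} CPU-hours")                              # 672,000 CPU-hours
      print(f"~{cpu_hours / processors / 24:.0f} days of wall clock "
            f"with all {processors} processors busy")                # ~9 days at 100% utilization, per pass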

  • by wurp ( 51446 ) on Wednesday May 26, 2004 @01:57PM (#9260590) Homepage
    in the details, although perhaps not in the final answer. I would be very surprised if the actual input data is anywhere near that huge - do you think someone (or some group of people) actually did enough work to generate that many bits (and that's not counting the order of magnitude greater work done on things that got thrown away)?

    What is much more likely is that the grass, skin, hair, etc. are described by some relatively simple input parameters from which millions of polygons are generated. The "rendering" process almost certainly includes generating the polygons from raw input data and seeded random number generators, through Perlin noise distribution routines, fractal instantiation, and spline generation, down to polys and finally a rendered frame as the final product.
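
    A toy illustration of that point -- a seed plus a handful of parameters standing in for megabytes of explicit geometry (purely illustrative, nothing like what a production tool actually does):

      import random

      def grass_patch(seed, blade_count=100_000):
          """Expand a tiny description (one seed, one count) into a large pile of vertices."""
          rng = random.Random(seed)  # same seed => same geometry on every machine
          blades = []
          for _ in range(blade_count):
              x, z = rng.uniform(0, 10), rng.uniform(0, 10)
              height = rng.gauss(0.3, 0.05)
              bend = rng.uniform(-0.1, 0.1)
              # three points per blade: root, middle, tip
              blades.append([(x, 0.0, z),
                             (x + bend / 2, height / 2, z),
                             (x + bend, height, z)])
          return blades

      patch = grass_patch(seed=42)
      print(f"{len(patch) * 3:,} vertices from a few bytes of input")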

    However, much of that work would only have to be done once, then shots taken from different angles on the resulting textured polygon structure, whereas on a distributed architecture any info that isn't sent to your machine would have to be regenerated for your machine. Not to mention that memory requirements are likely to be pretty darn high.
  • Frame rendering? (Score:2, Informative)

    by monkeyhitman ( 783024 ) on Wednesday May 26, 2004 @02:07PM (#9260683)
    When making CG films like Shrek (or any CG done in great detail), each scene is not rendered wholesale, but rather done in layers. So, the background might be rendered by one part of the farm while another part renders lighting, etc., or the layers are rendered in different sessions. This adds another issue to the problems with distributed movie rendering -- compositing. A compositor needs all the different layers from a particular scene to tweak and play with, of course, so the compositor would have to wait for the animators to finish a scene (never mind the data needed for rendering), send out all the work units needed to render all the layers, then wait for alllllll the WUs to complete, quality-check all the WUs, and then start composition work. And then it's entirely possible that a compositor might ask for a layer to be re-rendered because it just didn't work out and changes were needed to make the scene look better. Rendering is NOT the final step in complex CG animation, so even if distributed computing could somehow work, it would only hurt the production team.
  • by Naffer ( 720686 ) on Wednesday May 26, 2004 @02:20PM (#9260859) Journal
    I'm pretty sure Appleseed is already out in theatres in Japan. We're just waiting for it to get released on DVD so it can be fansubbed. Of course, it would be really nice of 'em to release the JP DVD with an English subtitle track so people could buy the original.
    By the way, here's the trailer for Appleseed. Quite a beautiful piece of CG animation to behold. apple.co.jp trailer [apple.co.jp]
  • Sounds like imp.org (Score:5, Informative)

    by ron_ivi ( 607351 ) <sdotno@cheapcomp ... m ['ces' in gap]> on Wednesday May 26, 2004 @02:23PM (#9260895)
    The Internet Movie Project [imp.org] has its renderfarm software on sourceforge [sourceforge.net]

    My big question is: why would you rather donate to a large commercial organization, well funded from its previous Shrek flick, than donate the cycles to a project like IMP itself?

  • by Whalou ( 721698 ) on Wednesday May 26, 2004 @02:43PM (#9261068)
    RenderMan is not a renderer; it is a specification for interoperability between modeling tools and renderers (like XMI for software engineering tools, except that it works).

    Pixar's renderer is actually PRMan.

    From Renderman.org [renderman.org]:
    When you hear a lot of people talking about RenderMan and how great the images from it are, they are most likely really talking about Pixar's PhotoRealistic RenderMan® (PRMan).
    RenderMan is actually a technical specification for interfacing between modeling and rendering programs. From 1998 until 2000 the published RenderMan Interface Specification was known as Version 3.1. In 2000 Pixar published a new specification, Version 3.2. Version 3.3 is coming soon.
  • by tolldog ( 1571 ) on Wednesday May 26, 2004 @02:53PM (#9261163) Homepage Journal
    From first hand experience... this won't happen, not for a long long time, if at all.

    We used thousands of processors to render. We had terabytes of storage. It is a large undertaking. Every single frame and element of the frame had to be tracked. It had to be qualified. If something didn't work, we had to diagnose the system and get it back up and running. The budget and coordination required are simply too large for a home-brew system to work.

    With other distributed systems, there are some checks and balances on the data returned, a way to know whether you are getting back reasonably good data. The only way you can tell with this is to visually inspect the end result. If a person has a system that returns a bad slice of a frame, you now have to recreate that slice and track it, because it's possible the problem is in the code, in the data files, or was a one-time glitch with the system. Not a fun thing to do for hundreds of remote systems that aren't similar.

    Render time also varies. It can be 5 minutes to 12+ hours. If a job gets halted, you lose that data and have to recreate it. This isn't like generating millions of keys, where there's barely a second of init time before the client starts turning out data. At a previous studio, we had scene load times of over 30 minutes before rendering even started. That needs to be accounted for in how you split up frames. If you have 30 minutes to load (after 45 minutes to download the data) and only render an hour's worth, you are taking a heavy hit on overhead.
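
    Putting numbers on that overhead, using the figures above:

      # Fraction of a volunteer's time that actually renders pixels in the example above.
      download_min, load_min, render_min = 45, 30, 60
      useful = render_min / (download_min + load_min + render_min)
      print(f"Only {useful:.0%} of the time goes to rendering")  # ~44%, before any re-runs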

    There are just too many issues with this working in a current setup. Stick to crunching numbers.

    -Tim
  • Too much speculation (Score:2, Informative)

    by technoviper ( 595945 ) <technoviperx&yahoo,com> on Wednesday May 26, 2004 @03:02PM (#9261245)
    I work in a visual effects/video production shop as a system administrator, so I know firsthand how much data is required to do a simple animated spot or visual effects shot. An average 30-second spot for TV (which is about 1/4th the resolution of film) takes around 200 to 400 GB of data. Our network is all GigE based, with terabytes of Fibre Channel storage. Even with all the power available to us (Mac G5s and Intel Xeon-based rendering systems), it takes forever to pump out the frames for a spot. In short, there's no way people can crunch this kind of workload without having to download gigs of information first. With tight project deadlines and tons of protected intellectual property at stake, this kind of work is best kept in-house. You can check out our web site here [d-kitchen.com]
  • by anti_analog ( 305106 ) on Wednesday May 26, 2004 @03:11PM (#9261321)
    I believe I saw someone earlier mention how there can be terabytes of data going into a single frame of CGI film, and these days that can be pretty accurate.

    A .rib file or similar type of file for PDI's renderer will probably contain a few million polygons and/or a few hundred thousand control vertices for higher-order surfaces such as NURBS and sub-Ds, which can be a lot of data (my scene files at work average 4-5 million polygons and are about 150 megs on average, saved in a binary file format). And that doesn't include particles, procedurals, or all the motion data so that proper motion blur can be calculated...

    And then the textures... They do use lots of procedurals, but they also use lots of 16-bit-per-channel textures of 4000x4000 for faces, or even higher. Some people are now using tiles of 16-bit TIFFs for displacement maps that equate to something like a 100,000x100,000 image, because the accuracy requirements for close-up renders are so bloody high. That can be many, many gigs of data right there.
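
    For a sense of scale on those tiled displacement maps (a rough estimate: one channel, 16 bits per sample, uncompressed):

      # Rough size of a 100,000 x 100,000, 16-bit, single-channel displacement map.
      side = 100_000
      bytes_per_sample = 2
      gigabytes = side * side * bytes_per_sample / 1e9
      print(f"~{gigabytes:.0f} GB uncompressed")  # ~20 GB for a single map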

    And if you're raytracing, like in Shrek 2, then you need to have as much of that data in RAM at once as possible, or else render time spirals out of all sensibility, unlike scanline RenderMan, where swapping is easier; the rays bouncing throughout the scene make scene divisions more difficult (though still possible).
    I work with 4 gigs of RAM, and we can just barely render 6 million polygons plus a few 4K displacement maps all raytraced at once (on Windows, unfortunately). And when we render sequences and such, we often almost kill our network, because distributing all this data to just 20-30 render nodes is pretty tough (and how would that scale to a big renderfarm with thousands of render nodes...)

    So yeah, like everyone else is saying, given the bandwidth limitations and the fact that people running the screensaver probably don't have the hardware and OS to really use 4+ gigs of RAM, this Shrek@home idea seems rather unlikely. It would be cool, though, if it worked...

    Hooray for my totally unoriginal post!
  • Too Huge A Job (Score:4, Informative)

    by Caraig ( 186934 ) on Wednesday May 26, 2004 @03:34PM (#9261521)
    Rendering a movie is more than just handing PoVRAY a set of data and telling it to render. Distributed computing will not be able to handle it for a lot of reasons.

    First off, what is rendered by the computer is not what you see on screen. There are perhaps a dozen object layers that are rendered individually and composited in the postproduction phase. So, for example, Shrek might exist on one layer, the donkey on another, the ground on a third, some foreground objects on a fourth, several layers of background objects on the fifth through tenth, et cetera.

    Now, each object layer will also be split into several render layers, for color, shadows, specularity, reflectivity, transparency, and probably several others that I can't think of right now. It is not an exaggeration to say that a single frame of a completely CGI scene can be made up of upwards of fifty individual frames, all composited together in post.
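
    The final assembly is conceptually just the classic "over" operator applied back to front, layer by layer -- something like this toy sketch with premultiplied RGBA values (a single pixel, nothing like a production compositor):

      def over(fg, bg):
          """Porter-Duff 'over' for premultiplied RGBA values in the 0.0-1.0 range."""
          fr, fg_, fb, fa = fg
          br, bg_, bb, ba = bg
          k = 1.0 - fa
          return (fr + br * k, fg_ + bg_ * k, fb + bb * k, fa + ba * k)

      def composite(layers, background=(0.0, 0.0, 0.0, 1.0)):
          """Stack layers back to front; each 'layer' here is one RGBA pixel for brevity."""
          result = background
          for layer in layers:  # e.g. ground, Shrek's color pass, Shrek's specular pass, ...
              result = over(layer, result)
          return result

      # A 50%-opaque specular layer over a solid green color pass.
      print(composite([(0.0, 0.4, 0.0, 1.0), (0.3, 0.3, 0.3, 0.5)]))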

    Why is this done? First off, because it's easier to edit and change one of these layers and re-render it than to change and re-render the entire scene. If Shrek is too gruesomely gleaming but Donkey is just fine, you only have to edit Shrek's specular layer. This is easily done in any professional postproduction software package. Alternatively, if it's completely wrong, you only have to re-render that specific layer -- saves a LOT of time! Some post tools are extremely powerful, which makes rendering to object/render layers very appealing.

    Now, while you could conceivably do Shrek@Home, you would need a fairly large render program -- and you're already distributing a very powerful program, which the people who wrote it would be very uncomfortable doing. Secondly, the processing power in even high-end PCs is going to be jack compared to what they have in render farms, and they have a lot more of those computers besides. Rendering is very processor-intensive, too. It's a complex mathematical process that can take hours. Many computers will chug along at 99% load on the processor because they HAVE to.

    Add to that the stake in the heart of this idea: the producers want reliability first and foremost. An in-house render farm, or even renting time at a farm (an idea I've sometimes played with) that signs, seals, and delivers, is going to be reliable and dependable, or they will know exactly whose head needs to roll. If you start having half the internet rendering the million or so frames of your blockbuster, who do you hold accountable when the deadline comes and you're short 1,000 random frames?
  • Need rapidly fading (Score:4, Informative)

    by Dark Bard ( 627623 ) on Wednesday May 26, 2004 @03:51PM (#9261700)
    I work in the industry, and the need for large render farms is going away soon. Workstation-level video cards are capable of rendering scenes at or near real time. The big problem has been software support. The first commercial product to do this is already on the market: it's called Gelato, it comes from NVIDIA, and it works with most of their workstation-level cards. It'll take a few years for the new technology to settle and support all the animation packages and operating systems, but eventually everyone will have some form of card support for rendering. Each artist will simply render the final scene at their workstation. The two biggest technical problems, rendering and storage, are rapidly becoming non-issues.
