The Internet | Entertainment

Rendering Shrek@Home? (345 comments)

JimCricket writes "There's an interesting piece at Download Aborted about using distributed computing (a la SETI@Home, Grid.org, etc.) in the film industry. With the recent release of Shrek 2, which required a massive amount of CPU time to complete, one must wonder why the film industry doesn't solicit help from their fans. I'd gladly trade some spare CPU time in exchange for the coolness of seeing a few frames of Shrek 3 rendered on my screensaver!"
This discussion has been archived. No new comments can be posted.

  • by Zweistein_42 ( 753978 ) * on Wednesday May 26, 2004 @12:12PM (#9260111) Homepage
    Security issues would be a concern I'm sure. There are plenty of hackers who'd see no harm in, for example, extracting a number of images from around the world and stitching together a trailer, etc. And of course, rendering is a "trial-and-error" process - would they want people to have access to broken scenes? Or deleted scenes? Speculation would seriously dampen their ability to control marketing and release info. On the technical side, farms are reliable and predictable. Who can figure out how many fans will keep their computers up tonight for the critical preview tomorrow? What about the decline of interest after the first little while? Distributed computing of this sort isn't well suited for commercial projects with fixed schedules. Not that I don't think it'd be COOL... I just don't think it'll happen :-/
    • Also, there is bound to be a fan site created that would allow users to upload their rendered images and somebody would manage to piece it together into a halfway coherent movie. Then some nerd would mystery science theatre 3000 it and it would become an internet phenomenon. Hmmm, maybe that's not a bad thing...
    • by frenetic3 ( 166950 ) * <houstonNO@SPAMalum.mit.edu> on Wednesday May 26, 2004 @12:20PM (#9260216) Homepage Journal
      I'm not a film tech -- but besides abuse and security issues, what's proposed here just does not seem possible under low bandwidth conditions. It's not like you can just run off to computer #2,398 and say "go render frame 1,503" -- there are textures and models and state information that probably total somewhere on the order of gigabytes (give or take a factor of ten) in order to render that frame. Joe Dialup isn't going to be able to handle that; the film studios, I'm sure, have crazy fiber/multi-gigabit interconnects within their rendering farms.

      If they could find a way to offload some intermediate calculations (like deformations of hair or fabric or something that can be used as an intermediate result in a scene) then that might be a clever use for a distributed.net [distributed.net] style technique.

      -fren
      • by joib ( 70841 ) on Wednesday May 26, 2004 @12:37PM (#9260409)

        the film studios I'm sure have crazy fiber/multi-gigabit interconnects within their rendering farms.


        While the amount of data to move around probably is too much for dialup, gigabit ethernet is certainly fast enough, and dirt cheap as it's integrated on motherboards. If you look at the Top500 list, you see that Weta Digital (the company which did the CG for The Lord of the Rings, IIRC) has a couple of clusters on the list, and they have gig ethernet.

        Basically, while rendering is cpu-intensive it's not latency sensitive, so there's no point in blowing a huge amount of cash on a high end cluster interconnect.
        • Sony Imageworks definitely didn't think gigabit was fast enough, and that was 6-7 years ago when I talked to them. They were deploying some sort of customized super-HIPPI setup for shipping their digital assets around.
        • by Rothron the Wise ( 171030 ) on Wednesday May 26, 2004 @02:18PM (#9261381)
          I've been unable to dig up the reference, but I read in an article about Pixar's "Monsters, Inc." that for some frames it took longer to load the geometry than to actually render the frame.

          SETI and Folding@Home work because of the massive asymmetry between the amount of data and the CPU power required, and although you _perhaps_ could find subtasks that could easily be "offsourced" so to speak, that made sense performance-wise, I very much doubt that it would interface very nicely with the way the artists work, or make any sort of economic sense.
      • by NanoGator ( 522640 ) on Wednesday May 26, 2004 @12:47PM (#9260491) Homepage Journal
        "what's proposed here is just does not seem possible under low bandwidth conditions. it's not like you can just run off to computer #2,398 and say "go render frame 1,503" -- there are textures and models and state information that probably total somewhere on the order of gigabytes (give or take a factor of ten) in order to render that frame."

        I can give you a little data here. Take a look at this image [reflectionsoldiers.com] I made. The scene is roughly 1.5 million polygons, and virtually everything is textured. The folder containing the bare bones version of this scene is roughly 600 megabytes. I could probably cut that size in half via JPEG etc, but we're still looking at a massive amount of data to send to one person to render a frame. I know this because I seriously discussed sharing the rendering with a friend of mine on the east coast. We both felt it'd take longer to ship than it would to render.

        I doubt this scene is anything close to what they were doing in Shrek 2, let alone whatever will happen with 3.
        • by wurp ( 51446 ) on Wednesday May 26, 2004 @12:57PM (#9260590) Homepage
          I suspect you're off in the details, although perhaps not in the final answer. I would be very surprised if the actual input data is anywhere near that huge - do you think someone (or some group of people) actually did enough work to generate that many bits (and that's not counting the order of magnitude greater work done on things that got thrown away)?

          What is much more likely is that the grass, skin, hair, etc. is described by some relatively simple input parameters from which millions of polygons are generated. The "rendering" process almost certainly includes generating the polygons from raw input data and seeded random number generators: Perlin noise routines, fractal instantiation, and spline generation, down to polys and finally a rendered frame as the final product.

          However, much of that work would only have to be done once, then shots taken from different angles on the resulting textured polygon structure, whereas on a distributed architecture any info that isn't sent to your machine would have to be regenerated for your machine. Not to mention that memory requirements are likely to be pretty darn high.
          • by NanoGator ( 522640 ) on Wednesday May 26, 2004 @01:04PM (#9260649) Homepage Journal
            "What is much more likely is that the grass, skin, hair, etc. is described by some relatively simple input parameters from which millions of polygons are generated."

            That's a very good point. Procedural elements of rendering could be distributed quite efficiently. Shrek 2 had some awesome smoke-looking effects that I bet were very CPU-intensive. That's exactly the type of thing that could be distributed.
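A rough Python sketch of the procedural idea in the two comments above, assuming the bulky geometry really can be regenerated deterministically from a seed plus a handful of parameters; the function and field names are invented for illustration:

```python
import random

def generate_grass(seed, count, field_size):
    """Expand a tiny work unit (seed + parameters) into bulky geometry.

    Any node given the same seed regenerates exactly the same primitives,
    so only a few dozen bytes ever need to cross the wire.
    """
    rng = random.Random(seed)                # deterministic per-work-unit stream
    blades = []
    for _ in range(count):
        x = rng.uniform(0.0, field_size)
        y = rng.uniform(0.0, field_size)
        height = rng.gauss(10.0, 2.0)
        bend = rng.uniform(-0.3, 0.3)
        blades.append((x, y, height, bend))  # a real system would emit splines/polys
    return blades

# The "scene description" shipped to a volunteer: a handful of numbers...
work_unit = {"seed": 1503, "count": 100_000, "field_size": 500.0}

# ...which expands locally into a pile of geometry to render.
geometry = generate_grass(**work_unit)
print(len(geometry), "blades regenerated from", work_unit)
```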
    • by mirio ( 225059 )
      You make valid points although most (maybe all) of your points could be eliminated by having multiple hosts render the same frame (a la SETI's response to people uploading false data).
    • by swordboy ( 472941 ) on Wednesday May 26, 2004 @12:30PM (#9260341) Journal
      But this opens up a whole new world to the independents. Shrek2 just shattered all kinds of records [dailynews.com] in terms of cash. And there are no real actors.

      So what happens when a few talented indies get their paws on the processing power required to blow the doors off of conventional actors? It won't be goodbye to Hollywood just yet but I can't wait for the first CG/Anime crossover. I can't imagine how Cowboy Bebop would fare if it didn't have the cartoon stigma.
      • ...a whole new world (Score:5, Interesting)

        by steveha ( 103154 ) on Wednesday May 26, 2004 @12:54PM (#9260562) Homepage
        Shrek2 just shattered all kinds of records [...] And there are no real actors.

        You do still need voice actors. With an animated feature, a really good voice actor can really add to the experience.

        And you still need to make the character models move in realistic ways. So you need motion capture actors, or else truly skilled "puppeteers" to animate the models.

        All that said, I actually agree with you. Take a look at Killer Bean 2: The Party [jefflew.com] by Jeff Lew. One guy made this, using his computer at his home. I think it's really cool that people can just make movies now with only a tiny budget.

        steveha
        • by Syberghost ( 10557 ) <syberghost@@@syberghost...com> on Wednesday May 26, 2004 @02:05PM (#9261272)
          You do still need voice actors. With an animated feature, a really good voice actor can really add to the experience.

          Yes, but your pool is WAY more open.

          In the days before TV, ugly people with great voices were stars. Today, it's a lot harder for that to happen. (it does happen, but they aren't playing romantic leads.)

          An independent filmmaker can find an actor with a great voice, and it doesn't matter what he looks like, what his physical capabilities are, etc.

          A quadriplegic could play James Bond.
      • by xswl0931 ( 562013 ) on Wednesday May 26, 2004 @12:57PM (#9260584)
        Depends on how you define "no real actors". "Real actors" were used for the voice work. If you ever played a video game where the voice acting was horrendous (about 80% of the time), then you know that good voice acting isn't that easy to come by. You also need talented animators to turn the 0's and 1's into emotion. In any case, Hollywood has always been more than just the actors, there's a whole production crew behind the picture.
      • I can't wait for the first CG/Anime crossover

        Uhh, there have already been some. The one I've been looking forward to (Appleseed) is coming out in Japan soon, and it looks [apple.co.jp] quite badass. I've heard of a few crossovers before now, but can't think of any off the top of my head.
        • I'm pretty sure Appleseed is already out in theatres in Japan. We're just waiting for it to get released on DVD so it can be fansubbed. Of course, it would be really nice of 'em to release the JP DVD with an English subtitle track so people could buy the original.
          By the way, here's the trailer for Appleseed. Quite a beautiful CG animation to behold. apple.co.jp trailer [apple.co.jp]
  • This is a great idea... I wonder if you could get a section of the frame(s) you (helped) to render...
    • by DetrimentalFiend ( 233753 ) * on Wednesday May 26, 2004 @12:15PM (#9260144)
      I beg to differ. I suspect that the reason why no one's ever bothered suggesting this is that the amount of bandwidth required to download the frame data and upload the rendered frame is prohibitively large. Besides that, the licensing costs for the rendering technology would be enormous, and what film company would want to freely distribute all of the models, textures, and animation that they spent dozens of man-years working on?
      • by YoJ ( 20860 ) on Wednesday May 26, 2004 @12:44PM (#9260464) Journal
        I agree with most of the comments so far about why the idea wouldn't work directly, but I'm more optimistic about the general idea. For example, there is a technique called "partial abstract interpretation". The idea is that given code and the input data, one can see what the code would do on the input data and then change the code to not accept any input and simply do the correct thing for that particular input. If the company distributed code in this way, it would just be code and no data (so their artwork doesn't leak out), and the code would only work to generate one scene; it would be hard or impossible to uninterpret the code (so they don't leak their proprietary rendering technology).
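What the comment calls "partial abstract interpretation" is close to what is usually called partial evaluation: specialize the code on the fixed input so that only code, not data, leaves the building. A toy Python sketch of the shape of the idea (a closure stands in for a real specializer, so it illustrates the interface, not the secrecy; all names are hypothetical):

```python
def specialize(render_fn, scene_data):
    """Toy stand-in for partial evaluation: bake the scene into a function
    that takes no input and renders exactly one frame.

    A real specializer would emit new code with the data folded in; a
    closure only demonstrates the interface, not the don't-leak-the-art part.
    """
    def render_this_frame():
        return render_fn(scene_data)
    return render_this_frame

def toy_render(scene):
    # Pretend "rendering": just tally up per-object work.
    return sum(obj["polys"] for obj in scene["objects"])

scene = {"objects": [{"polys": 120_000}, {"polys": 80_000}]}

# Studio side: specialize once, in-house...
work_unit = specialize(toy_render, scene)

# Volunteer side: can only run the sealed function for this one frame.
print("rendered", work_unit(), "polygons' worth of work")
```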
      • Rendering time (Score:5, Informative)

        by Gitcho ( 761501 ) on Wednesday May 26, 2004 @12:56PM (#9260579)
        How would you ever reproduce this on a distributed network of limited-bandwidth home PCs? Here are some LOTR rendering stats from Wired.com - [http://www.wired.com/wired/archive/11.12/play.html?pg=2]

        ... The Return of the King, which opens in theaters December 17, will feature almost 50 percent more f/x shots than The Two Towers and will be composed of more data than the first two movies combined. Churning out scenes like the destruction of Barad-dûr and the Battle of Pelennor Fields (with thousands of bloodthirsty CG Orcs) took 3,200 processors running at teraflop speeds through 10-gig pipes - that's one epic renderwall. What else went into making Frodo's quest look so good? By Weta's account, more than you might think.

        WETA BY THE NUMBERS

        HUMANPOWER
        IT staff: 35
        Visual f/x staff: 420

        HARDWARE
        Equipment rooms: 5
        Desktop computers: 600
        Servers in renderwall: 1,600
        Processors (total): 3,200
        Processors added 10 weeks before movie wrapped: 1,000
        Time it took to get additional processors up and running: 2 weeks
        Network switches: 10
        Speed of network: 10 gigabits (100 times faster than most)
        Temperature of equipment rooms: 76 degrees Fahrenheit
        Weight of air conditioners needed to maintain that temperature: 1/2 ton

        STORAGE
        Disk: 60 terabytes
        Near online: 72 terabytes
        Digital backup tape: 0.5 petabyte (equal to 50,000 DVDs)

        OUTPUT
        Number of f/x shots: 1,400
        Minimum number of frames per shot: 240
        Average time to render one frame: 2 hours
        Longest time: 2 days
        Total screen time of f/x shots: 2 hours
        Total length of film: Rumored to be 3.5 hours
        Production time: 9 months

    • by millahtime ( 710421 ) on Wednesday May 26, 2004 @12:17PM (#9260179) Homepage Journal
      I wonder if you could get a section of the frame(s) you (helped) to render...

      /.ers would combine their powers and probably have a lot of the movie weeks before it was released.
  • by Baron_Yam ( 643147 ) on Wednesday May 26, 2004 @12:13PM (#9260117)
    Don't animators already insert single-frame porn, etc into these things?

    Can you imagine how quickly the client software would get hacked, and how crappy the movie resulting from nothing but single-frame porn shots would be, especially to photosensitive epileptics?
    • by Profane MuthaFucka ( 574406 ) <busheatskok@gmail.com> on Wednesday May 26, 2004 @12:20PM (#9260211) Homepage Journal
      Right. The gang of protagonists walk into a cave. But, the cave looks familiar somehow. It's the fingers holding the entrance widely open that tips us off. They don't belong there. We look closer at the cave and fear the worst for our band of animated heroes.

      That's not a cave, it's a space station.
    • by American AC in Paris ( 230456 ) * on Wednesday May 26, 2004 @12:26PM (#9260298) Homepage
      Amusing, but easily dealt with: triple the amount of work done.

      If you send the same input to three different IP addresses (extra-paranoid: use three different top-level IP blocks) and get the same result back, you can be reasonably certain that the result is valid. If there are -any- discrepancies in the images, assume that one (or more) was improperly rendered, discard all three, and try again with three new addresses.

      Even should you manage to hit three different IP addresses that return the exact same 'hacked' image, it's not exactly hard for an editor to step through the movie frame-by-frame, looking for discrepancies...
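A minimal sketch of that acceptance check, assuming the server simply hashes whatever bytes come back and accepts a frame only on unanimous agreement; the addresses and helper names are made up:

```python
import hashlib
from collections import Counter

def digest(frame_bytes):
    """Content hash used to compare results from independent volunteers."""
    return hashlib.sha256(frame_bytes).hexdigest()

def accept_frame(results):
    """Return the frame bytes only if every copy agrees; None means re-issue.

    `results` maps a volunteer's address to the raw bytes it sent back.
    """
    tally = Counter(digest(data) for data in results.values())
    if len(tally) == 1:                       # unanimous agreement
        return next(iter(results.values()))
    return None                               # any discrepancy: discard all three

# Three volunteers were handed the same work unit.
returned = {
    "198.51.100.7": b"frame-1503-pixels",
    "203.0.113.42": b"frame-1503-pixels",
    "192.0.2.99":   b"frame-1503-pixels-with-a-surprise",
}
if accept_frame(returned) is None:
    print("mismatch detected; re-issuing frame 1503 to three new addresses")
```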

      • If you send the same input to three different IP addresses (extra-paranoid: use three different top-level IP blocks) and get the same result back, you can be reasonably certain that the result is valid.

        Actually, just double. First use "Comparison Mode". If the two come back different, resolve it by switching to "voting mode", doing a third frame at a third site and seeing which it agrees with. (If all three disagree you've got a systematic problem and you need to debug the whole project.)

        If there are
  • MPAA (Score:3, Interesting)

    by MandoSKippy ( 708601 ) on Wednesday May 26, 2004 @12:13PM (#9260121)
    I wonder if that would be considered pirating by the MPAA. Smart people out there would figure out a way to "download" the movie from the frames generated. Then there would be no reason to see it in the theater. Just playing the devil's advocate. Personally I think it would be REALLY cool :)
  • by jbellis ( 142590 ) * <(jonathan) (at) (carnageblender.com)> on Wednesday May 26, 2004 @12:13PM (#9260124) Homepage
    Easy: Pixar and Dreamworks have both developed highly proprietary rendering technology. They're not about to just give copies to everyone who wants one. Even if the renderer itself weren't reverse-engineered, which isn't beyond the realm of possibility, it would likely be far easier to decipher the protocol used and voila, a functioning copy of [Pixar|Dreamworks]'s renderer.

    Lobotomizing it to the point where this wouldn't be useful would probably make it useless for distributing the workload as well.
    • by Picass0 ( 147474 ) on Wednesday May 26, 2004 @12:23PM (#9260253) Homepage Journal

      Both studios are using Renderman compliant renderers, so that's not the issue.

      And there's no reason that any one machine has to render an entire image file. You could have any node build N number of scanlines and send the packet back home.

      The risk would be someone running a port monitor on the return address, and re-assembling digital image files.
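A small sketch of that scanline-splitting idea, assuming the coordinator tracks which rows were handed to which node and stitches the strips back together; the helper names are illustrative only:

```python
def scanline_jobs(frame_height, rows_per_job):
    """Cut one frame into jobs of N consecutive scanlines, as (start, end) rows."""
    return [(start, min(start + rows_per_job, frame_height))
            for start in range(0, frame_height, rows_per_job)]

def reassemble(frame_height, strips):
    """Stitch returned strips (start_row -> list of rows) back into a frame."""
    frame = [None] * frame_height
    for start_row, rows in strips.items():
        for offset, row in enumerate(rows):
            frame[start_row + offset] = row
    missing = [i for i, row in enumerate(frame) if row is None]
    if missing:
        raise ValueError(f"rows never came back: {missing[:5]}...")
    return frame

jobs = scanline_jobs(frame_height=1080, rows_per_job=64)
print(len(jobs), "scanline jobs for a single 1080-row frame")
```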
      • by Whalou ( 721698 ) on Wednesday May 26, 2004 @01:43PM (#9261068)
        Renderman is not a renderer; it is a specification for interoperability between modeling tools and renderers (like XMI for software engineering tools, except that it works).

        Pixar's renderer is actually PRMan.

        From Renderman.org [renderman.org]:
        There are a lot of people when you hear them talking about RenderMan and how great the images are from it, etc. They are most likely really talking about Pixar's PhotoRealistic RenderMan® (PRMan).
        RenderMan is actually a technical specification for interfacing between modeling and rendering programs. From 1998 until 2000 the published RenderMan Interface Specification was known as Version 3.1. In 2000 Pixar published a new specification, Version 3.2. Coming soon Version 3.3
  • copyright (Score:3, Insightful)

    by ciscoeng ( 411359 ) on Wednesday May 26, 2004 @12:13PM (#9260127)
    How would the legal aspect work out? Seems like you'd have to sign a fairly strict license saying the movie studio still owns what your computer rendered, copyright, etc.

    Very cool idea nonetheless.
  • This is a great idea! You could even have a part of the credits that points to a website where you could see who helped with the rendering, or even put a special thanks section on the DVD that says who rendered what.
  • Oh yeah (Score:5, Funny)

    by toygeek ( 473120 ) on Wednesday May 26, 2004 @12:14PM (#9260142) Journal
    I'll set up a cluster of old Pentium 200MMX's and put 128MB of ram on them... they'll be rockin! When people see my garage full of cables and ancient hardware and ask "WTF are you doing with all this crap?" I'll be able to say "rendering Shrek 3".

    Distributed computing for rendering a movie? I think they have enough hardware problems without getting the worm infected masses into the mix.
  • by millahtime ( 710421 ) on Wednesday May 26, 2004 @12:14PM (#9260143) Homepage Journal
    The film industry can afford it so...

    Why would they want to do distributed rendering??? They are using 10Gb/s Ethernet and blow-your-mind-away servers to render at amazingly high rates. Probably several times faster than something like the SETI network could imagine.

    And hell, those sysadmins have the most powerful systems in the world. Who would give that up? They even get whole new systems every couple years.
    • by 1984 ( 56406 ) on Wednesday May 26, 2004 @12:44PM (#9260467)
      They aren't "blow your mind" servers. Think PC-based hardware. A lot of servers, yes, but no special rocket science. The only high-end (ish) thing about the render clients is that they usually have plenty of RAM, from 1.5GB to 4GB each.

      The network, too, isn't going to be anything as exotic as 10Gb/s. In fact the only single component that's really high-end is the storage -- a lot of data, and hundreds of clients accessing it simultaneously.

      I work at an effects shop not a million miles from Pixar, and our rendering is done on a few hundred Athlons, some dedicated and some user workstations. Pixar is much bigger, and they have much more horsepower, but it's not orders of magnitude stuff.

      I think SETI@Home is probably a long way ahead in raw aggregate CPU performance. Probably less far ahead in memory surface (but still ahead). But you couldn't use SETI@Home for a reason mentioned by another poster in this thread: bandwidth to storage. The render pipeline has a lot of I/O in it, and your distributed clients would be forever waiting to read or write from network-distant storage. Efficiency would suck, and reliability, too.

      Even if you could do it, you wouldn't for issues of information security (which someone else mentioned here, too.)
    • Since hardware is becoming a commodity, the budgets for these types of movies are huge (Finding Nemo's production budget was ~$94M and Shrek 2's ~$70M, according to www.the-numbers.com), and at the price of a simple blade server (figure $2000-3000 for a 2x Xeon 2 GHz / 1 GB RAM), you can buy a substantial render farm if you are the contract render house for a film like this.

      The security and copyright issues are too big, compared to the low cost (for them) of a render farm. The other costs of a movie outweigh the h

  • by Anonymous Coward on Wednesday May 26, 2004 @12:15PM (#9260149)
    Nice to see you can advertise your NEW BLOG on slashdot...

    how much did it cost?
  • by marcsiry ( 38594 ) on Wednesday May 26, 2004 @12:15PM (#9260150) Homepage
    Films and other large productions are tightly scheduled, with costs against these schedules mapped out months in advance. I can't think of a producer who would count on an essentially unschedulable resource as a vital part of their production pipeline, regardless of its economy.

    That said, I could totally see a use for a 'render pool' catering to independent filmmakers, students, and nonprofits for whom cheap is more important than timely.
  • Data (Score:5, Interesting)

    by Skarz ( 743005 ) on Wednesday May 26, 2004 @12:16PM (#9260169)
    The problem with trying to help render frames is that your system needs to have the data to do it (3D objects, textures, etc.)- not to mention the renderer. Companies wouldn't take kindly to sending off their IP data (esp. custom 3D models/textures/shaders) to the masses to be hacked. Having people get a hold of the "official" Shrek models and textures for example would be a bad thing.
  • by jfroebe ( 10351 ) on Wednesday May 26, 2004 @12:16PM (#9260170) Homepage
    Do you really want the MPAA to run programs on your computer?
  • by jarich ( 733129 ) on Wednesday May 26, 2004 @12:16PM (#9260174) Homepage Journal
    Previous posters are right... no one like Pixar would ever give out that kind of technology...

    But they could tell everyone they were, just have a screen saver that pegs the CPU, tells you that you've rendered X frames, and displays a cool screensaver from the movie! :)

    Great PR, no loss of technology, lots of pissed off fans, once they realize the truth!

  • There's a difference between this and other distributed computing projects. Other DC projects have a "good of mankind" kind of goal to them, and are unlikely to be targeted maliciously. A commercial project like rendering a cartoon would have to be extra careful in regards to security. While crackers may feel it is wrong to disrupt or otherwise harm people trying to find a cure for cancer, they may find it funny to distort a rendered picture in a cartoon.

    I'm thinking something along the lines of Tyler Dur
  • by reality-bytes ( 119275 ) on Wednesday May 26, 2004 @12:18PM (#9260182) Homepage
    The main reason they don't employ this technique is that their own 'render-farms' are a known quantity; they can, with reasonable accuracy, calculate how long a given scene will take to render, whereas with public distributed computing this calculation is not possible.

    There are many variables in distributed public computing such as:

    *Different CPU capabilities.
    *Different OS capabilities
    *High/Low use Systems
    *People's 'uptime'
    *Users leaving the project before its completion etc.

    Another risk is that another movie-house could start a production which everyone sees as 'cooler' and your entire userbase decides to up-sticks and render for them instead.
  • A scene can be a pretty large and complicated piece of data, although I suppose it might be comparable to SETI@Home data. And once you ship the whole scene out, there's the risk that someone could capture it and start rendering it from every angle to get their own private sneak preview. And then the return image is also a pretty large bit of data. So, while there is no inter-node communication which makes this a good distributed problem, the node-server communication is still pretty intense. GigE in your re
  • I wonder how that would work out with plot spoilers and the like. Presumably, people who lend their CPU power for this would go to online forums where they would discuss their experiences, and at some point someone might have the idea of trying to piece bits of the film together independently of the movie studio.
    Or maybe my computer just happens to render the climactic scene in the movie, and I tell my buddies in Slashdot or wherever.
  • I can see this working out if they apply some serious security mechanisms, to prevent people from posting all the results on one single site to get sneak previews, and to make sure that malicious people aren't sending data back with some hidden 'messages' embedded in the background (but I guess they could have more than one machine render the same scene, and compare e.g. the MD5 hashes) and such.

    On the other hand, I can also see why this won't work, as this would be a huge technical support nightmare, the potenti
  • Just activate that webcam on top of the monitor, pointed at the user. You're going to see plenty of Shreks at home.
  • by Kaa ( 21510 ) on Wednesday May 26, 2004 @12:20PM (#9260217) Homepage
    Mine! Mine! You filthy thieves!! All you want is to get your hands on frames from MY movie and then you'll mix it with porn, put it on P2P networks and use the proceeds to fund terrorism!

    It's my movie! MINE! You want a screensaver -- well, pay in DOLLARS for it, you dirty pirate (* by clicking here you agree that your credit card will be automatically charged $0.99 each time your screensaver kicks in)! And note that you are licensed to use MINE screensaver on just one machine by just one user and that our DRM system will make sure of that (* fingerprint reader, purchased separately, required for system activation and use)!

    Thieves, all of you are thieves! Hah, give them movie frames to render... What, you think me stupid?
  • This is not such a good idea; it's an open invitation to mischievous hackers. Speak up now unless you are open to the idea of seeing Goatse get a generous amount of screen time in "Shrek 3".
  • I heard that they were already doing it... the rendering software gets back-door installed alongside Gator or Kazaa. It's mentioned in the part of the EULA that's written backwards in Pig-Esperanto.
  • by jmpresto_78 ( 238308 ) on Wednesday May 26, 2004 @12:21PM (#9260225)
    How cool would it be to see them allocate THEIR distributed system to projects like SETI, etc. Even though I'm sure there are other projects being worked on, one would imagine the system is pretty dormant after a release.
  • There are a lot of responses about the anonymous public stealing images, movies, code, etc... What about using the distribution technology only inside the company? How many computers does Pixar have - including every single PC on the business side? Would there be a benefit in distributing calculations over all the PCs in the company in the manner that other distribution algorithms use (like the SETI@Home example)? In this scenario, they may get some extra number crunching for little cost.
  • Each client must have:

    Some shit-hot rendering software that probably won't be worth running on Joe Average's computer.

    Enough [read: shitloads of] information about the scene to render a frame.

    Yeah, great idea, just give me a copy of Maya and a few complete models and textures from Shrek 3 and I'll buy a nice fat PC to render it all on.
  • What they would need is a way to encrypt the images that you are rendering to protect them from being seen. I'm sure that they would not want people to see the frames before they are done and that's a major reason for not doing such a thing.
  • Security issues will prevent this from happening. The studios don't like early leaks about upcoming films and this would certainly open the floodgate for early leaks.

    You can see how upset the studios have gotten over preview versions of films that get leaked by reviewers or others.
  • by exp(pi*sqrt(163)) ( 613870 ) on Wednesday May 26, 2004 @12:25PM (#9260280) Journal
    ...data, nay terabytes of data, can go into a single frame in a movie? You might be able to farm out stuff like some fragments of procedurally rendered smoke that rely on computing noise functions repeatedly, rather than accessing a scene database, but in general this is completely impractical. If visual effects houses wish to share data the easiest thing to do is FedEx a bunch of hard drives. So unless Shrek@Home includes some kind of hard drive exchange program it ain't gonna work!
  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Wednesday May 26, 2004 @12:26PM (#9260302) Homepage Journal
    At Pixar, distributed rendering, even within the same building, was sometimes I/O bound rather than compute
  • by Obasan ( 28761 ) on Wednesday May 26, 2004 @12:28PM (#9260315)
    Only the most high-end of machines could even consider attempting to render even one layer of a frame for this kind of animation. We're talking systems with 2-4GB of RAM as a minimum (preferably 4+), and the scene files/textures would weigh in at tens to thousands of megabytes that must be downloaded for each scene. Think uncompressed TIFF or TARGA texture files that might be 5000x5000 at 40 bits/pixel.

    Even on high end machines they often do not render a full frame, but a layer of a frame which is then composited with other layers into the full frame. Why? Many reasons but one of them is that even the high end machines don't have enough RAM and the render would take too long (the machine would need to swap).

    So aside from the issues of fans returning bogus data, or extracting highly proprietary information out of the client as other threads have mentioned, this would be a real showstopper. Breaking the problem into small enough pieces to be handled by Joe Blow's computer would be prohibitive and require tons of calculations to figure out which pieces of textures are actually required for a given piece of rendering, etc. It would probably require a compute farm just to manage it!

    Rendering is also a lot more complex than you might think; there are render wranglers who manage the rendering queues and look at the outputs... many renders may require specific versions of the rendering software, so a frame that rendered with 2.7.2.1 won't render anymore without errors with 2.7.2.2... so many copies of the software are managed in parallel with the wranglers helping to clean up the errors. How would you manage this in a distributed client environment?

    Furthermore most of the proprietary rendering apps are certified against VERY specific platforms, eg. one specific kernel version and build level, specific versions of shared libraries etc.

    Long and short is there's a reason why movies cost millions. :)
  • by gspr ( 602968 ) on Wednesday May 26, 2004 @12:28PM (#9260317)
    Users gladly contribute their spare CPU cycles to fold proteins for a non-commercial purpose, or help a non-profit organization seek out alien life. These are tasks affecting all of mankind.
    Giving away CPU cycles so that a multi-million dollar company can improve its product is a wholly different thing.
  • Cost Cutting? (Score:3, Interesting)

    by syntap ( 242090 ) on Wednesday May 26, 2004 @12:28PM (#9260323)
    Great, so maybe to save some effects company from shelling out for a few more $10K graphics servers with which they will make the next $150M movie, I can loan them a few CPU cycles and they'll cut down my movie ticket cost from $10 to $9.75.
  • Why bother? (Score:3, Insightful)

    by hal2814 ( 725639 ) on Wednesday May 26, 2004 @12:29PM (#9260327)
    It's hard enough to solve issues regarding parallel processing of images in a clustered environment they can control. Why put that process in an environment they can't control? It's not like movie studios can't afford a computer cluster. That's a small cost compared to the cost of hiring someone to write the distributed software they use.

    From what I've read, Seti@Home works well because users heavily process a small amount of data and return a small solution. If we were processing frames, it would require the user to take in large amounts of data and return even larger results.
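A back-of-the-envelope comparison along those lines, using the roughly 800 MB of per-frame inputs and two-hour render times quoted elsewhere in this thread, and a classic SETI@home work unit of about 350 KB for many hours of crunching (that last figure is approximate and from memory):

```python
# Rough bytes-shipped-per-CPU-hour comparison; every figure is approximate.
seti_unit_mb, seti_hours = 0.35, 10.0     # ~350 KB crunched for many hours
frame_input_mb, frame_hours = 800.0, 2.0  # low-end per-frame figures from this thread

seti_mb_per_hour = seti_unit_mb / seti_hours
frame_mb_per_hour = frame_input_mb / frame_hours

print(f"SETI@home:  {seti_mb_per_hour:.3f} MB shipped per CPU-hour")
print(f"Film frame: {frame_mb_per_hour:.0f} MB shipped per CPU-hour")
print(f"Roughly {frame_mb_per_hour / seti_mb_per_hour:,.0f}x more data per hour of useful work")
```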
  • Good Luck (Score:4, Interesting)

    by Anonymous Coward on Wednesday May 26, 2004 @12:29PM (#9260328)

    The last film I worked on, we had anywhere from 800MB to 12GB of data per frame that the renderer had to have. I am talking about compressed renderman rib archives, textures, normal maps, displacements, shadow and other maps.

    The data was mostly generated at render time for things like hair and shadow maps, but if it were being distributed there would be no way to do that - it would all have to be transferred beforehand.

    Also, there are always many terabytes of data generated by the renderers for each render layer, for diffuse color, specular color, etc.

    It is just not feasible to transfer all that data around, and it's not like BitTorrent or other p2p systems will help much with that, since each frame would most likely only be rendered by a few people (for verification).

    Also, the model geometry and shaders (and sometimes textures) are closely guarded secrets... In short, if a major film were ever to do something like this, everyone participating would need huge (>100 Mbit) bandwidth and a LOT of disk space, and also be under very tight NDAs.

  • Duh (Score:3, Insightful)

    by gerardrj ( 207690 ) on Wednesday May 26, 2004 @12:31PM (#9260348) Journal
    Because the production of a blockbuster movie tends to be kept a secret up until near the premiere. Distributed computing provides little to no security.

    There's no way a studio could send a scene's model to a compute node encrypted, process it encrypted, store the interim image encrypted, then send the whole mess back encrypted. At some point in processing the information must be in plain computer processable formats.

    What that boils down to is that a competing studio could sign up hundreds of compute nodes and get a preview of the story line and animation. Anyone who could gather enough images could piece together clips from the film and release them in full digital format. Imagine a nefarious group of nodes all collecting the images they generate and later piecing them all together into a perfect digital non-DRMed copy of the movie; before release and before the DVD is available.

    Hollywood can't stand the idea of people copying DVDs to the internet; could you imagine what they'd think of full film-resolution copies of their films floating around? Heads would explode: bits on the walls.

    No... this is just a stupid suggestion from the point of view of the studios. At least until an OS is produced where the user is prohibited from accessing certain portions of RAM and can't intercept the network traffic to/from the box.
  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Wednesday May 26, 2004 @12:32PM (#9260359) Homepage Journal
    At Pixar, distributed rendering, even within the same building, was sometimes I/O bound rather than compute-bound. The problem is that high-quality rendering requires a lot of textures, some of which are algorithmic (a function in shading language) and some photographic (a big image file, probably not compressed because compression would add artifacts). So, you have to ship a lot of files around to render a scene. Photographic textures may be large because of the scale problem. They may have to look right through a 100:1 zoom, as objects move and the camera viewpoint changes.

    This is not just tracing all of the rays in the scene.

    Bruce

  • by bobhagopian ( 681765 ) on Wednesday May 26, 2004 @12:37PM (#9260410)
    Despite the coolness of the SETI project, the major reason I support SETI and other scientific projects (e.g., protein Folding@Home) is that they are notoriously underfunded. SETI and the organization which operates Folding@Home (Stanford?) do not make a profit, and at each step have to literally beg the government (usually the NSF) for more grants. This is especially true of SETI, which has become a pretty out-of-fashion program in funding circles. In short, the whole point of donating CPU cycles is to allow somebody access to computing power that they would not otherwise have. While I enjoy the Shrek movies just as much as the next guy, I'm not so philanthropic when it comes to a company that's capable of making $128 million in one weekend. Here's an analogy: you might donate clothing to the Salvation Army, but would you donate to Saks Fifth Avenue? I think not -- I suspect many of you, like me, would rather support the little guy with no alternative.
  • Stupidity (Score:3, Interesting)

    by WormholeFiend ( 674934 ) on Wednesday May 26, 2004 @12:37PM (#9260416)
    I wouldn't do it for the same reason I refuse to buy clothes with logos on them that advertise the maker of those clothes, or some other form of advertisement.

    If someone wants me to wear such advertisement-enhanced clothes, they should pay me for the privilege.

    Same with computer cycles. I pay the electricity. If they plan on making money from the product of the cycles I give them, they should pay me.

    However, I have no problem giving away free computer cycles to non-profit scientific endeavors.
  • Umm...No (Score:3, Insightful)

    by retro128 ( 318602 ) on Wednesday May 26, 2004 @12:39PM (#9260428)
    I could see this for an indie project, but no way for a feature film. The reason, I think, is that in order for a computer to start rendering a CGI frame, it must have several things: geometry, textures, lighting algorithms, and any procedurals needed to make things like fur, hair, realistic water effects, etc. Now, if I were PDI, or any company that has spent millions in R&D in creating these things, do you think I would want this info on Joe Schmoe's computer just waiting to be opened up and reverse engineered? I don't think so.
  • by steveha ( 103154 ) on Wednesday May 26, 2004 @12:42PM (#9260447) Homepage
    There is a great article about how ILM does their rendering. It was a cover story in Linux Journal magazine.

    http://www.linuxjournal.com/article.php?sid=6783 [linuxjournal.com]

    People have been saying that even if the studio didn't care about the security issues, there are bandwidth issues that would keep this from really working. There are a few quotes in the article that confirm this: all the rendering machines make a sort of denial-of-service attack on their NFS servers, for example. And the article talks about their VPN, which they call the ILM Conduit; it sends everything double-encrypted with Blowfish. They really are worried about security.

    The coolest thing, to me, is that ILM has rolled out Linux all the way across their organization; people run Linux on their desktop computers. When people go home at night, their computers get added to the render farm!

    steveha
  • Render Times (Score:5, Insightful)

    by TexTex ( 323298 ) * on Wednesday May 26, 2004 @12:42PM (#9260450)
    A while ago, John Lasseter of Pixar was in some promotional documentary for one of their films. He claimed that when they originally created their short film with the desk lamp, render times were around 7 hours per frame.

    He said that for Finding Nemo today, render times were about...7 hours per frame.

    More machines and faster processors let you cram much more detail and technology into the same package. I work in commercial advertising; digital editing and graphics workstations are fantastic and powerful... but their advantage isn't speed. We spend the same amount of time making a commercial as 10 years ago... but now we make 7 versions and change it 30-some times along the way. Power gives you the ability to change your mind... and that's a creative force which people gladly pay for.
  • Bandwidth. (Score:3, Insightful)

    by stephenisu ( 580105 ) on Wednesday May 26, 2004 @12:45PM (#9260477)
    The sole reason this will not work using current internet infrastructure is bandwidth.

    In the making of Final Fantasy, it took longer to send the information to the nodes than it took the nodes to process it. That is with dedicated gigabit networking.
  • by Tarrek ( 547315 ) on Wednesday May 26, 2004 @12:46PM (#9260484)
    The only way I see for this to be feasible is if each user were only rendering a tiny segment of each frame. I don't know if this is technically possible, but it would reduce the massive bandwidth needs to a more SETI-like level.

    Secondly: users cannot see what they have rendered. This is a given; as has been pointed out a thousand times already, this is insane from a security and PR standpoint. INSTEAD, simply let users who participate on a regular basis have access to a private forum and developer blogs, and grant them access to the official PR material slightly before it gets published. It's less cool, sure, but it could work.
  • by sunking2 ( 521698 ) on Wednesday May 26, 2004 @12:55PM (#9260571)
    Now you can subsidize the movie industry with your computer and electricity.
  • by Anonymous Coward on Wednesday May 26, 2004 @01:03PM (#9260642)
    From: http://www.aspenleaf.com/distributed/ap-art.html#imp -- The Internet Movie Project renders images for computer-animated movies. The project is an open-source collaboration of volunteers and is just for fun. It is still in the development phase, but you can volunteer to be a "render-farmer," to render images for test animation sequences. Anyone who can run the free POV-Ray ray-tracing program can join this project, although the supporting scripts and software needed for the project only work on the Windows and Linux platforms for now.
  • by tinrobot ( 314936 ) on Wednesday May 26, 2004 @01:03PM (#9260644)
    As someone who works in a digital studio, it's painful enough getting things rendered with every computer in the same room. Frames get dropped, mangled, lost. In addition, every machine needs to be at the same software revision, and you can't have conflicting apps running. Scattering the render boxes across the planet and having boxes that contain unknown software will only amplify the pain to the Nth degree.

    Added to that are huge bandwidth problems. In order to render a 2K image, you may need dozens of texture maps, some of which may be even larger than 2K because you zoom in or something -- meaning that to get a 2K frame back, you're sending the render box probably 10-20 times that amount of data. With a nice gigabit internal network, that's not a huge problem, but shipping them down a DSL line is just not gonna happen.
  • Why not? (Score:3, Interesting)

    by GreyyGuy ( 91753 ) on Wednesday May 26, 2004 @01:28PM (#9260935)
    Let's see...
    • Security issues- could anyone running the tools see the results? Or worse- change them? It would be a huge pain in the ass to review every image to make sure someone didn't throw in a one-frame wardrobe malfunction or the like
    • Copyright issues- if I own the machine that produces the frame do I have any copyright ownership of it? Even if I don't, how many lawsuits of people that want money would it take to eat up the savings from not getting the machines and doing it yourself?
    • Competitors- How easy would it be for a competitor to screw it up? Either running clients but not letting them send data or send bad, gibberish data instead? How much time and money would have to be spent to check for that?
    • Why would you do it? Donate your time, processing cycles, and bandwidth to a company that is going to make money off it and not even give you free tickets for your effort? Not to mention that legally, as I understand it, a for-profit company is not allowed to have unpaid volunteers; it has a legal obligation to treat people working for it as employees. That bit AOL and Everquest a few years ago when they had unpaid community volunteers in charge of stuff, but they don't anymore because of that.
  • by tolldog ( 1571 ) on Wednesday May 26, 2004 @01:53PM (#9261163) Homepage Journal
    From first hand experience... this won't happen, not for a long long time, if at all.

    We used thousands of processors to render. We had terabytes of storage. It is a large undertaking. Every single frame and element of the frame had to be tracked. It had to be qualified. If something didn't work, we had to diagnose the system and get it back up and running. The budget for this is too large for a home-brew system to work.

    With other distributed systems, there are some checks and balances on the data being run, a way to know if you are sending back somewhat good data. The only way you can tell with this is to visually inspect the end result. If a person has a system that returns a bad slice of a frame, you now have to recreate that slice and track it, because it's possible the problem is in the code, in the data files, or it was a one-time glitch with the system. Not a fun thing to do for hundreds of remote systems that aren't similar.

    Render time also varies. It can be 5 minutes to 12+ hours. If a job gets halted, you lose that data and have to recreate it. This isn't like generating millions of keys, where there's only a second of init time before you start turning out data. At a previous studio, we had scene load times of over 30 minutes before rendering even started. That needs to be accounted for in how you split up frames. If you have 30 minutes to load (after 45 minutes to download the data) and only an hour's worth of rendering, you are taking a heavy hit on overhead.

    There are just too many issues with this working in a current setup. Stick to crunching numbers.

    -Tim
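Taking the figures in the comment above at face value (45 minutes to download, 30 minutes to load, an hour of actual rendering), a quick check of how much wall-clock time ends up as useful work:

```python
download_min, load_min, render_min = 45, 30, 60   # figures quoted above

total = download_min + load_min + render_min
efficiency = render_min / total
print(f"{render_min} useful minutes out of {total}: {efficiency:.0%} efficiency")
# -> 60 useful minutes out of 135: 44% efficiency
```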
  • I'd rather... (Score:3, Insightful)

    by sharkdba ( 625280 ) on Wednesday May 26, 2004 @01:54PM (#9261181) Journal
    help find a smallpox [ud.com] vaccine than help the already way-too-rich entertainment industry.
  • by anti_analog ( 305106 ) on Wednesday May 26, 2004 @02:11PM (#9261321)
    I believe I saw someone earlier mention that terabytes of data can go into a single frame of a CGI film, and these days that can be pretty accurate.

    A .rib file or similar type of file for PDI's renderer will probably contain a few million polygons and/or a few hundred thousand control vertices for implicit surfaces such as NURBS and sub-Ds, which can be a lot of data (my scene files at work average 4-5 million polygons and are about 150 megs on average, saved in a binary file format). And that doesn't include particles, procedurals, all the motion data so that proper motion blur can be calculated...

    And then the textures... They do use lots of procedurals, but they also use lots of 16-bit-per-channel textures of 4000x4000 for face textures, or even higher. Some people are using tiles of 16-bit TIFFs for displacement maps now that equate to something like a 100,000x100,000 image, because the accuracy requirements for close-up renders are so bloody high. That can be many, many gigs of data there.

    And, if you're raytracing like in Shrek 2, then you need to have as much of that data in RAM at once, or else render time spirals out of sensibility, unlike scanline renderman where swapping is easier, because the rays bouncing throughout the scene make scene divisions more difficult (but still possible).
    I work with 4 gigs of RAM and we can just barely render 6 million polygons + a few 4K displacement maps all raytraced at once (in Windows, unfortunately). And when we render sequences and stuff, we often almost kill our network, because distributing all this data to just 20-30 rendernodes is pretty tough (and how would that scale to a big renderfarm with thousands of rendernodes...)

    So, yeah, like everyone else is saying: between the bandwidth limitations and the fact that people running the screensaver probably don't have the hardware and OS to really use 4+ gigs of RAM, this Shrek@home idea seems rather unlikely. It would be cool though, if it worked...

    Hooray for my totally unoriginal post!
  • by Telcontar ( 819 ) on Wednesday May 26, 2004 @02:16PM (#9261371) Homepage
    Encryption will deal with that. You may see a generic screensaver, but I doubt that a company would risk leaking their movie. However, it is possible to perform certain calculations (such as addition and multiplication) in a way such that the clients in this scenario would work on encrypted values.

    The calculations are done in encrypted values and returned as such. The host can then decrypt the result.

    This sounds pretty amazing, but consider addition as a starter: the host masks each number with a one-time pad by adding a random value to it (modulo the word size). The client adds the masked numbers together. The host then only needs to subtract the sum of the pads from the result and gets the true total! The client, though, knows NOTHING about the true values (the protocol is information-theoretically secure, like a one-time pad), as the masking turns them into what looks like random noise.

    I imagine, though, that the effort of implementing this probably outweighs the benefits for a project like rendering a movie. But for truly mission-critical data, it may be worth it...
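A minimal sketch of that blinding idea, using addition modulo a fixed word size so ordinary integer sums carry through; this is one-time-pad-style masking for summation only, not a general homomorphic scheme, and the helper names are invented:

```python
import secrets

MOD = 2 ** 64  # fixed word size; all arithmetic is modulo this

def blind(values):
    """Host side: mask each value with a fresh random pad."""
    pads = [secrets.randbelow(MOD) for _ in values]
    masked = [(v + p) % MOD for v, p in zip(values, pads)]
    return masked, pads

def untrusted_sum(masked):
    """Client side: adds numbers that look like uniform random noise."""
    total = 0
    for m in masked:
        total = (total + m) % MOD
    return total

values = [17, 4, 1503]
masked, pads = blind(values)

blinded_total = untrusted_sum(masked)            # runs on the volunteer's box
true_total = (blinded_total - sum(pads)) % MOD   # host strips the pads back off
assert true_total == sum(values) % MOD
print("recovered sum:", true_total)
```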
  • by bossesjoe ( 675859 ) on Wednesday May 26, 2004 @02:31PM (#9261498)
    The movie industry, give you something like that for free? I doubt it, maybe if you paid them so they could render on your computer....
  • Too Huge A Job (Score:4, Informative)

    by Caraig ( 186934 ) on Wednesday May 26, 2004 @02:34PM (#9261521)
    Rendering a movie is more than just handing POV-Ray a set of data and telling it to render. Distributed computing will not be able to handle it, for a lot of reasons.

    First off, what is rendered by the computer is not what you see on screen. There are perhaps a dozen object layers that are rendered individually and composited in the postproduction phase. So, for example, Shrek might exist on one layer, the donkey on another, the ground on a third, some foreground objects on a fourth, several layers of background objects on the fifth through tenth, et cetera.

    Now, each object layer will also be split into several render layers, for color, shadows, specularity, reflectivity, transparency, and probably several others that I can't think of right now. It is not an exaggeration to say that a single frame of a completely CGI scene can be made up of upwards of fifty individual frames, all composited together in post.

    Why is this done? First off because it's easier to edit and change one of these layers and re-render it than to change and re-render the entire scene. If Shrek is too gruesomely gleaming, but Donkey is just fine, you just have to edit Shrek's specular layer. This is easily done in any professional postproduction software package. Alternatively, if it's completely wrong, you just have to re-render that specific layer -- saves a LOT of time! Some post tools are extremely powerful, which makes rendering to object/render layers very appealing.

    Now, while you could conceivably do Shrek@Home, you would need a fairly large render program -- and you're already distributing a very powerful program, which the people who wrote it would be very uncomfortable doing. Secondly, the processing power in even high-end PCs is going to be jack compared to what they have in render farms, and they have a lot more of those computers besides. Rendering is very processor-intensive, too. It's a complex mathematical process that can take hours. Many computers will chug along at 99% load on the processor because they HAVE to.

    Add to that the stake in the heart of this idea: the producers want reliability first and foremost. An in-house render farm, or even renting time at a farm (an idea I've sometimes played with), that signs and seals and delivers is going to be reliable and dependable, or they will know exactly whose head needs to roll. If you start having half the internet rendering the million or so frames of your blockbuster, who do you hold accountable when the deadline comes and you're short 1000 various random frames?
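A sketch of the kind of layer compositing described above, assuming a plain alpha-over blend of a character pass onto a background pass (NumPy arrays used for brevity; shapes and names are illustrative):

```python
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    """Composite a foreground layer over a background (the 'over' operator)."""
    a = fg_alpha[..., np.newaxis]       # broadcast alpha across the RGB channels
    return fg_rgb * a + bg_rgb * (1.0 - a)

h, w = 4, 4                             # tiny stand-in for a full-resolution plate
background = np.full((h, w, 3), 0.2)    # e.g. the rendered environment layer
character  = np.full((h, w, 3), 0.5)    # the character's colour pass
alpha = np.zeros((h, w))
alpha[1:3, 1:3] = 1.0                   # character only covers the middle pixels

final = over(character, alpha, background)
print(final[2, 2], final[0, 0])         # composited pixel vs untouched background
```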
  • right... (Score:3, Insightful)

    by sad_ ( 7868 ) on Wednesday May 26, 2004 @02:38PM (#9261559) Homepage
    First we would render parts of the movie on our own PCs, and then if we want to go see the movie in the theatre we'd have to pay 6.5 euros for something I helped create.
    Next, I won't be able to legally play the DVD (which I had to pay for, again) on my Linux box.
    Can't wait to start...
  • Why? (Score:3, Funny)

    by Mustang Matt ( 133426 ) on Wednesday May 26, 2004 @02:48PM (#9261674)
    "I'd gladly trade some spare CPU time in exchange for the coolness of seeing a few frames of Shrek 3 rendered on my screensaver!"

    I wouldn't. What a waste.
  • Need rapidly fading (Score:4, Informative)

    by Dark Bard ( 627623 ) on Wednesday May 26, 2004 @02:51PM (#9261700)
    I work in the industry, and the need for large render farms is going away soon. Workstation-level video cards are capable of rendering scenes at or near real time. The big problem has been software support. The first commercial product to do this is already on the market. It's called Gelato. It comes from NVIDIA and works with most of their workstation-level cards. It'll take a few years for the new technology to settle and support all the animation packages and operating systems, but eventually everyone will have some form of card support for rendering. Each artist will simply render the final scene at their workstation. The two biggest technical problems, rendering and storage, are rapidly becoming nonissues.
  • by Stonent1 ( 594886 ) <stonent AT stone ... intclark DOT net> on Wednesday May 26, 2004 @03:27PM (#9262007) Journal
    Allow movie companies access to the virtual farm at a rate of 1 cent per frame or something like that. Then the person who renders the frame or who has contributed X number of render units has the option to have the micropayments sent to a charity, open source project or site of their choosing.
