Rendering Shrek@Home?
JimCricket writes "There's an interesting piece at Download Aborted about using distributed computing (a la SETI@Home, Grid.org, etc.) in the film industry. With the recent release of Shrek 2, which required a massive amount of CPU time to complete, one must wonder why the film industry doesn't solicit help from their fans. I'd gladly trade some spare CPU time in exchange for the coolness of seeing a few frames of Shrek 3 rendered on my screensaver!"
Doubt it'll happen... (Score:5, Insightful)
Re:Doubt it'll happen... (Score:3, Interesting)
Re:Doubt it'll happen... (Score:5, Informative)
There'd be no sound.
I'm sure people would sit through it anyway, though.
Sounds like imp.org (Score:5, Informative)
My big question is why you would rather donate to a large commercial organization, well funded from its previous Shrek flick, than donate the cycles to a community project like the IMP itself?
Re:Sounds like imp.org (Score:4, Insightful)
Re:Sounds like imp.org (Score:3, Interesting)
Re:Doubt it'll happen... (Score:5, Interesting)
If they could find a way to offload some intermediate calculations (like deformations of hair or fabric or something that can be used as an intermediate result in a scene) then that might be a clever use for a distributed.net [distributed.net] style technique.
-fren
Re:Doubt it'll happen... (Score:5, Insightful)
the film studios I'm sure have crazy fiber/multi-gigabit interconnects within their rendering farms.
While the amount of data to move around probably is too much for dialup, gigabit Ethernet is certainly fast enough, and dirt cheap since it's integrated on motherboards. If you look at the Top500 list, you see that Weta Digital (the company which did the CG for Lord of the Rings, IIRC) has a couple of clusters on the list, and they have gig Ethernet.
Basically, while rendering is CPU-intensive it's not latency-sensitive, so there's no point in blowing a huge amount of cash on a high-end cluster interconnect.
Re:Doubt it'll happen... (Score:3, Interesting)
Re:Doubt it'll happen... (Score:5, Insightful)
SETI and Folding@Home work because of the massive asymmetry between the amount of data and the CPU power required, and although you _perhaps_ could find subtasks that could easily be "offsourced", so to speak, in a way that made sense performance-wise, I very much doubt that it would interface very nicely with the way the artists work, or make any sort of economic sense.
Re:Doubt it'll happen... (Score:5, Informative)
I can give you a little data here. Take a look at this image [reflectionsoldiers.com] I made. The scene is roughly 1.5 million polygons, and virtually everything is textured. The folder containing the bare bones version of this scene is roughly 600 megabytes. I could probably cut that size in half via JPEG etc, but we're still looking at a massive amount of data to send to one person to render a frame. I know this because I seriously discussed sharing the rendering with a friend of mine on the east coast. We both felt it'd take longer to ship than it would to render.
I doubt this scene is anything close to what they were doing in Shrek 2, let alone whatever will happen with 3.
I suspect you're wrong... (Score:5, Informative)
What is much more likely is that the grass, skin, hair, etc. are described by some relatively simple input parameters from which millions of polygons are generated. The "rendering" process almost certainly includes generating the polygons from raw input data and seeded random number generators, through Perlin noise distribution routines, fractal instantiation, and spline generation, to polygons and finally a rendered frame as the final product.
However, much of that work would only have to be done once, then shots taken from different angles on the resulting textured polygon structure, whereas on a distributed architecture any info that isn't sent to your machine would have to be regenerated for your machine. Not to mention that memory requirements are likely to be pretty darn high.
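As an illustration of that idea (not DreamWorks' actual pipeline -- the function name, parameters, and geometry here are all made up), here's a toy Python sketch in which a seed and a couple of parameters deterministically expand into a large pile of geometry, so only the seed would need to cross the wire:

```python
import random

def generate_grass_patch(seed, blade_count, patch_size=10.0):
    """Hypothetical sketch: expand a few input parameters into geometry.

    Only (seed, blade_count, patch_size) would need to travel over the
    wire; every node regenerates identical blades from the same seed.
    """
    rng = random.Random(seed)            # seeded -> deterministic everywhere
    blades = []
    for _ in range(blade_count):
        x = rng.uniform(0.0, patch_size)
        z = rng.uniform(0.0, patch_size)
        height = rng.gauss(0.5, 0.1)     # per-blade variation
        bend = rng.uniform(-0.2, 0.2)
        # A blade as a tiny polyline: root, mid, tip (stand-in for real splines)
        blades.append([(x, 0.0, z),
                       (x + bend * 0.5, height * 0.6, z),
                       (x + bend, height, z)])
    return blades

# A few bytes of parameters expand into hundreds of thousands of vertices:
patch = generate_grass_patch(seed=42, blade_count=100_000)
print(len(patch), "blades,", sum(len(b) for b in patch), "vertices")
```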
Re:I suspect you're wrong... (Score:5, Insightful)
That's a very good point. Procedural elements of rendering could be distributed quite efficiently. Shrek 2 had some awesome smoke-looking effects that I bet were very CPU-intensive. That's exactly the type of thing that could be distributed.
Re:I suspect you're wrong... (Score:4, Insightful)
Re:Doubt it'll happen... (Score:3, Interesting)
Re:Doubt it'll happen... (Score:5, Interesting)
So what happens when a few talented indies get their paws on the processing power required to blow the doors off of conventional actors? It won't be goodbye to Hollywood just yet, but I can't wait for the first CG/Anime crossover. I can't imagine how Cowboy Bebop would fare if it didn't have the cartoon stigma.
...a whole new world (Score:5, Interesting)
You do still need voice actors. With an animated feature, a good voice actor can really add to the experience.
And you still need to make the character models move in realistic ways. So you need motion capture actors, or else truly skilled "puppeteers" to animate the models.
All that said, I actually agree with you. Take a look at Killer Bean 2: The Party [jefflew.com] by Jeff Lew. One guy made this, using his computer at his home. I think it's really cool that people can just make movies now with only a tiny budget.
steveha
Re:...a whole new world (Score:5, Insightful)
Yes, but your pool is WAY more open.
In the days before TV, ugly people with great voices were stars. Today, it's a lot harder for that to happen. (it does happen, but they aren't playing romantic leads.)
An independent filmmaker can find an actor with a great voice, and it doesn't matter what he looks like, what his physical capabilities are, etc.
A quadriplegic could play James Bond.
Not exactly "no real actors" (Score:4, Insightful)
Re:Doubt it'll happen... (Score:3, Interesting)
Uhh, there have already been some. The one I've been looking forward to (Appleseed) is coming out in Japan soon, and it looks [apple.co.jp] quite badass. I've heard of a few other crossovers before now, but can't think of any off the top of my head.
Re:Doubt it'll happen... (Score:3, Informative)
By the way, here's the trailer for Appleseed. Quite a beautiful piece of CG animation to behold. apple.co.jp trailer [apple.co.jp]
Re:Doubt it'll happen... (Score:5, Interesting)
The heck with that, why would they want the 3D wireframe models to get out on the net? What do people think the frames are rendered from, anyhow? I predict it would be less than one week between someone figuring out how to extract the models, and someone else making a low-res animation of those models doing the nasty with each other.
I had this idea a long time ago :) (Score:2)
Re:I had this idea a long time ago :) (Score:5, Insightful)
Re:I had this idea a long time ago :) (Score:4, Insightful)
Rendering time (Score:5, Informative)
WETA BY THE NUMBERS
HUMANPOWER
IT staff: 35
Visual f/x staff: 420
HARDWARE
Equipment rooms: 5
Desktop computers: 600
Servers in renderwall: 1,600
Processors (total): 3,200
Processors added 10 weeks before movie wrapped: 1,000
Time it took to get additional processors up and running: 2 weeks
Network switches: 10
Speed of network: 10 gigabits (100 times faster than most)
Temperature of equipment rooms: 76 degrees Fahrenheit
Weight of air conditioners needed to maintain that temperature: 1/2 ton
STORAGE
Disk: 60 terabytes
Near-line: 72 terabytes
Digital backup tape: 0.5 petabyte (equal to 50,000 DVDs)
OUTPUT
Number of f/x shots: 1,400
Minimum number of frames per shot: 240
Average time to render one frame: 2 hours
Longest time: 2 days
Total screen time of f/x shots: 2 hours
Total length of film: Rumored to be 3.5 hours
Production time: 9 months
I doubt it... (Score:5, Funny)
/.ers would combine their powers and probably have a lot of the movie weeks before it was released.
Re:I doubt it... (Score:4, Insightful)
The simple answer is to see if you can. It's
Making things worse (Score:5, Funny)
Can you imagine how quickly the client software would get hacked, and how crappy the movie resulting from nothing but single-frame porn shots would be, especially to photosensitive epileptics?
Re:Making things worse (Score:5, Funny)
That's not a cave, it's a space station.
Re:Making things worse (Score:5, Interesting)
If you send the same input to three different IP addresses (extra-paranoid: use three different top-level IP blocks) and get the same result back, you can be reasonably certain that the result is valid. If there are -any- discrepancies in the images, assume that one (or more) was improperly rendered, discard all three, and try again with three new addresses.
Even should you manage to hit three different IP addresses that return the exact same 'hacked' image, it's not exactly hard for an editor to step through the movie frame-by-frame, looking for discrepancies...
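A minimal sketch of that redundancy check (Python; the addresses and frame bytes here are placeholders, not anything a real render farm would use):

```python
import hashlib

def verify_frame(results):
    """Toy sketch of the redundancy check described above.

    `results` maps a client address to the raw frame bytes it returned.
    Accept the frame only if every copy is bit-identical; otherwise
    discard the batch and reschedule with fresh addresses.
    """
    digests = {addr: hashlib.sha256(frame).hexdigest()
               for addr, frame in results.items()}
    if len(set(digests.values())) == 1:
        return next(iter(results.values()))   # all agree -> accept
    return None                                # any discrepancy -> re-render

# Example: three honest nodes, then one tampered node
honest = b"\x00" * 1024
assert verify_frame({"a": honest, "b": honest, "c": honest}) is not None
assert verify_frame({"a": honest, "b": honest, "c": b"pwned"}) is None
```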
Actually, just double. (Score:3, Interesting)
Actually, just double. First use "Comparison Mode". If the two come back different, resolve it by switching to "voting mode", doing a third frame at a third site and seeing which it agrees with. (If all three disagree you've got a systematic problem and you need to debug the whole project.)
Re:Making things worse (Score:5, Insightful)
It's not that hard--especially when you consider that old-school cartoons had people drawing every freakin' frame of a feature-length movie by hand...
MPAA (Score:3, Interesting)
This made the front page? (Score:5, Informative)
Lobotomizing it to the point where this wouldn't be useful would probably make it useless for distributing the workload as well.
NOT proprietary rendering technology (Score:5, Informative)
Both studios are using Renderman compliant renderers, so that's not the issue.
And there's no reason that any one machine has to render an entire image file. You could have any node render N scanlines and send the packet back home.
The risk would be someone running a port monitor on the return address, and re-assembling digital image files.
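A toy sketch of that scanline-based work split (Python; the chunk size and frame dimensions are arbitrary, and the "rendering" itself is left out):

```python
def split_into_chunks(height, chunk_size):
    """Hypothetical work-unit split: each unit is a band of scanlines."""
    return [(y, min(y + chunk_size, height)) for y in range(0, height, chunk_size)]

def assemble(height, rendered_chunks):
    """Stitch returned bands back into a full frame (list of pixel rows)."""
    frame = [None] * height
    for (y0, y1), rows in rendered_chunks.items():
        frame[y0:y1] = rows
    assert all(row is not None for row in frame), "missing scanlines"
    return frame

# Example: a 1080-line frame handed out 64 scanlines at a time
chunks = split_into_chunks(1080, 64)
print(len(chunks), "work units, e.g.", chunks[0], "...", chunks[-1])
```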
Re:NOT proprietary rendering technology (Score:5, Informative)
Pixar's renderer is actually PRMan.
From Renderman.org [renderman.org]:
A lot of people, when you hear them talking about RenderMan and how great the images from it are, are most likely really talking about Pixar's PhotoRealistic RenderMan® (PRMan).
RenderMan is actually a technical specification for interfacing between modeling and rendering programs. From 1998 until 2000 the published RenderMan Interface Specification was known as Version 3.1. In 2000 Pixar published a new specification, Version 3.2, and Version 3.3 is coming soon.
copyright (Score:3, Insightful)
Very cool idea nonetheless.
Great Idea! (Score:2)
Oh yeah (Score:5, Funny)
Distributed computing for rendering a movie? I think they have enough hardware problems without getting the worm infected masses into the mix.
Re:Oh yeah (Score:4, Funny)
Would it be worth it???? (Score:5, Insightful)
Why would they want to do it distributed??? They are using 10Gb/s Ethernet and blow-your-mind-away servers to render at amazingly high rates. Probably several times faster than anything a SETI-style network could imagine.
And hell, those sysadmins have the most powerful systems in the world. Who would give that up? They even get whole new systems every couple of years.
Re:Would it be worth it???? (Score:5, Informative)
The network, too, isn't going to be anything as exotic as 10Gb/s. In fact the only single component that's really high-end is the storage -- a lot of data, and hundreds of clients accessing it simultaneously.
I work at an effects shop not a million miles from Pixar, and our rendering is done on a few hundred Athlons, some dedicated and some user workstations. Pixar is much bigger, and they have much more horsepower, but it's not orders of magnitude stuff.
I think SETI@Home is probably a long way ahead in raw aggregate CPU performance. Probably less far ahead in memory surface (but still ahead). But you couldn't use SETI@Home for a reason mentioned by another poster in this thread: bandwidth to storage. The render pipeline has a lot of I/O in it, and your distributed clients would be forever waiting to read or write from network-distant storage. Efficiency would suck, and reliability, too.
Even if you could do it, you wouldn't for issues of information security (which someone else mentioned here, too.)
Re:Would it be worth it???? (Score:3, Insightful)
The security and copyright issues are too big, compared to the low cost (for them) of a render farm. The other costs of a movie outweigh the h
Re:Would it be worth it???? (Score:3, Funny)
Slashdot BLOG advertising... (Score:5, Insightful)
how much did it cost?
Re:Slashdot BLOG advertising... (Score:3, Insightful)
if you want to spend your time rending frames of animations, check out the Internet Movie Project [imp.org]
Re:Slashdot BLOG advertising... (Score:3, Funny)
unschedulable resource (Score:5, Insightful)
That said, I could totally see a use for a 'render pool' catering to independent filmmakers, students, and nonprofits for whom cheap is more important than timely.
Data (Score:5, Interesting)
do you really want this? (Score:5, Insightful)
Never happen, but... (Score:5, Funny)
But they could tell everyone they were, just have a screen saver that pegs the CPU, tells you that you've rendered X frames, and displays a cool screensaver from the movie! :)
Great PR, no loss of technology, lots of pissed off fans, once they realize the truth!
Crackers (Score:2)
I'm thinking something along the lines of Tyler Dur
The reason why..... (Score:5, Insightful)
There are many variables in distributed public computing such as:
*Different CPU capabilities.
*Different OS capabilities
*High/Low use Systems
*People's 'uptime'
*Users leaving the project before its completion etc.
Another risk is that another movie-house could start a production which everyone sees as 'cooler' and your entire userbase decides to up-sticks and render for them instead.
Too much Data (Score:2)
It might reveal too much of the movie.. (Score:2)
Or maybe my computer just happens to render the climactic scene in the movie, and I tell my buddies in Slashdot or wherever.
maybe ... (Score:2)
On the other hand, I can also see why this won't work, as this would be a huge technical support nightmare, the potenti
Webcam (Score:2)
That's MPAA you are talking about... (Score:4, Funny)
It's my movie! MINE! You want a screensaver -- well, pay in DOLLARS for it, you dirty pirate (* by clicking here you agree that your credit card will be automatically charged $0.99 each time your screensaver kicks in)! And note that you are licensed to use MINE screensaver on just one machine by just one user, and that our DRM system will make sure of that (* fingerprint reader, purchased separately, required for system activation and use)!
Thieves, all of you are thieves! Hah, give them movie frames to render... What, you think me stupid?
This is not such a good idea (Score:2)
They're already doing it, aren't they? (Score:2)
How cool would the other way be? (Score:5, Insightful)
Anyone consider local distribution only? (Score:2)
So just how do you plan to do it. (Score:2)
Some shit hot rendering software that probably won't be worth running on your average Joe's computer.
Enough[shit loads of] information about the scene to render a frame.
Yeah, great idea, just give me a copy of Maya and a few complete models and textures from Shrek 3 and I'll buy a nice fat PC to render it all on.
Cryptographic raytracing (Score:2)
Security issues (Score:2)
You can see how upset the studios have gotten over preview versions of films that get leaked by reviewers or others.
Do you realize just how many gigabytes of... (Score:5, Insightful)
There's an I/O problem (Score:5, Insightful)
You don't have the machine for it... (Score:5, Informative)
Even on high end machines they often do not render a full frame, but a layer of a frame which is then composited with other layers into the full frame. Why? Many reasons but one of them is that even the high end machines don't have enough RAM and the render would take too long (the machine would need to swap).
So aside from the issues of fans returning bogus data, or extracting highly proprietary information out of the client as other threads have mentioned, this would be a real show stopper. Breaking the problem into small enough pieces to be handled by joe-blow's computer would be prohibitive and require tons of calculations to figure out which pieces of textures are actually required for a given piece of rendering etc. It would probably require a compute farm just to manage it!
Rendering is also a lot more complex than you might think, there are render wranglers who manage the rendering queues and look at the outputs... many renders may require specific versions of the rendering software, so a frame that rendered with 2.7.2.1 won't render anymore without errors with 2.7.2.2... so many copies of the software are managed in parallel with the wranglers helping to clean up the errors. How would you manage this in a distributed client environment?
Furthermore most of the proprietary rendering apps are certified against VERY specific platforms, eg. one specific kernel version and build level, specific versions of shared libraries etc.
Long and short is there's a reason why movies cost millions.
Hold it there for a second (Score:3, Interesting)
Giving away CPU cycles so that a multi-million dollar company can improve its product is a wholly different thing.
Re:Hold it there for a second (Score:5, Insightful)
People pay to wear shirts that advertise multi-million dollar companies. : (
-Colin [colingregorypalmer.net]
Cost Cutting? (Score:3, Interesting)
Why bother? (Score:3, Insightful)
From what I've read, Seti@Home works well because users heavily process a small amount of data and return a small solution. If we were processing frames, it would require the user to take in large amounts of data and return even larger results.
Good Luck (Score:4, Interesting)
The last film I worked on, we had anywhere from 800MB to 12GB of data per frame that the renderer had to have. I am talking about compressed renderman rib archives, textures, normal maps, displacements, shadow and other maps.
The data was mostly generated at render time for things like hair and shadow maps, but if it was being distributed, there is no way to do that - they would be transferred beforehand.
Also, there are always many terabytes of data generated by the renderers for each render layer, for diffuse color, specular color, etc.
It is just not feasible to transfer all that data around, and it's not like BitTorrent or other p2p systems will help much with that, since each frame would most likely only be rendered by a few people (for verification).
Also, the model geometry and shaders (and sometimes textures) are closely guarded secrets... In short, if a major film were ever to do something like this, everyone participating would need huge (> 100Mbit) bandwidth and a LOT of disk space, and also be under very tight NDAs.
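To put those per-frame figures in perspective, a back-of-the-envelope sketch in Python (the 10 Mbit/s home downlink is an assumed, purely illustrative number; the 800 MB and 12 GB sizes come from the comment above):

```python
# Rough transfer times for one frame's input data over a home connection.
def transfer_hours(size_gb, mbit_per_s):
    bits = size_gb * 8 * 1024**3
    return bits / (mbit_per_s * 1_000_000) / 3600

for size in (0.8, 12.0):
    print(f"{size:>5} GB over 10 Mbit/s: ~{transfer_hours(size, 10):.1f} hours")
# ~0.2 h for the small case, ~2.9 h for the large one -- before rendering even
# starts, and before sending the multi-layer output back.
```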
Duh (Score:3, Insightful)
There's no way a studio could send a scene's model to a compute node encrypted, process it encrypted, store the interim image encrypted, then send the whole mess back encrypted. At some point in processing the information must be in plain computer processable formats.
What that boils down to is that a competing studio could sign up hundreds of compute nodes and get a preview of the story line and animation. Anyone who could gather enough images could piece together clips from the film and release them in full digital format. Imagine a nefarious group of nodes all collecting the images they generate and later piecing them all together into a perfect digital non-DRMed copy of the movie; before release and before the DVD is available.
Hollywood can't stand the idea of people copying DVDs to the internet; could you imagine what they'd think of full film resolution copies of their films floating around? Their head bits: on the walls.
No... this is just a stupid suggestion from the point of view of the studios. At least until an OS is produced where a user is prohibited from accessing certain portions of RAM and can't intercept the network traffic to/from the box.
Re:Duh (Score:3)
Distributed rendering can be compute-bound (Score:5, Informative)
This is not just tracing all of the rays in the scene.
Bruce
Why it's appropriate for SETI and not for film (Score:3, Insightful)
Stupidity (Score:3, Interesting)
If someone wants me to wear such advertisement-enhanced clothes, they should pay me for the privilege.
Same with computer cycles. I pay the electricity. If they plan on making money from the product of the cycles I give them, they should pay me.
However, I have no problem giving away free computer cycles to non-profit scientific endeavors.
Umm...No (Score:3, Insightful)
ILM's render farm: The Death Star (Score:3, Insightful)
http://www.linuxjournal.com/article.php?sid=6783 [linuxjournal.com]
People have been saying that even if the studio didn't care about the security issues, there are bandwidth issues that would keep this from really working. There are a few quotes in the article that confirm this: all the rendering machines make a sort of denial-of-service attack on their NFS servers, for example. And the article talks about their VPN, which they call the ILM Conduit; it sends everything double-encrypted with Blowfish. They really are worried about security.
The coolest thing, to me, is that ILM has rolled out Linux all the way across their organization; people run Linux on their desktop computers. When people go home at night, their computers get added to the render farm!
steveha
Render Times (Score:5, Insightful)
He said that for Finding Nemo today, render times were about...7 hours per frame.
More machines and faster processors let you cram much more detail and technology into the same package. Working in commercial advertising, digital editing and graphic workstations are fantastic and powerful...but their advantage isn't speed. We spend the same amount of time making a commercial as 10 years ago...but now we make 7 versions and change it 30-some times along the way. Power gives you the ability to change your mind....and that's a creative force which people gladly pay for.
Bandwidth. (Score:3, Insightful)
In the making of Final Fantasy, it took longer to send the information to the nodes than it took the nodes to process it. That is with dedicated gigabit networking.
The way this *could* work... (Score:3, Interesting)
Secondly: Users cannot see what they have rendered. This is a given, as has been pointed out a thousand times already, this is insane from a security and PR standpoint. INSTEAD, simply let users who participate on a regular basis have access to a private forum, developer blogs, and grant them access to the official PR material slightly before it gets published. It's less cool, sure, but it could work.
As if you don't pay enough for a ticket... (Score:3, Funny)
Internet Movie Project http://www.imp.org/ (Score:4, Interesting)
Bandwidth/Administration Hell (Score:5, Insightful)
Added to that are huge bandwidth problems. In order to render a 2K image, you may need dozens of texture maps, some of which may be even larger than 2K because you zoom in or something -- meaning to get a 2K frame back, you're sending the render box probably 10-20 times that amount of data. With a nice gigabit internal network, that's not a huge problem, but shipping them down a DSL line is just not gonna happen.
Why not? (Score:3, Interesting)
Seeing that its my farm... (Score:5, Informative)
We used thousands of processors to render. We had terabytes of storage. It is a large undertaking. Every single frame and element of the frame had to be tracked. It had to be qualified. If something didn't work, we had to diagnose the system and get it back up and running. This is too large an undertaking, and too large a budget, for a home-brew system to work.
With other distributed systems, there are some checks and balances on the data returned, a way to know if you are sending back somewhat good data. The only way you can tell with this is to visually inspect the end result. If a person has a system that returns a bad slice of a frame, you now have to recreate that slice and track it, because it's possible the problem is in the code, in the data files, or it was a one-time glitch with the system. Not a fun thing to do for hundreds of remote systems that aren't similar.
Render time also varies. It can be 5 minutes to 12+ hours. If a job gets halted, you lose that data and have to recreate it. This isn't like generating millions of keys, where init takes a second or so before turning out data. At a previous studio, we had scene load times of over 30 minutes before it even started rendering. That needs to be accounted for in how you split up frames. If you have 30 minutes to load (after 45 minutes to download the data) and only render for an hour's worth, you are taking a heavy hit on overhead.
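For a rough sense of that overhead, a trivial calculation using the figures in the paragraph above (the exact split is illustrative only):

```python
# 45 min download + 30 min scene load vs. 60 min of actual rendering.
download, load, render = 45, 30, 60          # minutes
overhead = (download + load) / (download + load + render)
print(f"{overhead:.0%} of the wall-clock time is overhead")   # ~56%
```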
There are just too many issues with this working in a current setup. Stick to crunching numbers.
-Tim
I'd rather... (Score:3, Insightful)
Further elaboration on the impossibleness of this (Score:5, Informative)
A
And then the textures... They do use lots of procedurals, but they also use lots of 16-bit-per-channel textures of 4000x4000 for face textures, or even higher. Some people are now using tiles of 16-bit TIFFs for displacement maps that equate to something like a 100,000x100,000 image, because the accuracy requirements for close-up renders are so bloody high. That can be many, many gigs of data right there.
And, if you're raytracing like in Shrek 2, then you need to have as much of that data in RAM at once, or else render time spirals out of sensibility, unlike scanline renderman where swapping is easier, because the rays bouncing throughout the scene make scene divisions more difficult (but still possible).
I work with 4 gigs of RAM and we can just barely render 6 million polygons + a few 4k displacement maps all raytraced at once (in windows unfortunately). And, when we render sequences and stuff, we often almost kill our network because distributing all this data to just 20-30 rendernodes is pretty tough (and how would that scale to a big renderfarm with thousands of rendernodes...)
So, yeah, as everyone else is saying, between the bandwidth limitations and the fact that people running the screensaver probably don't have the hardware and OS to really run 4+ gigs of RAM, this Shrek@home idea seems rather unlikely. It would be cool though, if it worked...
Hooray for my totally unoriginal post!
You won't see anything: cryptographic protocols... (Score:3, Interesting)
The calculations are done in encrypted values and returned as such. The host can then decrypt the result.
This sounds pretty amazing, but consider addition as a starter: the host masks each number with a one-time pad (adding a random pad to it). The client adds the masked numbers together. The host then only needs to subtract all the pads from the result and gets the true sum! The client, though, knows NOTHING about the true values (the protocol is information-theoretically secure), as the pads turn them into signal noise.
I imagine, though, that the effort of implementing this probably outweighs the benefits for a project like rendering a movie. But for truly mission-critical data, it may be worth it...
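For the addition case above, a minimal sketch of that additive masking (Python; the 64-bit modulus and sample values are arbitrary assumptions, and real rendering is of course nothing like summing a few integers):

```python
import secrets

MOD = 2**64  # work in a fixed modulus so addition wraps consistently

def mask(values):
    """Host side: blind each value with a fresh random pad (additive one-time pad)."""
    pads = [secrets.randbelow(MOD) for _ in values]
    masked = [(v + p) % MOD for v, p in zip(values, pads)]
    return masked, pads

def unmask_sum(masked_sum, pads):
    """Host side: strip the pads from the sum the client returned."""
    return (masked_sum - sum(pads)) % MOD

# The untrusted client just adds the numbers it was given -- it learns nothing about them.
values = [12, 7, 30]
masked, pads = mask(values)
client_result = sum(masked) % MOD
assert unmask_sum(client_result, pads) == sum(values) % MOD
```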
Is this really likely? (Score:3, Insightful)
Too Huge A Job (Score:4, Informative)
First off, what is rendered by the computer is not what you see on screen. There are perhaps a dozen object layers that are rendered individually and composited in the postproduction phase. So, for example, Shrek might exist on one layer, the donkey on another, the ground on a third, some foreground objects on a fourth, several layers of background objects on the fifth through tenth, et cetera.
Now, each object layer will also be split into several render layers, for color, shadows, specularity, reflectivity, transparency, and probably several others that I can't think of right now. It is not an exaggeration to say that a single frame of a completely CGI scene can be made up of upwards of fifty individual frames, all composited together in post.
Why is this done? First off because it's easier to edit and change one of these layers and re-render it than to change and re-render the entire scene. If Shrek is too gruesomely gleaming, but Donkey is just fine, you just have to edit Shrek's specular layer. This is easily done in any professional postproduction software package. Alternatively, if it's completely wrong, you just have to re-render that specific layer -- saves a LOT of time! Some post tools are extremely powerful, which makes rendering to object/render layers very appealing.
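As a toy illustration of why per-layer rendering pays off (the pass names and blend formula here are simplified assumptions, not any studio's actual compositing math):

```python
def composite_pixel(diffuse, specular, shadow, reflection, refl_amount=0.2):
    """Toy per-channel combine of a few render passes into a final pixel.

    Real compositing does far more (premultiplied alpha, holdouts, colour
    management); this only shows why a bad specular pass can be re-rendered
    and re-combined without touching the other layers.
    """
    return tuple(
        min(1.0, d * sh + sp + r * refl_amount)
        for d, sp, sh, r in zip(diffuse, specular, shadow, reflection)
    )

# Tone down an overly "gleaming" character by re-rendering only the specular pass:
too_shiny = composite_pixel((0.2, 0.6, 0.2), (0.5, 0.5, 0.5), (0.8, 0.8, 0.8), (0.1, 0.1, 0.1))
fixed     = composite_pixel((0.2, 0.6, 0.2), (0.1, 0.1, 0.1), (0.8, 0.8, 0.8), (0.1, 0.1, 0.1))
print(too_shiny, "->", fixed)
```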
Now, while you could conceivably do Shrek@Home, you would need a fairly large render program -- and you're already distributing a very powerful program, which the people who wrote it would be very uncomfortable doing. Secondly, the processing power in even high-end PCs is going to be jack compared to what they have in render farms, and they have a lot more of those computers besides. Rendering is very processor-intensive, too. It's a complex mathematical process that can take hours. Many computers will chug along at 99% load on the processor because they HAVE to.
Add to that the stake in the heart of this idea: the producers want reliability first and foremost. An in-house render farm, or even renting time at a farm (an idea I've sometimes played with) that signs and seals and delivers, is going to be reliable and dependable, or they will know exactly whose head needs to roll. If you start having half the internet rendering the million or so frames of your blockbuster, who do you hold accountable when the deadline comes and you're short 1000 various random frames?
right... (Score:3, Insightful)
next i won't be able to play the dvd legally (which i had to pay for again) on my linux box.
can't wait to start...
Why? (Score:3, Funny)
I wouldn't. What a waste.
Need rapidly fading (Score:4, Informative)
What about a buy-in project? (Score:3, Interesting)
Re:Distributed hacking? (Score:2, Interesting)
Don't like how Matrix Revolutions ended? Just load up the "Smith kills us all" branch and choose your own adventure!
Re:Get real. (Score:2)