Graphics / Programming / Software / Technology

Building 3D Models On the Fly With a Webcam

blee37 writes "Here is an excellent video demonstration of a new program developed by Qi Pan, a graduate student, and other researchers at the University of Cambridge. The 'ProFORMA' software constructs a 3D model of an object in real time from (commodity) webcam video. The user can watch the program deduce more pieces of the 3D model as the object is moved and rotated. The resulting graphics are of high quality."
  • by headkase ( 533448 ) on Friday November 27, 2009 @03:29PM (#30247998)
    With open-source rendering already well established and continually improving, content creation is the main area left under-developed. This method will allow anyone with an object to digitize it, and people can then mix that content into virtual environments. Throw in some voice-synthesis software, some directing software, and a million monkeys hammering away at plots, and Hollywood as an institution is dead. This is one more piece; the others will fall into line as well. There is some irony in the fact that in one of the Civilization games, discovering the Internet invalidates the Hollywood Wonder.
    • Awesome. All I need to recreate Star Wars now is to visit the nearest starport and record a video of a TIE fighter with my phone and I'll be good... Oh wait!
      • Re: (Score:1, Interesting)

        by Anonymous Coward

        Actually, laugh all you want, but it could allow all sorts of fun, because you could build that TIE fighter or spaceport by hand if necessary, then scan it and convert it into a 3D model. I don't know about you, but there are lots of things I can make more easily with raw materials and my hands that would look good as models, yet would take a lot longer to produce if I had to model them in 3D...

    • There's one more key piece we're missing - the ability to render humans realistically. We can manage just about everything else, but until we can make a virtual John Wayne that looks like John Wayne and not a wax mannequin, we're not going to see Hollywood abandon "talent".

      Of course, once we can do so, the next step will be to "improve" the stars - start with a virtual Natalie Portman (for example), and then "tweak" her for further fanboy appeal.

      • Re: (Score:3, Interesting)

        by McNihil ( 612243 )

        Or why not let the viewer choose who plays that part... Angelina Jolie with those perky ones from the Tomb Raider movies, for instance. How about watching Casablanca with yourself as Bogart? Now how about being Deckard in Blade Runner? The only thing that's needed is motion capture of believable performances, that's all.

        • by Thansal ( 999464 )

          That actually is a rather awesome idea. Reminiscent of the Sprawl trilogy's simstims, but more likely to happen sometime soon (good 3D scanning/modeling of a human vs. wetware).

    • Re: (Score:3, Funny)

      by Chyeld ( 713439 )

      I take it you aren't used to using Poser or Blender, or any other related 3D software, and thus don't know the joy of: "You STUPID PROGRAM! I just want her to walk down the stairs! Why are her arms doing that! NO! NO! NOO!!!! Stop floating down the stairs and walk! Why is your hair clipping through the wall, why is your hair even moving that way! STOP IT!"

      Hollywood's death knell might be sounding. But it's got a few more good decades left in it before we need to mourn for it.

      • Dear god, why did you mention Blender? That is not a 3D modeling program, it's a psychological torture device.
        • To hell with Blender, why did he mention Poser? It is the Photoshop lens-flare filter of the 3D world: something which has its place, but which is used 99% of the time to let amateurs run around shouting "look at me, I'm an artist too!"

      • by mldi ( 1598123 )

        I take it you aren't used to using Poser or Blender, or any other related 3D software, and thus don't know the joy of: "You STUPID PROGRAM! I just want her to walk down the stairs! Why are her arms doing that! NO! NO! NOO!!!! Stop floating down the stairs and walk! Why is your hair clipping through the wall, why is your hair even moving that way! STOP IT!"

        Hollywood's death knell might be sounding. But it's got a few more good decades left in it before we need to mourn for it.

        Hilarious that you bring that up. Did you ever look at the model of Big Buck Bunny [bigbuckbunny.org]? It's like his face is sucked in down his throat, just so that when it renders, it looks the way they want it to. It's horribly fucked up for anything more complex than snowmen or giant walking stick men.

    • Ah, but in the near future you will need to get a copyright license to make a picture with a model taken from a real object. Soon you won't be able to make a movie without getting a "RAND" license for every object that appears in it.

      • Ah, but in the near future you will need to get a copyright license to make a picture with a model taken from a real object.

        Try using a glass Coke bottle, a Rubik's Cube or an Igloo cooler in your flick and see what happens.

        • Wow, time flies like an arrow. So yeah, just as you can't play ancient music for free because the recording was made in the last century, or post an ancient text because the scanning/transcription was done within the last century, we live in a cultural deadlock where creating is a risky business and lawyers are advisable.

    • by RAMMS+EIN ( 578166 ) on Friday November 27, 2009 @04:45PM (#30248674) Homepage Journal

      In theory, we can make good computer games, too.

      But how many open-source games can you name that have great graphics? And how many closed-source games with great graphics are there?

      I don't think Hollywood is dying just yet.

      • Re: (Score:3, Interesting)

        I think the OP was implying that this new technique might be useful for making the graphics for these things. Since it has only just been created, it would indeed be very surprising if open-source games had used it to make great graphics already.

          • Really? Where do you think people are magically going to get real-life versions of the 3D characters they want? Besides, anyone good enough to clean up these kinds of models and make them usable is good enough to just model them in the first place. Faster, too.

          • Yes, you're right because the only 3D models used in games are fantastic monstrous characters. You never see things like chairs, tables, lamps, cars, and refrigerators. There's simply no need for them! And being able to easily create models of such things would be equally useless, especially when anyone in the world would suddenly have the ability to contribute large numbers of everyday objects to some sort of global repository.

            Sarcasm aside, at the very least maybe this will spell the end of the infamous C

            • Yeah, but those objects are easy. Like, dumb easy. It takes far, far less time to make a fridge the traditional modelling way than it takes to clean up data like this. And you can make any fridge you want instead of having to find a real fridge and haul it out, light it and scan it.

              Object scanning systems have been around for over twenty years in various forms and at various costs. The fact that even the people who have been able to afford them don't use them should tell you a lot about how applicable this tech actually

              • Okay, how about instead of using a live camera view, you record footage of a fly-by over a city?

                And what do you want to bet Google would pee themselves if they were able to use this to, say, replace streetview with 3D city models generated through such technology? Instead of paying someone to drive a car around which takes pictures every so many feet, they'll pay someone to drive a car around which generates a fully-textured 3D model of any city around. Suddenly, streetview graduates from Myst-style clickth

    • With open-source rendering already well established...content creation is the main area left under-developed.
      Throw in some voice-synthesis software, some directing software, and a million monkeys hammering away at plots, and Hollywood as an institution is dead.

      The geek needs a million monkeys.

      Hollywood gets by with a handful of men like John Lasseter, Andrew Stanton and Brad Bird. In sound design, a Ben Burtt.

      Digitizing the prop is trivial.

      Knowing which prop to use - and how to use it is not.

      There

    • by vikstar ( 615372 )

      That's because Hollywood is just crap rendered in good graphics. There are international studios that produce high-quality cinema which will live on because of its content, not a glossy wrapper.

  • Is there any open-source software that can generate a 3D model from photos? As far as I can see, the source code of ProFORMA is closed. http://mi.eng.cam.ac.uk/~qp202/my_papers/BMVC09/ [cam.ac.uk]
    • Well, if the paper is well written, implementing it as a GPL project won't be too difficult.
  • by Anonymous Coward

    Did the Slashdot contributor discover TFA with a spider?

  • by kbob88 ( 951258 ) on Friday November 27, 2009 @04:09PM (#30248320)

    I can just see it now -- anyone who can get a bit of video of you can create a 3D model of your face and body, and then do anything with the likeness. When rendering gets really good, this could be a bit embarrassing. Instead of 2D retouched photos of celebrities and politicians, we'll be seeing hacked-up 'animated' (but realistic) video of them doing all sorts of wild stuff. Well, it might be a boon to the porn industry, at least in the short term before the rendering software becomes available to consumers.

    • It is still very difficult to animate the model in a realistic way, so you would need a bit more than this program. I can see it happening in combination with other footage: capture the movement from it, then swap in the models of the celebrities.

      • by hitmark ( 640295 )

        Have a chat with Hollywood; they have worked on this for a while.

        Like, say, 1,000,000 Agent Smiths ;)

        • Yeah, that only took a team of dozens of people plus a crapload of body double actors and months of work. And, despite the fact that it was almost 10 years ago now, it wouldn't be that much easier to do today. Well, a scene of a bunch of duplicates standing around would be fairly simple but they have been doing those for decades. The only significant difference between making the opening fight scene from Matrix2 then and now is that advances in hardware would cut down the final render time.

      • Yes, the export format is important to this...but I only use gmax.

    • If you're old enough, you may remember "The Running Man", starring the Governator, which did exactly that to show his character as having lost the battle.

  • It may help the modeling process if chroma keying could be used to make the camera ignore supporting objects like a turntable or a hand holding the object (see the chroma-key sketch below). Another improvement could be to automatically cut out excess vertices and triangles. It wouldn't be too difficult (for someone who can write this type of software) to determine that the plane making up the side of the demo building is virtually flat and reduce it to two triangles.

    One of the key limiting factors to amateurs making
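
    A minimal sketch of the chroma-key idea above, assuming OpenCV, a commodity webcam, and a green support; the HSV threshold values are illustrative only and have nothing to do with ProFORMA itself.

```python
# Rough chroma-key sketch: mask out a green turntable/backdrop so a later
# reconstruction step could ignore those pixels. Thresholds are illustrative.
import cv2
import numpy as np

def foreground_mask(frame_bgr):
    """Return a mask that is 255 on the object and 0 on the green support."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Hue roughly 35-85 covers typical green-screen material (OpenCV hue range is 0-179).
    green = cv2.inRange(hsv, np.array([35, 60, 60]), np.array([85, 255, 255]))
    return cv2.bitwise_not(green)

cap = cv2.VideoCapture(0)              # commodity webcam, as in the article
ok, frame = cap.read()
if ok:
    cv2.imwrite("mask.png", foreground_mask(frame))
cap.release()
```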

  • 3D vision for robots (Score:4, Interesting)

    by cptnapalm ( 120276 ) on Friday November 27, 2009 @05:31PM (#30249188)

    I was thinking about robots one day and wondering why those who work on computer vision didn't do something like this. Instead of trying to get the machine to understand the analog world, wouldn't it be better for the machine to have an internal representation of the world in the form of a 3D map? Quake 3 CoffeeShop, if you will.

    The idea I had was that the vision system creates a 3d map with entities, mapped from the vision system as well, inside. The AI works within the 3d representation of the world. If the AI wants to move from A to B, it signals the body controlling subsystem to start walking. When the 3d representation, being informed by the vision system, tells the AI that it is at point B, then the AI signals to stop walking.

    Hardware constraints notwithstanding, is this model any good? (See the sketch of the loop below.)

    I'm just a lowly, early middle aged novice C programmer who has never actually done anything with robotics, so if what I said made no sense or is obviously idiotic, I do understand that my ideas are comin' outta my ass.
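
    A rough sketch of the navigation loop described above. The VisionSystem-style `vision` object and the `body` object are hypothetical interfaces invented for illustration, not part of ProFORMA or any real robotics library.

```python
# Sketch of "the AI works inside the internal 3D map": keep walking until the
# map says we have reached point B. `vision` and `body` are hypothetical objects.
import math
import time

def drive_to(vision, body, goal_xyz, tolerance_m=0.25):
    """Walk until the internally mapped position is within tolerance of the goal."""
    body.start_walking(toward=goal_xyz)
    while True:
        world = vision.update_map()            # refresh the internal 3D representation
        position = world.robot_position()      # where the map says the robot is
        if math.dist(position, goal_xyz) <= tolerance_m:
            body.stop_walking()                # the map says we are at point B
            return
        time.sleep(0.05)                       # re-check about 20 times per second
```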

    • by KalvinB ( 205500 )

      What was created here could probably be extended to look at two webcams simultaneously to calculate the 3D space, rather than relying on two separate images from one camera.

      What this software is doing is looking at the rotation of an object rather than displacement. So it may need some other adjustments besides just using two cameras to get two source images at once (see the stereo sketch below).
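
      A back-of-the-envelope sketch of the two-camera idea: with a calibrated, rectified stereo pair, depth follows directly from disparity. The numbers below are made up for illustration.

```python
# For a rectified stereo pair: depth = focal_length * baseline / disparity.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("feature not matched, or effectively at infinity")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: a 700 px focal length, cameras 6 cm apart, and a
# feature shifted 35 px between the two images put that point ~1.2 m away.
print(depth_from_disparity(700.0, 0.06, 35.0))   # -> 1.2
```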

    • That's actually how my idea for a WoW bot worked...

      I see a use for this technology.

      • The idea that it could be implemented entirely in software also occurred to me. The vision system would be different, of course, taking the video from a game (WoW or Q3A) and translating it into a generic 3D engine which the AI can use. Instead of sending it to an AI-to-Robot system, it sends it to an AI-to-HowThisGameWorks layer which sends the info to the game.

        I hope that made sense.

    • There are a few kinks to work out with this model, but essentially it could work. Specifically, the example they showed would model an object, not the world around it. So the algorithm would have to be reworked to map the world and, as someone else mentioned, would probably use two cameras (which means it's processing twice the data, which means "real time" might not be so fast).

      There is also the major issue of dynamic lighting effects. Since lighting is the primary cue for how to model something in 3D, y

      • Light is also not something I thought too much about; it seems to be above my intellectual pay grade.

        Could a basic implementation be done purely in software, like this: take a 3D game engine with a map and a model. Attach two "player" eyes to the face of the model so there are two POVs instead of the usual FPS's single one. The output goes to the video-processing system (perhaps on a different machine), which creates a new 3D map with entities based on what the video system came up with.

        Idea of it being that more

    • Re: (Score:3, Informative)

      The idea is not stupid, but it also isn't new [wikipedia.org]. It just turns out to be a little harder to get working in practice than you might think.

      • Thank you! When I did my searching for something like my idea, I couldn't find anything. Obviously my Google-fu needs some work.

        With regard to how easy it would be, my estimate was that it was beyond my abilities. It still is, just way, way beyond my abilities.

    • by hitmark ( 640295 )

      Thanks for reminding me; I think I read that research has shown this is how we humans operate.

      As in, we build up an internal model of ourselves and what's around us, and keep refining that model constantly.

      That's why we develop habits and preferences: it means the models do not have to change.

      This includes our own body, btw, and is the probable cause of phantom limb experiences.

    • I actually did this for my thesis. I built a Quake-like 3D model of the environment using an expensive laser range scanner. The robot then has all the geometric and photometric information it needs to perform localisation and path planning. This type of problem, where you have the map beforehand, is generally called "global localisation" and is easier than the Simultaneous Localisation and Mapping (SLAM) problem, where you don't have a prior map (see the sketch below).
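
      A minimal sketch of one common approach to global localisation with a known map: a particle filter that weights pose hypotheses by how well a range measurement agrees with the map. The `expected_range` callable stands in for a ray-cast into the prior map and is hypothetical; none of this is taken from the poster's thesis.

```python
# Weight pose hypotheses against a range measurement, then resample.
# `expected_range(world_map, pose)` is a hypothetical ray-casting helper.
import math
import random

def localisation_step(particles, measured_range, world_map, expected_range, sigma=0.2):
    weights = []
    for pose in particles:
        error = measured_range - expected_range(world_map, pose)
        weights.append(math.exp(-(error * error) / (2.0 * sigma * sigma)))
    total = sum(weights) or 1.0
    # Poses that explain the measurement survive resampling; the rest die off.
    return random.choices(particles, weights=[w / total for w in weights], k=len(particles))
```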
    • by Animats ( 122034 ) on Saturday November 28, 2009 @03:08AM (#30252992) Homepage

      That's called "simultaneous localization and mapping", and in the last five years good algorithms have been developed and quite a few systems are more or less working. Search for "Visual SLAM".

      The Samsung Hauzen vacuum cleaner uses Visual SLAM. There's a video. [youtube.com] This is way ahead of the blundering Roomba.

  • by Frans Faase ( 648933 ) on Friday November 27, 2009 @06:35PM (#30249976) Homepage
    There seems to be a huge gap between these kinds of academic projects and the commercially available programs. I have come across several commercial applications that can do these kinds of things, but they cost at least a thousand dollars, or even more. And then there are all these academic projects (going on for at least two decades) which present nice videos and papers, and sometimes release some software. But when you look at the software, you discover that you first have to download nine other packages and compile the whole thing, and what you get is some kind of script you have to run with all sorts of command-line options. So far I have never found an application with a solid interface on the level of the GIMP or Blender, for that matter. I find this rather strange. I am almost getting the impression that some of the results are sold to the developers of the commercial packages.
    • by Telvin_3d ( 855514 ) on Friday November 27, 2009 @08:24PM (#30251118)

      The major reason these types of programs don't get expanded into commercial products, or bought and integrated into existing products, is that they are cute tech demos but not particularly interesting for real-world use.

      Almost without exception, anything simple enough for these types of reconstruction programs to handle is too simple to bother with. The paper church in the demo video, for instance. The final wireframe is, sadly, crap. Neat and interesting crap, but still crap. There are at least three times the polys the form needs, and almost all of the significant edges are in the wrong place. In the time it would take to clean up the data into something worth using, I could build a better model from scratch, including textures.

      There are perhaps some very niche uses for this in terms of augmented reality. It could be integrated into a game or chat program to give a more realistic version of those make-an-avatar-from-your-webcam gimmicks that seem to gain attention every once in a while. If this guy has developed some very good algorithms, he might get the interest of some of the match-moving software companies like Syntheyes.

      But the reason this kind of thing never shows up in professional 3D packages is that if you are good enough to be using the software professionally, you are good enough not to need these kinds of crutches. It's the 3D equivalent of Dreamweaver's auto-generated spaghetti code.

      • by snadrus ( 930168 )
        Simplification is easy: for each point, ask "if this point were missing, what angular difference would be lost?" and if it's below a threshold, remove it (see the sketch below).
        Then area thresholds could be set, or more logic added to do it for you.
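
        A minimal sketch of that rule: drop a vertex when the normals of its surrounding faces differ by less than some angle. The mesh accessors (`vertices`, `faces_around`, `normal`, `.dot`) are hypothetical, not from any particular package.

```python
# Mark vertices whose surrounding faces are nearly coplanar: removing them
# loses less than `max_angle_deg` of angular detail. Mesh API is hypothetical.
import math

def removable_vertices(mesh, max_angle_deg=2.0):
    threshold = math.radians(max_angle_deg)
    candidates = []
    for v in mesh.vertices:
        normals = [f.normal for f in mesh.faces_around(v)]   # unit normals assumed
        worst = 0.0
        for i in range(len(normals)):
            for j in range(i + 1, len(normals)):
                dot = max(-1.0, min(1.0, normals[i].dot(normals[j])))
                worst = max(worst, math.acos(dot))
        if worst < threshold:          # the surface is locally flat here
            candidates.append(v)
    return candidates
```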
      • by HigH5 ( 1242290 )
        "Genius is one percent inspiration and 99 percent perspiration." Thomas Edison
        I think that the 99% percent is often the problem with these projects. You come up with something, make a proof of concept, but it takes a lot more work to perfect it.
      • by Geminii ( 954348 )
        If you are not good enough to be using CAD software professionally, then you are the 99% of the new market this thing just created.
    • There seems to be a huge gap between these kinds of academic projects and the commercially available programs.

      Indeed. Check out the work of Volker Blanz [mpi-inf.mpg.de]. He was producing amazing 3D models of celebrities from photographs a decade ago, yet you can't get anything remotely that good today (I've used FaceGen, etc.).

    • Sadly, anything useful from academia (not implying that this is) gets spun off into private companies nine times out of ten, despite the fact that it's mostly developed with public money.
  • This is not new (Score:2, Informative)

    by mapuche ( 41699 )

    I haven't read the article yet, but there's already a program doing this with cheap cameras; version 1.0 was free:

    http://www.david-laserscanner.com/

  • People interested in this area may find the 'Motion Capture' Yahoo group useful.

    Its website is located here:
    http://movies.groups.yahoo.com/group/motioncapture/ [yahoo.com]

    A recent interesting message from the group (edited to evade the /. junk filter):

    ---------- Forwarded message ----------
    From: Brad Friedman
    Date: Sun, Nov 15, 2009 at 9:35 AM
    Subject: [motioncapture] releasing some optitrack open source software and code
    To: mocap list

    Hey all.

    Been a while. I've been rather busy with other things.

    I'm releasing some OptiTrac
