Algorithm Clones Facial Expressions And Pastes Them Onto Other Faces

KentuckyFC writes: Various researchers have attempted to paste an expression from one face onto another, but so far with mixed results. Problems arise because these algorithms measure the way a face distorts when it changes from a neutral expression to the one of interest. They then attempt to reproduce the same distortion on another face. That's fine if the two faces have similar features. But when the faces differ in structure, as most do, this kind of distortion looks unnatural. Now a Chinese team has solved the problem with an algorithm that divides a face into different regions for the mouth, eyes, nose, etc., and measures the distortion in each area separately. It then distorts the target face in these specific regions while ensuring the overall proportions remain realistic. At the same time, it decides what muscle groups must have been used to create these distortions and calculates how this would change the topology of the target face with wrinkles, dimples and so on. It then adds the appropriate shadows to make the expression realistic. The result is a way to clone an expression and paste it onto an entirely different face. The algorithm opens the way to a new generation of communication techniques in which avatars can represent the expressions as well as the voices of humans. The film industry could also benefit from an easy way to paste the expressions of actors onto the cartoon characters they voice.
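The paper itself isn't reproduced here, but the region-by-region idea is easy to sketch. The Python snippet below is only a toy illustration under assumed conventions: the landmark index ranges, the region list, and the simple size-based rescaling are assumptions, not the authors' method, and the muscle-inference and shading steps are omitted entirely.

import numpy as np

# Hypothetical landmark index ranges per facial region (assumes a
# 68-point landmark layout; the paper's actual regions may differ).
REGIONS = {
    "mouth":     slice(48, 68),
    "right_eye": slice(36, 42),
    "left_eye":  slice(42, 48),
    "nose":      slice(27, 36),
}

def transfer_expression(src_neutral, src_expr, tgt_neutral):
    """Copy the source's per-region distortion onto the target face.

    Each argument is an (N, 2) NumPy array of 2-D facial landmarks.
    """
    tgt_expr = tgt_neutral.copy()
    for region, idx in REGIONS.items():
        # How this region moved on the source face.
        delta = src_expr[idx] - src_neutral[idx]
        # Rescale the motion to the target's local proportions so the
        # overall geometry stays plausible on a differently shaped face.
        src_size = np.ptp(src_neutral[idx], axis=0)
        tgt_size = np.ptp(tgt_neutral[idx], axis=0)
        scale = tgt_size / np.maximum(src_size, 1e-8)
        tgt_expr[idx] = tgt_neutral[idx] + delta * scale
    return tgt_expr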
This discussion has been archived. No new comments can be posted.

  • This approach should be used to reduce the necessary bandwidth in video conferencing situations!
    • by Khyber ( 864651 )

      You probably generate and transmit more procedural data than you would with a video stream, sorry to break it to you. You also introduce lag, which many of us don't like (and this would suck balls for those using sign language to communicate via video chat).

    • by jblues ( 1703158 )
      This approach was used in Vernor Vinge's 'A Fire Upon the Deep', published in 1992. In the book, bandwidth between faster-than-light network nodes is very limited, so an initial capture of the user is sent, followed by facial expression data... It's a pretty awesome book, actually.
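      Whether a parametric stream actually beats video depends entirely on the numbers; here is a back-of-the-envelope comparison in Python with assumed figures (the parameter count and compressed frame size below are guesses, not measurements):

      FPS = 30
      EXPR_PARAMS = 50                  # assumed expression/FACS weights per frame
      BYTES_PER_PARAM = 4               # float32
      COMPRESSED_FRAME_BYTES = 20_000   # rough guess for one compressed 720p frame

      expr_rate = EXPR_PARAMS * BYTES_PER_PARAM * FPS   # 6,000 bytes/s
      video_rate = COMPRESSED_FRAME_BYTES * FPS         # 600,000 bytes/s
      print(f"expression stream: {expr_rate} B/s, video stream: {video_rate} B/s")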
  • The term "Permanently shitfaced"

    nice
  • now anyone can see their Mona Lisa smile!

  • I do this all the time on Facebook to my friends' pic uploads. Except it's more like: http://manbabies.com/ [manbabies.com]
  • Easy for the Chinese researchers, since... Chinese Faces All Look Alike.

    Bada-bum-tish.

    Thank you, try the waitress, tip the salmon, something-something "all week".

    (And for the humour-impaired, I certainly don't think "they all look alike", I live with a Hong Konger... Just playing with an old trope.)

  • Paging Keanu Reeves, Kristen Stewart...

  • The film industry could also benefit from an easy way to paste the expressions of actors onto the cartoon characters they voice.

    The Adventures of Mr. Incredible (with Commentary) [youtube.com]

    The animator of a feature film begins with a study of an actor's vocal performance, facial expressions and mannerisms, but the end product will be shaped by his own imagination and interpretation of the character.

    One reason for Disney's recent string of hits is that the studio casts its nets widely and avoids using overly familiar celebrity voices in predictable ways.

  • by tbttfox ( 1885852 ) on Saturday March 14, 2015 @03:24PM (#49257479)
    As a professional working on the technical side of 3D animation, I can tell you with 100% certainty that this is nothing new. This technique of scaling the individual parts of the face to match proportions, then applying a muscle simulation, is used everywhere. It's how high-fidelity, realistic facial mocap retargeting works.
    Another good technique is to sample FACS (Facial Action Coding System) style muscle activation data from the source head and just add the individual deformations together on the target head. The proportioning is already taken care of by the definition of the FACS shapes relative to each head.
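    In blendshape terms, the FACS-style transfer described above boils down to something like the following minimal Python sketch. The function and argument names are hypothetical, and it assumes each head already has its own authored set of FACS target shapes:

    import numpy as np

    def retarget_facs(activations, tgt_neutral, tgt_facs_shapes):
        """Apply source-sampled FACS activations to a target head.

        activations:     dict mapping action-unit name -> weight in [0, 1],
                         sampled from the source performance.
        tgt_neutral:     (V, 3) array of the target head's neutral vertices.
        tgt_facs_shapes: dict mapping action-unit name -> (V, 3) array of
                         the target head with that action unit fully engaged.
        """
        result = tgt_neutral.copy()
        for au, weight in activations.items():
            # Add this action unit's deformation on the target head, scaled by
            # the weight measured on the source; proportions come for free
            # because the shapes were authored per head.
            result = result + weight * (tgt_facs_shapes[au] - tgt_neutral)
        return result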
  • I believe I saw a demo of this technology back at Uncertainty '99, when Matthew Brand presented a paper titled "Pattern discovery via entropy minimization" (TR-98-21 from MERL, "A Mitsubishi Electric Research Laboratory"). The demo was a video of an infant who started to lecture the audience on the technique. I was quite impressed. I recently found a copy of the paper via Google.

  • None of the example pictures in the article or the paper on arXiv show the target/source person flashing a toothpaste smile. Does this mean their algorithm is only good for Mona Lisa smiles? Maybe that's their secret (limitation)?

    • by gnupun ( 752725 )

      The target person's smile looks completely computer-generated and not natural at all; it's very far from a toothpaste smile. Real actors in movies need not fear Hollywood replacing them with no-acting-talent pretty faces.

  • :-)

  • GMOD FacePoser [google.com] in addition to GMOD w/Kinect [garrysmod.com] would be a blast!

