Encryption Security

On the (Im)possibility of Obfuscating Programs 216

sl956 writes: "We all know that anybody using the words 'tamper resistant' to describe a software-based solution is incompetent at best. But some of the big players in the DRM field believe in software-only protection schemes (see Cloakware, Hitachi, IBM or Intel). A mostly unnoticed paper presented at CRYPTO'01 (Santa Barbara, CA, August 19-23, 2001, LNCS vol. 2139) *proved* the impossibility of efficiently obfuscating programs. It is the mathematical proof of the impossibility of a software-only DRM system on an untrusted client such as a PC. There are also a lot of interesting theoretical side-effects. You can read the HTML abstract here, or the PostScript full paper here." The paper is from last year, but that doesn't make its conclusion any less interesting. (Of course, even hardware isn't always all that secure, either.)
  • by quantaman ( 517394 ) on Friday March 01, 2002 @03:23AM (#3089099)
    but I found the paper sufficiently obfuscated!!
  • they are all off patching their php software
  • sssca (Score:2, Insightful)

    by 7-Vodka ( 195504 )
    well, if the sssca gets passed, I'm not gonna be the one trying to break any tamper proof software :(
  • proofs (Score:4, Funny)

    by Anonymous Coward on Friday March 01, 2002 @03:25AM (#3089107)
    i have a mathematical proof that shows the impossibility of mathematical proofs, but i can't get it past the lameness filter.
  • software protection (Score:4, Informative)

    by ardiri ( 245358 ) on Friday March 01, 2002 @03:26AM (#3089109) Homepage
    as a developer myself, i spent a bit of time messing around with protection schemes for applications i wrote for the Palm OS platform. i wrote a paper on it, which was made available at PalmSource 2000 and is available here [palm.com]. i enjoyed understanding the inner workings of how they did it - so, i documented it. however, i knew that there was no beating them - the question remained.. how long would it take for them to crack it? does it give me some selling breathing space? (more time = more sales) :P
    • by Troed ( 102527 )
      As a young boy I cracked games on the Atari ST .. making a long (and fun) history short, a member of our group was p*ssed because a software house near him didn't want his protection system since he was a cracker; instead they bragged about the "unbreakable" new system they had and what game they would put it on.


      ... you already know the ending. I cracked it completely in 6 hours and he went back to them with a cracked copy later.


      The only protections I know of that indeed have given "breathing space" involved hardware dongles. No one used pirated copies of Cubase on the Atari ST because they didn't work as they should .. but as soon as versions without dongles appeared on other platforms they were cracked completely in an instant.

      • by ardiri ( 245358 )
        • The only protections I know of that indeed have given "breathing space" involved hardware dongles. No one used pirated copies of Cubase on the Atari ST because they didn't work as they should .. but as soon as versions without dongles appeared on other platforms they were cracked completely in an instant.
        if you use the hardware dongle for "proof of purchase" - just need to patch the check to the serial port :) but, a more reliable method would be to actually have program code *inside* the dongle that is downloaded at runtime to the memory space of the machine and is vital for the execution of the program :) that's a bit harder to "crack" - but, not impossible.. application needs more modification *g*
        • Quite trivial to crack, since the machine can then easily copy the code. The only uncrackable software is one that runs on its own operating system on its own hardware that is physically secured in a way that prevents tampering.
          • I am not going to bother looking it up, but there is a company from either Finland or Sweden that makes a product that allows you to run essential parts of your code on a smart card.

            They even have a way for you to distribute the source code with the essential parts extracted, compile it and run it assuming that you have the card for the program.

            Since I am lazy I am not going to use Google to look it up, but they were at the last CTST conference selling their system.

        • by mpe ( 36238 )
          if you use the hardware dongle for "proof of purchase" - just need to patch the check to the serial port :) but, a more reliable method would be to actually have program code *inside* the dongle that is downloaded at runtime to the memory space of the machine and is vital for the execution of the program :) that's a bit harder to "crack" - but, not impossible.. application needs more modification *g*

          Not that much more modification. All you'd need to do here would be to tack the code from the dongle onto the programme and have the downloading routine look at a certain memory address rather than a peripheral port...
          The only way of making this moderately hard is to have the application run completely standalone, in other words it must contain its own operating system, preferably on unmodifiable hardware, in which case you'd end up with something more like a games console than a regular computer.
      • by uebernewby ( 149493 ) on Friday March 01, 2002 @04:50AM (#3089226) Homepage
        I think you'll find dongle-protected apps such as CuBase, 3D Studio Max (up to v.3) et al have been available cracked for a long time.
      • And exactly what prevents you from taking out the function that checks the dongle?
        It would require a dongle holding a couple of vital parts of the program to work, and even then, assuming you've got one legal copy, you could probably find a way to copy from it.
        A useful way would require telling the CPU to fetch the instructions from the dongle, with no way for instructions outside the dongle to read into the dongle's address space, only to jump into it and start executing.
      • The only protections I know of that indeed have given "breathing space" involved hardware dongles. No one used pirated copies of Cubase on the Atari ST because they didn't work as they should .. but as soon as versions without dongles appeared on other platforms they were cracked completely in an instant.

        It's perfectly possible to find the part of the code which looks for a dongle and alter it to always return a "dongle present" result. This is where obfuscated code could help, but it's especially difficult to write code which is both obfuscated and bug free. Also, if the result is in any way obvious as obfuscated code, you really need to obfuscate the entire program. Otherwise you effectively indicate which bit of the program contains the "security"... (In the same way that if you encrypt emails you want to routinely use encryption, not just for the messages you are worried might be intercepted.)
        Also it's undoubtedly possible to reverse engineer and copy the dongles themselves.
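
        A toy sketch of that point, in Python (all names made up): when the "is the dongle there?" decision is an ordinary check in client code, a one-line patch turns it into "dongle always present".

        # Toy illustration: a single dongle check in client code is trivially defeated.
        import types

        class App:
            def dongle_present(self) -> bool:
                # Stand-in for code that would really poll a parallel/serial port.
                return False

            def run(self) -> str:
                return "running" if self.dongle_present() else "refusing to run: no dongle"

        app = App()
        print(app.run())                                  # refusing to run: no dongle

        # The "crack": replace the check so it always reports a dongle.
        app.dongle_present = types.MethodType(lambda self: True, app)
        print(app.run())                                  # running
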
    • by CaptainSuperBoy ( 17170 ) on Friday March 01, 2002 @07:21AM (#3089417) Homepage Journal
      Was that before or after you spent some time messing with trojans [zdnet.com]? Yeah you're not going to live that one down. Don't expect me to buy any of your software any time soon.
  • by CowbertPrime ( 206514 ) <[sirmoo] [at] [cowbert.2y.net]> on Friday March 01, 2002 @03:27AM (#3089111) Homepage
    I think the conclusion is at best, obfuscated...
    Yes, you can say that obfuscation of programs cannot be /generalized/, but that doesn't preclude obfuscation under very specific conditions. Although they formalized a counter-example to an already special case, which precludes generalization of the concept, that does not mean other specific cases do not apply.
    • by mgv ( 198488 ) <Nospam...01...slash2dot@@@veltman...org> on Friday March 01, 2002 @03:34AM (#3089122) Homepage Journal
      Although they formalized a counter-example to an already special case, which precludes generalization of the concept, that does not mean other specific cases do not apply.

      Of course, mostly the DRM people are interested in making things sufficiently hard to do, not impossible.

      They are driven by profit, not purity of outcome, so if a scheme costs more to run than it delivers, it will not be used.

      Likewise, tweaking a DRM system to maximise returns involves evaluating the cost of the DRM system itself, and the hassle it gives to legitimate customers. Just having a 100% success rate means nothing if you only have 2 customers left.

      Michael
      • Exactly. You will never stop all people from breaking it, but as long as you make it sufficiently hard to do, most people won't bother trying to break it. It's kind of like mail in rebates versus instant ones.. even adding a simple step like mailing a form in for your rebate will greatly reduce the number of people who actually bother going for the rebate.

  • *proved* the impossibility of efficiently obfuscating programs.

    Obviously they have never heard of IOCCC [ioccc.org] :-)
    • Obviously they have never heard of IOCCC [ioccc.org] :-)

      Yeah, and they've never read any of my Uncle Nic's perl... :0)
    • Re:Are you sure? (Score:3, Informative)

      by hornet@ch ( 134771 )
      Oh no, you're wrong, they've heard of it! :-)

      Look at page 3 of their paper, they published a slightly adapted version of the IOCCC Contest winner of '98. They of course adapted it to the paper, therefore I suppose it lost most of its obfuscated features :).

      And in the references list on page 37 you can also find a link to http://www.ioccc.org ...
  • by gnovos ( 447128 ) <gnovos.chipped@net> on Friday March 01, 2002 @03:31AM (#3089118) Homepage Journal
    ...because it means that the ONLY recourse for these money hungry bastards in the "content industry" (is legal prostitution considered an "industry"?) is legislation. As long as they can be fooled into thinking that Mr. Wizbang's new ROT-14 encryption scheme is uncrackable by all but the most devious of minds, they will relax and let themselves sink slowly into the mire of contentment that will someday be their graves. But when people come around spouting off about how impossible it is to have DRM on "untrusted" machines, the only solution is to legislate trust into all the machines in the most draconian and Brotherly way possible.

    PLEASE someone start publishing papers on how all digital content can be protected by XORing it with the number 0x42 and will be secure as such for decades to come.
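
    For the record, here is the entirety of the "protection" being mocked above, as a Python sketch; applying it twice gives back the plaintext, which is the whole joke.

    def protect(data: bytes, key: int = 0x42) -> bytes:
        # "Encrypt" (and, identically, "decrypt") by XORing every byte with 0x42.
        return bytes(b ^ key for b in data)

    ciphertext = protect(b"all digital content")
    print(protect(ciphertext))   # b'all digital content' -- "secure for decades to come"
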
    • As long as they can be fooled into thinking that Mr. Wizbang's new ROT-14 encryption scheme is uncrackable by all but the most devious of minds

      What they need is my tredectuple ROT-14 encryption.

    • by Alsee ( 515537 ) on Friday March 01, 2002 @04:19AM (#3089186) Homepage
      XORing it with the number 0x42

      The correct value should be 0xDEADBEEF.

      -
    • I disagree. I think it would be better if the companies put all their effort into legislative efforts, rather than impossible technological ones.

      Why? Every time some new copy-protection technique is implemented in a device, that device becomes more complex, and therefore more expensive. This expense alone hurts consumers. The other problem is that CP techniques, being impossible to make truly secure, cannot be made public and still remain effective, raising the barrier to entry into the market of player technology (see DeCSS and Linux DVD players), again hurting consumers. Copy-protection schemes are also unethical, since they discriminate against the law-abiding and less technically-skilled population. However, many people do not consider themselves technically competent enough to criticize highly successful technology companies such as Microsoft over technical issues, so they simply accept CP technology with little objection.

      However, many more people understand legal matters fairly well. For example, "I can use my stereo to record the songs from my favourite CDs onto a cassette so I can play them in my car, but the greedy record companies want to make that illegal so they can make me pay again for a cassette of all the songs they want to sell me? Screw that!"

      So my point is that I think the legal campaign these companies will make will fail, especially as it becomes more and more obvious that it is nothing but blatant corruption and power-grabbing, but technical measures are narrowly understood and simply harm the industry without anyone noticing.

  • Mozart (Score:5, Interesting)

    by rjamestaylor ( 117847 ) <rjamestaylor@gmail.com> on Friday March 01, 2002 @03:34AM (#3089123) Journal
    In my Music Appreciation (Apprehension?) class I learned that as a young boy Mozart broke a vaulted DRM of his day by simply attending a concert in an Italian church. The mass that day was kept under lock and key and would only be played once a year; all copies of the music were kept secret. What Mozart did was hear the mass (once) and then go home and write out the entire score as if he were copying the original documents, assisted only by his memory. His scoring was so good he was accused of stealing the score from the church. (Forgive my poor recollection of the anecdote about Mozart's superb recollection...)

    There will always be a Mozart to break the DRM of publicly performed (or distributed) works. DRM is a way of controlling the sharing of some piece of work. In reality, the only way to perfectly safeguard the rights is to not share the work -- or trust people. Hmmm...

    • Re:Mozart (Score:2, Informative)

      Not quite correct. Indeed, the Vatican did keep the score of the Allegri Miserere secret. Mozart didn't quite get it right on the first listening, though - it took him three.

      Essentially correct though. I've often wondered if I'm violating copyright by listening to songs and working out the chords on the guitar. I think my playing is so bad that I can get away with it though.
    • Re:Mozart (Score:5, Informative)

      by Oink.NET ( 551861 ) on Friday March 01, 2002 @05:37AM (#3089281) Homepage
      Here's an excerpt from this [classical.net] article (I like the "effectively ending the pope's monopoly" part):

      The next famous story concerning the Miserere involves the 12-year-old Mozart. On December 13, 1769, Leopold and Wolfgang left Salzburg and set out for a 15-month tour of Italy where, among other things, Leopold hoped that Wolfgang would have the chance to study with Padre Martini in Bologna, who had also taught Johann Christian Bach several years before. On their circuitous route to Bologna, they passed through Innsbruck, Verona, Milan, and arrived in Rome on April 11, 1770, just in time for Easter. As with any tourist, they visited St. Peter's to celebrate the Wednesday Tenebrae and to hear the famous Miserere sung at the Sistine Chapel. Upon arriving at their lodging that evening, Mozart sat down and wrote out from memory the entire piece. On Good Friday, he returned, with his manuscript rolled up in his hat, to hear the piece again and make a few minor corrections. Leopold told of Wolfgang's accomplishment in a letter to his wife dated April 14, 1770 (Rome):

      "...You have often heard of the famous Miserere in Rome, which is so greatly prized that the performers are forbidden on pain of excommunication to take away a single part of it, copy it or to give it to anyone. *But we have it already*. Wolfgang has written it down and we would have sent it to Salzburg in this letter, if it were not necessary for us to be there to perform it. But the manner of performance contributes more to its effect than the composition itself. Moreover, as it is one of the secrets of Rome, we do not wish to let it fall into other hands...."

      Wolfgang and his father then traveled on to Naples for a short stay, returning to Rome a few weeks later to attend a papal audience where Wolfgang was made a Knight of the Golden Spur. They left Rome a couple of weeks later to spend the rest of the summer in Bologna, where Wolfgang studied with Padre Martini.

      The story does not end here, however. As the Mozarts were sightseeing and traveling back to Rome, the noted biographer and music historian, Dr. Charles Burney, set out from London on a tour of France and Italy to gather material for a book on the state of music in those countries. By August, he arrived in Bologna to meet with Padre Martini. There he also met Mozart. Though little is known about what transpired between Mozart and Burney at this meeting, some facts surrounding the incident lead to interesting conjecture. For one, Mozart's transcription of Allegri's Miserere, important in that it would presumably also reflect the improvised passages performed in 1770 and thus document the style of improvisation employed by the papal choir, has never been found. The second fact is that Burney, upon returning to England near the end of 1771, published an account of his tour as well as a collection of music for the celebration of Holy Week in the Sistine Chapel. This volume included music by Palestrina, Bai, and, for the first time, Allegri's famous Miserere. Subsequently, the Miserere was reprinted many times in England, Leipzig, Paris and Rome, effectively ending the pope's monopoly on the work.

    • Actually his dad helped a little too.
    • This is an excellent example of the Final Assault on DRM. It is inconceivable for a law to be passed which requires rights management on all digital works; but it IS conceivable to have all devices enforce the rights of managed works. In other words you can't prevent a performer from giving his works away.

      So the final assault is to record the output of the amazingly high quality work from your amazingly high quality hi-fi, MP3/OGG the result, and you're away.

      This is of course where watermarking comes in - theoretically a watermark should prove that it is a managed work. But we have yet to see a scheme which actually makes watermarks work, either in technology or in practice. After all, how do you tie a watermark to a person ... especially if you're mass-producing DVDs?

      • but it IS conceivable to have all devices enforce the rights of managed works

        Sigh. Only in the U.S. could information have rights. Besides, I thought it wanted to be free, anyway. :-)

  • by Henry V .009 ( 518000 ) on Friday March 01, 2002 @03:34AM (#3089124) Journal
    It was already obvious that this was true.
    Quick proof:
    1)A software-only DRM system attempts to let a product run only in cases where it is not a copy.
    2)It makes its decision based on information content of some kind.
    3)A copy will perfectly replicate all information content. (If it can't, then you don't need DRM.)
    4)If a copy has the same information content as the original, then the DRM cannot distinguish the copy from the original.
    5)Therefore DRM has no way to shut down only the copies.
    6)The only way to make DRM work is to have some sort of information that is impossible or very much harder to copy. Thus, the web-activation type scheme, although IP packets could easily be spoofed.
    7)God, I should have published this years ago, if it weren't so GODDAMN OBVIOUS!
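
    (A tiny sketch of point 3, for concreteness: a bit-for-bit copy carries exactly the same bytes as the original, so no check computed from the content alone can tell them apart. The file names here are made up.)

    import hashlib, os, shutil, tempfile

    src = os.path.join(tempfile.mkdtemp(), "original.bin")
    with open(src, "wb") as f:
        f.write(b"protected content" * 1000)

    dup = src + ".copy"
    shutil.copyfile(src, dup)               # a "perfect" copy

    digest = lambda path: hashlib.sha256(open(path, "rb").read()).hexdigest()
    print(digest(src) == digest(dup))       # True -- nothing in the bytes flags the copy
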
    • Not that obvious... (Score:2, Informative)

      by Slef ( 8700 )
      Have you read the paper? What you say is clearly obvious, but that's not what the paper is about. They are not proving that you can't run a copy of a piece of software; they talk about retrieving an encryption key hidden inside a program.
    • I'm going to have to give this another go, because moderators didn't quite catch on. Your point number (2) is where you made a mistake. The DRM can make its decision based not only on the information content but also on the DRM's execution environment. If it is able to find _any_ information that is unique to a particular machine (quite easy actually), then it can enforce copy protection through public key cryptography. During the transaction that grants a user a copy of the product in question, the producer can insert a watermark including this unique information and (unforgeably) digitally sign it. The DRM can then check that the signature is correct and matches the unique identifying information. So yes, DRMs can enforce copy protection--through cryptography.

      Now, having said that, if the DRM itself is under attack, then it can be altered to not enforce signatures, or (as someone already suggested) run in a sandbox where all unique identifying information can be forged. This is a different problem.

      From what I read of the paper, it stopped short of making claims about copy protection, and basically stated that it is impossible to obfuscate a program, not that it was impossible to sign data or verify its source. So, no, it's not obvious, and you are oversimplifying with an erroneous proof of a claim they didn't make.
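
      A minimal sketch of the node-locking idea described above, assuming the third-party 'cryptography' package (the key handling and the "unique" machine information are illustrative only): the vendor signs the machine identifier at purchase time, and the client verifies the signature against the vendor's embedded public key.

      import uuid
      from cryptography.hazmat.primitives.asymmetric import ed25519

      # Vendor side (done once per sale).
      vendor_key = ed25519.Ed25519PrivateKey.generate()
      machine_id = uuid.getnode().to_bytes(6, "big")     # "unique" hardware info (MAC address)
      licence_sig = vendor_key.sign(machine_id)          # shipped to the customer as a licence file

      # Client side (only the public key is embedded in the program).
      vendor_pub = vendor_key.public_key()

      def licence_is_valid(sig: bytes) -> bool:
          try:
              vendor_pub.verify(sig, uuid.getnode().to_bytes(6, "big"))
              return True
          except Exception:
              return False

      print(licence_is_valid(licence_sig))               # True on the licensed machine only

      As the comment itself notes, this only binds a licence to a machine; if the checking code is under the attacker's control, the check can simply be patched out.
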
  • by Anonymous Coward
    ... if they want to use the features of "late compiling". IL reads very easily, and there are some obfuscators around. :-)
  • Not quite (Score:4, Informative)

    by Alomex ( 148003 ) on Friday March 01, 2002 @03:43AM (#3089133) Homepage
    I read the article last year when it came out. The results are not as far reaching as they sound from a first reading of the abstract.

    They proved that not every function is obfuscatable. However for all we know, it might be that most functions are obfuscatable, which is good enough. Also, the notion of obfuscation is somewhat contrived (this is because of the lack of a generally well defined notion of what de-obfuscation is; they did the best they could given that this is a new field).

    For example, proving in general that a program terminates is impossible. Nevertheless, millions of lines of code are put out every day which we are positive terminate, because we restrict ourselves to designing programs that always do so (even though the occasional bug gets in the way).

    • I don't have a .ps reader, so all I could read was the abstract, and judging by that maybe I couldn't read the article anyhow... Can you explain a bit more of it in something resembling English, please?

      1. Can a program transform other programs so as to preserve the functionality while making the output program harder to read? Yes, because there are a wide variety of programs in common use that do just that. They're called compilers and assemblers.

      2. How resistant to reverse engineering is it possible to make an obfuscated program? Apparently being mathematicians, Barak et al. would probably go for absolute unbreakability, or breakability only in exponential time, while the MPAA is obviously willing to settle for quite a lot less...

      the notion of obfuscation is somewhat contrived

      I'm in considerable doubt as to what "anything one can efficiently compute given O(P), one could also efficiently compute given oracle access to P" means, but it seems not only contrived but even backwards. Translation please?

      They proved that not every function is obfuscatable. However for all we know, it might be that most functions are obfuscatable, which is good enough.

      True. That a mathematician can write a program specifically to break your system doesn't mean your system is useless...
  • by Mike Connell ( 81274 ) on Friday March 01, 2002 @03:43AM (#3089134) Homepage
    If you look at the abstract page, you'll see that it hasn't been updated since 1970. It took 31 years to get it accepted for a conference? Wow, that sure makes me feel better about academia ;-)
  • "Tamper Resistant" (Score:5, Insightful)

    by JohnBE ( 411964 ) on Friday March 01, 2002 @03:43AM (#3089135) Homepage Journal
    I don't want to be a pedant, but resistant doesn't mean immune in all contexts, it also means "the attempt to prevent something by action or argument" [or something to that effect - I don't have a dictionary within reach].

    So tamper resistant isn't an absolute statement and often refers to the ability to buy time. However many companies (typically the saled dept.) often refer to it as though it buys *complete* peace of mind, yet even physical bank safes are rated by time to resist cracking/breaking.

    I think this paper is good because it means that PR claims can be provided with a counter argument from a third party that provides a proof. However I think that anyone using the word tamper resistant is not an imbecile; I think that anyone who uses it in the context of tamper-proof is an imbecile. Resistant has so many contexts.
    • Damn my awful typing, 'saled' should say 'sales' and in the last paragraph 'the word tamper resistant' should say the 'the statement tamper resistant'. Cheers.
  • by CptnKirk ( 109622 ) on Friday March 01, 2002 @03:47AM (#3089139)
    Now that this proof has been published, software companies have no reasonable expectation that their software-only copy protection or DRM system is secure. What does this mean?

    If I wrote a copyrighted piece and then used a form of copy protection that I knew people could break (similar to what some people were doing to "encrypt" song titles on Napster a while back), do I have the right to sue them under the DMCA (and a while back the judge said no)? Maybe so, maybe not, maybe it's a grey area, maybe there are other loopholes I know nothing about. But one thing I think the courts have upheld is that legally there is no degree of separation.

    For instance, if a judge rules that breaking someone's "lame encryption" does not violate the DMCA because they knew ahead of time that a person could break it, then adding to the complexity shouldn't change anything. If you have a proof that shows that software-only DRM on an untrusted client is not secure, can you or should you be able to claim damages when someone eventually exploits the hole you knew had to exist?

    Of course IANAL, and I'm sure this will not cause the DMCA to crumble, but I think it raises some questions. Similarly, are you allowed to advertise that such systems based on obfuscation are secure, or should they be clearly labeled as deterrents, and not iron clad security?

    • The result is not particularly surprising. In some sense, the DMCA exists precisely because people can break these schemes: where technology can't enforce the behavior, you need the power of the state to enforce the behavior.
      • You're quite right, and maybe the DMCA question isn't as debatable. How about the question of liability? Slashdot is currently discussing in another story the question of who is liable for buggy and insecure software. Take this example for instance.

        If it's decided that a company is responsible for its security holes, can/should they be held liable for damages to a third party? For instance, many labels are now using some form of DRM for their online services (PressPlay, MusicNet, Napster, etc). Since there isn't a lot of SDMI compliant hardware out there, these services are forced to use a software based DRM system on untrusted computers.

        Should BMG now be able to sue Microsoft for damages when someone figures out the obfuscation being used in Media Player 8? Is this akin to selling a service with a known unpatched security hole? I dunno, but I think it's an interesting question.

        • it's an interesting question, but ultimately irrelevant - this technology will never be sold to the end consumer, only licensed. Real licenses, not EULAs, that will be fully supported with signatures and penalty clauses and NDAs and the whole shebang. And you can be damn sure that while there may be some sort of fine if the scheme is cracked in less than X amount of time, there will certainly be an "immunity to damages" clause.
  • by neonstz ( 79215 ) on Friday March 01, 2002 @03:47AM (#3089140) Homepage

    If a piece of software (with some kind of copy-protection) runs on a computer, it can be cracked to run without that protection. Tools such as Procdump will start the program, and after the user has clicked yes on a nag box and the program is decrypted, Procdump will scan the memory and rebuild the executable.

    If a movie or music file is protected by some encryption it still has to be decrypted to be played. There are many ways to crack this. Crack the encryption, intercept the data stream after it has been decrypted, or just record the analog stream. A small quality loss, but with no protection at all. I remember reading an article by Tron Øgrim, where he had interviewed a boss in a publishing corporation or something like that about DeCSS and ways to protect digital data (movies in this case). He asked if they had some way to stop people from just using a camcorder to record the TV, and the boss-guy said no, and I had the impression that they just hadn't thought of it. They can protect their movies and music with super-strong encryption, but people still have to be able to watch the movies or listen to the music. If people can watch or listen to it, they will be able to record it.

  • by Anonymous Coward on Friday March 01, 2002 @03:49AM (#3089141)
    For some tools and practical information on reverse engineering,
    The Centre for Software Maintenance [uq.edu.au] is hard to beat.

    Of particular interest is dcc [uq.edu.au] , the GPL decompiler.
    Input ".exe" files, and output high level C code.

    • dcc != practical (Score:3, Interesting)

      by eddy ( 18759 )

      dcc isn't practical though, unless you've got a heavily modified version. The official version is hardwired to only support very small programs, and fixing that would require extensive rewriting of its internal structures.

      Not saying that it isn't interesting, only that today, no one (I'll wager) is using dcc for practical reverse-engineering.

      There's also rec [backerstreet.com] (reverse-engineering compiler), but it's sort of limited in the kind of input it allows.

      IDA [datarescue.com] on the other hand is the tool of choice for the kind of reverse-engineering you're thinking of. If there were to be a source-generating backend on that one, you'd see a lot of worried faces, I assure you.

    • by Anonymous Coward
      Yes, although dcc won't decompile MS Word, I can see its value for a whole class of problems: device drivers. It would seem to have a lot of potential in that area.

      And yes, IDA is quite good. Having experienced the fun of IDA, I can vouch for its usefulness. It requires you to use that computer between your ears, but with a bit of skill you can do wonders with it. For completeness, Sourcer [v-com.com] should be mentioned. It is quite good too, and somewhat orthogonal to IDA. However, I find myself returning to IDA, especially for the "tough" parts.

  • A corollary is that Warcraft III was doomed to be cracked, and that no matter what they do, it will be 'easy' to hack a cheat. Possibly a realization of this will lead to a different approach to game design a la Bioware: no effort is spent to stop cheaters, you just have to trust your friends.
    • Not completely correct. You can prevent cheating by not trusting the client. You could have the server do all the work and have the client just render the result. Of course, this is very inefficient. So you have to find a middle ground; trust the client as little as possible without overtaxing the server and requiring too much bandwidth.
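
      A minimal sketch of the server-authoritative idea above (everything here is illustrative): the server owns the game state and validates each requested move, so a hacked client gains nothing by lying about its position.

      GRID = 10

      class Server:
          def __init__(self) -> None:
              self.positions = {"p1": (0, 0)}

          def request_move(self, player, dx, dy):
              if abs(dx) + abs(dy) > 1:                 # reject "teleport" cheats
                  return self.positions[player]         # ignore the request
              x, y = self.positions[player]
              nx = max(0, min(GRID - 1, x + dx))
              ny = max(0, min(GRID - 1, y + dy))
              self.positions[player] = (nx, ny)
              return (nx, ny)                           # the client merely renders this

      server = Server()
      print(server.request_move("p1", 1, 0))            # (1, 0) -- legal step, accepted
      print(server.request_move("p1", 5, 5))            # (1, 0) -- cheat attempt ignored
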
  • by KNicolson ( 147698 ) on Friday March 01, 2002 @03:55AM (#3089150) Homepage
    And they were always very careful to point out that their software is merely tamper *resistant*, not tamper *proof*. This is not just the sales guys, but the engineers too, and even in meetings if I accidentally said, for example, "*blah* will prevent copying", they were quick to correct my mistake.
  • by geoff lane ( 93738 ) on Friday March 01, 2002 @03:58AM (#3089157)
    All programs have to be "interpreted" by something when run. Usually it's a hardware CPU but it could just be a good software emulator. If a program is running on a s/w interpreter emulating a CPU it's trivial (though lengthy) to determine the algorithms and data used by the program. It doesn't matter how hidden the code and data are, when they hit the CPU they must make sense.
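
    A small illustration of that point, as a Python sketch using only the standard library: run the "hidden" logic under a tracing hook and every step becomes visible, however obfuscated the source looked.

    import sys

    def secret_check(x: int) -> bool:
        # Imagine this arrived heavily obfuscated; the interpreter still
        # has to execute it one step at a time.
        return ((x ^ 0x5A) + 3) % 7 == 0

    def tracer(frame, event, arg):
        if event == "line":
            print("executing", frame.f_code.co_name, "line", frame.f_lineno)
        return tracer

    sys.settrace(tracer)       # the "instrumented interpreter"
    secret_check(42)
    sys.settrace(None)
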

    • My SOURCE codes are copy protected because they are written in Object Pascal. The unwashed masses can't crack 'em because they don't program, while persons savvy enough to make sense of the code will sniff that anything not in C/C++/Java/Perl/CLisp is not worth bothering to read.
  • would be the Realnetworks DVD software used by the DeCSS team.
    As many Linux DVD enthusiasts already know, DeCSS was made by looking at the binaries that the Realnetworks DVD software contained and locating the decryption key.
  • by mirko ( 198274 ) on Friday March 01, 2002 @04:05AM (#3089168) Journal
    NT is built upon a HAL (Hardware Abstraction Layer), which means the hardware is actually seen as software, so it is obvious DRM hardware can't be 100% secure!?

    Now, if they promote brain implants, then they might have the users DRM'ed, which will be quite a different thing to bypass...

    Unless one finds enough red pills.
    • NT is built upon a HAL (Hardware Abstraction Layer), which means the hardware is actually seen as software, so it is obvious DRM hardware can't be 100% secure!?

      Versions 5.0 and later of NT Kernel, used in Windows 2000 and Windows XP, include support for signed device drivers. When you install a device driver, the OS tells you whether or not Microsoft Hardware Compatibility Labs has digitally signed the driver. Signed audio drivers must support a function to turn off all cleartext digital outputs, and applications can choose to output only to signed drivers. See also Secure Audio Path [pineight.com].

      However, without watermarks, Microsoft won't be able to stop D/A/D copying, and the standard SDMI watermarks have already been broken.

  • by WyldOne ( 29955 ) on Friday March 01, 2002 @04:30AM (#3089196) Homepage
    It's the reason that 40-bit encryption is no longer considered secure, and why RSA is secure with 1024 bits for now.

    When Beowulf clusters came out (obligatory reference), lots of 'unbreakable' encryption was considered suspect (e.g. DES). Any encryption system is only secure for a limited amount of time. When new hardware/software comes out, the limit is shortened.

    I remember a hardware 'key' system that plugged into the parallel port, with all the circuitry encased in a solid block of black plastic. It was broken by sampling the data in and out; the crack then wedged itself in and emulated the hardware key (software replaced hardware). The real trick is to spend a reasonable amount of money to protect your data/programs relative to what you might get in monetary compensation. E.g. don't put a $40,000 lock on a $2 product.

    I think the real question is this: what are they trying to protect, and for how long? Could you guarantee that some code would get 5 yrs of time where the encryption is unbreakable? A twisty mind may think up an interesting 'unbreakable' codec, but a differently twisted mind can crack it.
    • When Beowulf clusters came out (obligatory reference), lots of 'unbreakable' encryption was considered suspect (e.g. DES). Any encryption system is only secure for a limited amount of time. When new hardware/software comes out, the limit is shortened.

      Not so fast. Moore's law states that transistor density (and thus computer power per square foot) doubles every 18 months, and a doubling of computer power reduces effective key length by only one bit. Given that one of the world's largest clusters [distributed.net] hasn't yet cracked a 64-bit key, barring some sort of quantum breakthrough, I see a 128-bit key as potentially running into the limits of the silicon that underlies our current classical computing architecture. Do you really believe that Moore's law will hold for the next century (i.e. time for 64 doublings)?

      E.g. don't put a $40,000 lock on a $2 product.

      More like a $2 million product if you sell one copy to a pirate who makes 2 million copies through a peer-to-peer file sharing network.
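
      The arithmetic behind that estimate, as a quick sketch (the numbers are the parent comment's assumptions, not measurements): one doubling of compute removes roughly one bit of effective key length, and Moore's law gives about one doubling every 18 months.

      MONTHS_PER_DOUBLING = 18

      def years_until_crackable(key_bits: int, crackable_today_bits: int = 64) -> float:
          """Years until brute force reaches key_bits, if crackable_today_bits is feasible now."""
          doublings_needed = key_bits - crackable_today_bits
          return doublings_needed * MONTHS_PER_DOUBLING / 12

      print(years_until_crackable(128))   # 96.0 -- i.e. roughly a century, as argued above
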

  • by jpmorgan ( 517966 ) on Friday March 01, 2002 @04:33AM (#3089200) Homepage
    There's a paper called Protecting Mobile Agents against Malicious Hosts by Tomas Sander and Christian F. Tschudin, which demonstrates it's possible to write a program which can compute a digital signature or various other functions in such a way that it's impossible for the host to hijack the process, i.e., it's cryptographically hard to reverse engineer the program to extract the public key being used, or the function being computed. (This paper has been used for various purposes, including proving that it's theoretically possible to write computer viruses whose signatures are impossible to detect.)

    These papers aren't contradictory, there are important differences between the results.

    Ultimately, one paper demonstrates that a certain type of program (which would be useful in implementing a DRM scheme) is impossible, while the other demonstrates that another, similar type of program (which would also be useful in implementing DRM schemes) is possible (and shows how to create such a program, and gives a non-trivial example).

    Is this the theoretical end of all DRM as the poster is suggesting? Not yet.
  • Given a DRM program that relies on certain inputs (encrypted content, permissions etc) to produce the desired output (viewable media), one can construct another program to provide it with these inputs from another source, and divert its output elsewhere as desired.

    So Eisner really does need to outlaw Turing machines to have his way.
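
    A minimal sketch of that wrapper idea, with made-up names (the decode() routine below stands in for a real DRM "black box"): supply the inputs yourself and capture whatever it emits.

    import io
    from contextlib import redirect_stdout

    def decode(encrypted_content: bytes, permissions: dict) -> None:
        # Stand-in for the real DRM routine: it "renders" cleartext to stdout.
        if permissions.get("may_play"):
            print(encrypted_content[::-1].decode())    # toy "decryption"

    captured = io.StringIO()
    with redirect_stdout(captured):                    # divert the output elsewhere
        decode(b"tnetnoc terces", {"may_play": True})  # feed it inputs of our choosing

    print("diverted copy:", captured.getvalue().strip())
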
  • by cube_mudd ( 562645 ) on Friday March 01, 2002 @04:45AM (#3089219)
    I attended the 2002 IPAM Crypto conference at UCLA where Steven Rudich gave a presentation on this. There is an important point that, from reading the comments thus far, is not being appreciated.

    The paper does not say that programs can't be obfuscated. What it does say is that there can be no generalized "obfuscator" that you run your program through and voila, you've got an obfuscated version. However, program obfuscation is possible on a per program basis. Simply put, the more obfuscated a program is, the more difficult it might be for someone to reverse engineer it.

    The folks at cloakware [cloakware.com] have done what's supposed to be a bang up job of embedding AES keys in an obfuscated client. What that means is that you can use powerful, yet easy to compute, block ciphers with symmetric keys for "public" key cryptography. The clients will have your key embedded in the program, but in theory they won't be able to recover it. As the paper proves, Cloakware has to do the obfuscation on a program by program basis. They can't have a generalized obfuscating machine because such a machine can't exist.

    Now, while I firmly believe that perfect DRM is an impossible goal (assuming no SSSCA), good enough DRM is certainly conceivable. If CSS had been obfuscated, DeCSS might have come out much later than it did. Program obfuscation could easily be used by those who want DRM. They'd have to be prepared to be in a digital arms race, but they could probably at least give those who want to crack DRM a run for their money.

    All things considered, we'd be better off if content providers were willing to trust software DRM rather than forcing all non copy-compliant hardware out of existence.
    • So, can you obfuscate an encrypted interpreter?

      I've been wanting to use Python for applications that require guarding against malicious users --yes, in effect I am looking for Locked Source, flame away.

      I am wondering if you can compile a python interpreter with an embedded public key; you could then encrypt your code with your private key and still be able to ship it to a co-lo or an untrusted client site.

      However, I cannot see how this can be generalized without the public key being extractable from the interpreter executable itself or from the code itself in RAM... Thoughts?
      • You've recreated the protection scheme of the Atari 7800. There was no need to encrypt the binaries, the console wouldn't run any ROM that wasn't signed by Atari. Since the average 14 year old of the time wasn't up to modchipping, this was an effective way to control developer access to the platform.

        You're not going to be able to deny access to your code from the clients forever. As you say, the public key and therefore the code is recoverable. As a security method against script kiddies, though, your idea has merit. They would have to be able to replace your public key with their own public key in order to execute altered code. This would have to be combined with other security methods like Tripwire or Aide to make something truly effective. I wouldn't even bother obfuscating the code or the public key; just sign your program and stipulate the use of the key-enabled Python.

        BTW The Atari 7800 private key was lost long ago. 7800 emulators don't even bother to check the signatures on the ROMs. Contrary to popular belief, 7800 ROMs were not encrypted, only signed. This also means a 7800 could be chipped to allow new 7800 games to be played. Don't laugh; new titles have been created for 2600s, ColecoVisions, Vectrex, and others.
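
        A minimal sketch of that signed-loader idea, assuming the third-party 'cryptography' package (the key, script and loader are illustrative, and, as noted above, this authenticates the code without hiding it):

        from cryptography.hazmat.primitives.asymmetric import ed25519
        from cryptography.exceptions import InvalidSignature

        # Vendor side: sign the script once, ship script + signature.
        signing_key = ed25519.Ed25519PrivateKey.generate()
        script = b"print('hello from signed code')"
        signature = signing_key.sign(script)

        # Loader side: only the public key is embedded in the interpreter.
        EMBEDDED_PUBLIC_KEY = signing_key.public_key()

        def run_if_signed(code: bytes, sig: bytes) -> None:
            try:
                EMBEDDED_PUBLIC_KEY.verify(sig, code)
            except InvalidSignature:
                raise SystemExit("refusing to run unsigned or altered code")
            exec(compile(code, "<signed>", "exec"))

        run_if_signed(script, signature)                       # runs
        # run_if_signed(script + b" # patched", signature)     # would be rejected
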
    • This is an interesting point. Just having read the abstract, it seems that the paper proves this in a way similar to the halting problem proof: I'm going to build this one thing that you can't obfuscate.

      The real question is whether or not there is a class of programs that can be obfuscated.

      I really wonder about cloakware; it seems like a kernel debugger could find the key that goes into code that looks like AES, or even profile the cache behavior of a normal AES algorithm, and try to detect the running of AES in the actual program.
  • I have to say that "the (Im)possibility of Obfuscating Programs" should be self-evident, particularly to anybody with a detailed knowledge of CS.

    In order to function, the program must be 'interpreted' in some way. Since that interpreter could be an engineer, the *best* that can ever be achieved is to make that task more difficult, not impossible.

    Since openness is in the interests of all Computing Engineers we need to debunk the urban myth that it's possible.

  • by 0xA ( 71424 ) on Friday March 01, 2002 @06:04AM (#3089308)
    It is the mathematical proof of the impossibility of a software-only DRM system on an untrusted client such as a PC.

    Okay look guys, I know this, you all know this but let's not tell the suits okay.

    I like watching them fuck it up

  • by eddy ( 18759 ) on Friday March 01, 2002 @06:54AM (#3089375) Homepage Journal

    Want to know what is possible? Want something to smile about when you hear about the latest and greatest smartcard system? Just curious about how one actually can go about rev-eng'ing a chip?

    You owe it to yourself to read the following paper: Design Principles for Tamper-Resistant Smartcard Processors [cam.ac.uk] and check out the slides [cam.ac.uk] for lots of interesting pictures.

    Everything from how you use acid to remove the packaging without destroying the chip logic itself, to the actual microprobing to extract information from the circuit.

  • It CAN be done... (Score:2, Interesting)

    by L-One-L-One ( 173461 )
    Though the work presented at Crypto 2001 may prove that it's not possible to provide program obfuscation in the general case, some other researchers have shown how to do obfuscation in more restrictive, yet powerful scenarios.
    For example, there is a paper that describes a method to do Function Hiding [nec.com]. This allows one to compute a function on an untrusted host. A lot of problems can be modeled that way, and though we may never see methods to provide obfuscation in the general case, it does not rule out the possibility of obfuscating special classes of programs.
  • So what if perfect obfuscation is impossible? So is perfect encryption, short of employing one-time pads or exotic quantum devices.


    The point of obfuscation, however imperfect, is to drive crackers crazy to the point that they give up trying to break it. It really isn't necessary to have perfect obfuscation (even if there were such a thing). All you have to do is make the code so twisty-turny, with redundant checks, weird loops, self-modifying code and more, that the cracker gives up exasperated.


    Let's face it, there are very few programs good enough to warrant someone sitting down for weeks trying to break them. Hell, there comes a point where it's simply cheaper to buy them than the time you waste trying to crack them.


    If you want to see some good tips on making software crack resistant, try here [inner-smile.com].

    • Hell, there comes a point where it's simply cheaper to buy them than the time you waste trying to crack them.

      You're assuming a cracker is motivated by "monetary profit". This may be true for some crackers, or some crackers under some circumstances, but it totally ignores the much more likely reason for crackers being crackers, namely that they enjoy the challenge.

      If you look around you will see that interesting schemes attract crackers like honey does bees. Crackers hone their skills by creating "crackme's" for each other, where they show off new techniques. Days can be spent repairing a dummy executable purposely broken by another cracker, dissecting layer after layer of encryption and obfuscation, and then reversing its core functionality into HLL -- all for the fun of it.

      A good cracker is something amazing to watch. Just like there are wannabe-hackers and a few superior wizards, there are a _lot_ of wannabe-crackers ("Ohh! I can nooop!") and very few wizards. Some of these wizards do NOT engage in cracking for distribution.

      The point of obfuscation, however imperfect, is to drive crackers crazy to the point that they give up trying to break it.

      Dedicated and passionate crackers never give up. However, the delay between release and published crack may be valuable to the obfuscator. But at the same time, if you release a product that will take crackers weeks to analyze, it's actually quite likely that some warezd00de somewhere will simply card the software, and distribute that.

      So there are the producers which want to hold off cracking as long as possible, and there are the lUser-hordes who want the cracked software as soon as possible, but in between there are a lot of crackers who, for the most part, couldn't care less about time-frames.

      New target, new protection-scheme, new puzzle.

      IANAC.

  • Dude, shut up, or else theyll move to schemes that ARE impossible to crack ;-)
  • A point that is often missed is that DRM and the DMCA are about protection, not security. Protection aims to take reasonable steps to prevent damage, and to introduce means to control damage.

    The DMCA is a backup for DRM. It does go too far, though. But the DMCA has been passed because it is clear that, in the light of the failure of existing DRM techniques, industry has been unable to resort to existing laws for damage control.

    Copyright is an asset and as such needs protection. I disagree with the term of copyright and repeated extensions, but the rationale for having it in the first place is sound.

    DRM is a first line of protection of copyright, and should prevent casual trespass or theft. A successful DRM does not completely prevent duplication; it prevents casual duplication and should present a barrier for illegal mass duplication.

    It is widely acknowledged (even within the record industry, although not publicly) that DRM cannot provide security -- it cannot prevent determined (or even eager) crackers from getting around the protections. It doesn't need to. Once the number of transgressions is limited, it is possible to resort to legal action in those cases. DRM is intended to reduce the number of transgressions to an acceptable, and manageable, level.

    • When writing your congressman about the stupidity of the SSSCA, please take note of the following:

      Copyright is an asset and as such needs protection. I disagree with the term of copyright and repeated extensions, but the rationale for having it in the first place is sound.

      DRM is a first line of protection of copyright, and should prevent casual trespass or theft. A successful DRM does not completely prevent duplication; it prevents casual duplication and should present a barrier for illegal mass duplication.


      DRM is worthless for achieving those goals. This will become evident to the Media Cartels, and even to our thickheaded Congress, though apparently not before they destroy the industry most responsible for America's prosperity over the last decade.

      The software industry is even more vulnerable to copyright violation than the music and film industries ever will be. It always has been, by virtue of existing within a medium in which an infinite number of perfect copies can be made at virtually no cost. As game and other software designers learned over time, no copy protection worked, even for the length of the development/sales cycle of the product. Whatever scheme they came up with could be defeated, and no amount of laws banning such activity is going to have any effect on that, since the act of copyright violation is/was already illegal and the abusers were perfectly willing to disregard the law regardless.

      What does work is product serialization, perhaps coupled with stamping a person's identity (e.g. name, ip address) on the product. No one with any sense of self preservation is willing to share a copy of a product that can be traced back to them, or traced back to their credit card.

      Yes, there are people who trade in warez. The law already has plenty of clout to deal with them. But the vast majority of people who use proprietary software pay for it and are not willing to share it with their family and friends, precisely because of the serialization and identity coupling approach I just described.

      This will work perfectly fine for movies, television, music, and any other "infinitely copyable" product you care to name, without the need for draconian laws, without the need for DRM, indeed, without the need for the DMCA.

      The software industry, whose very bread and butter are most vulnerable to copyright violation, already solved this problem without running to Uncle Sam for new, coercive, draconian, indeed, some might say Orwellian, legislation. I suggest the entertainment industry do the same, and I suggest anyone writing their congresspeople make this point very clear to them.

      The problem has been solved by the software industry. The RIAA and MPAA do not need the SSSCA to protect their profits, period.
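
      A minimal sketch of the serialization-plus-identity idea a few paragraphs up (the vendor key, file layout and buyer record are all made up): each sold copy gets the buyer's details plus a keyed tag appended, so a leaked copy is traceable to its purchaser. It deters sharing rather than preventing copying, which is exactly the point being argued.

      import hashlib, hmac, json

      VENDOR_SECRET = b"vendor-private-serialization-key"    # kept server-side

      def stamp_copy(content: bytes, buyer: dict) -> bytes:
          record = json.dumps(buyer, sort_keys=True).encode()
          tag = hmac.new(VENDOR_SECRET, content + record, hashlib.sha256).hexdigest()
          return content + b"\n---LICENSE---\n" + record + b"\n" + tag.encode() + b"\n"

      def trace_copy(stamped: bytes) -> dict:
          content, _, rest = stamped.partition(b"\n---LICENSE---\n")
          record, _, tag = rest.rstrip(b"\n").rpartition(b"\n")
          expected = hmac.new(VENDOR_SECRET, content + record, hashlib.sha256).hexdigest()
          if not hmac.compare_digest(expected, tag.decode()):
              raise ValueError("license record tampered with or missing")
          return json.loads(record)

      copy = stamp_copy(b"the product bytes", {"name": "A. Customer", "order": 12345})
      print(trace_copy(copy))     # identifies the buyer of this particular copy
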
  • But it is possible (Score:3, Insightful)

    by Great_Geek ( 237841 ) on Friday March 01, 2002 @10:16AM (#3090070)
    First of all, let me state that my day job is CTO of Cloakware (as mentioned in the post - the leader in Tamper-Resistant Software, along with some other 2-bit companies :-) This is actually jumping the gun on some announcements that we are about to make (but those will be mostly PR pieces that are of less interest to this audience).

    I'd like to make several points:
    - what the "(im)possibility" paper says
    - "we all know" does not mean its true
    - lots of other published works
    - resistance is not an absolute thing

    timothy has misunderstood the importance of the "(im)possibility" paper. The breakthrough is that this is the first real theoretical treatment of obfuscation. They show that it is not possible to build a totally automated system that is Really Secure (to vastly over-simplify, they construct a program that actively leaks a single bit and then show that no obfuscation program can protect this program against itself). This is really interesting but not directly applicable to what we do - we work with our OEM customers to help design the system, the protocol, and the programs so that all the pieces work together; then we "cloak" the critical pieces. (I spoke to some of the authors before the conference, and many Big Names during Crypto'01; I think it is fair to say that most knowledgeable people have this view.)

    As to the "we all know" truism; it is clearly not true. Real life examples abound - any old, large software system is hard to fix since people don't understand the relations between modules (hence the market for reverse-engineering tools). These systems are Tamper-Resistant. The well known IOCCC (International Obfuscated C Code Contest) is another good source of Tamper-Resistant programs. In a manner of speaking, the goal of Cloakware is to achieve this Tamper-Resistance on demand, for easily maintained code.

    The "(im)possibility" paper is a breakthrough on the theory side, but many other people (including us) have published on the practical problems. Some names include Cohen, Collberg, Forest, Wang, Knight. There are many schemes that are reducible to various complexity classes, usually NP-complete, and we have one that is PSPACE-hard. All of these papers are correct; there is no conflict.

    Lastly, "security" is not binary and has many different attributes. Each application has its unique requirements. For example, diplomatic files are protected for many decades or centuries; a Britney Spears song probably needs only a few months; real-time stock market quotes for 15 minutes. Factors like Usability, Speed, Deployment are often more important than raw security.
  • by Sloppy ( 14984 )

    Don't need a proof. Just look at it this way: You want code that a computer has to be able to figure out how to execute, but a human can't? News flash: Humans are smarter than computers. By a lot.

    The day that changes, it will be big news that totally eclipses anything coming from the entertainment industry. We'll be too busy enslaving the AIs to entertain us for free instead of Hollywood (good version) or getting vaporized by Skynet-launched nukes (bad version).

  • by Skim123 ( 3322 )
    My algorithms professor from last quarter, Russell Impagliazzo, worked on the paper. Hoo bah.
  • Encrypted software. (Score:2, Interesting)

    by BitterOak ( 537666 )
    I don't think it will be long before CPUs are deployed with built-in encryption units. Each CPU would have a public/private keypair with the private key sealed up forever in the chip and the public key readily available.

    Commercial software could then be encrypted.

    When you install a new piece of software, your public key is read out and you type a product authorization key which is printed on a card in the box, and this is sent via the Internet to the vendor. The vendor checks that the product key hasn't been used before and then encrypts the session key for the software package with your CPU's public key. This encrypted key is sent back to you and stored in a file on your computer's hard drive. When you launch the application, the loader reads the encrypted session key into the CPU and issues a special machine instruction which causes the CPU to decrypt the session key and store it in a CPU register which can't be read out by anyone. From that point the CPU can read the encrypted software code and execute it. The plaintext code is never exposed outside the CPU.

    This would not only provide perfect copy protection for software, but also allow DRM in software that can't be cracked.

    Expect this soon after the SSSCA passes. Technologically, it wouldn't be hard to implement.
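
    A pure-software simulation of the flow described above, just to make the protocol concrete (assumes the third-party 'cryptography' package; in the real proposal the private key would be sealed inside the silicon, which no ordinary software can imitate): a per-CPU keypair wraps a per-title session key, and only the holder of the private key can recover runnable code.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # "CPU": keypair generated at fabrication time; the private half never leaves the chip.
    cpu_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    cpu_public = cpu_private.public_key()

    # Vendor: encrypt the program under a fresh session key, then wrap that
    # key with this particular CPU's public key during activation.
    session_key = Fernet.generate_key()
    encrypted_program = Fernet(session_key).encrypt(b"print('licensed program running')")
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = cpu_public.encrypt(session_key, oaep)

    # "CPU" at launch time: unwrap the session key and decrypt the code internally.
    recovered_key = cpu_private.decrypt(wrapped_key, oaep)
    exec(Fernet(recovered_key).decrypt(encrypted_program))
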

    • by 3waygeek ( 58990 )
      Each CPU would have a public/private keypair with the private key sealed up forever in the chip and the public key readily available.

      This sort of exists now -- Intel Pentium III [intel.com] CPUs have a 96-bit serial number that could be used as a public key in the way you describe. However, many BIOSes allow you to disable the CPU serial number, so a post-SSSCA fix could be as simple as a new BIOS without this feature.
      • This sort of exists now -- Intel Pentium III [intel.com] CPUs have a 96-bit serial number that could be used as a public key in the way you describe.

        Not really because the encryption/decryption would not be taking place wholly within the chip, but rather would be done in software making it totally insecure against a hostile user. In my scheme, the public-key and symmetric encryption would be completely contained within the chip, and the fixed private key and software session keys would never exist outside the chip.

        What the Pentium III showed is that it is economically feasible to mass produce chips with unique numbers inside them. This would mean a unique keypair for each chip would be feasible.