Security

Predicting User Behavior to Improve Security

CitizenC writes "New computer-monitoring software designed to second-guess the intentions of individual system users could be close to perfect at preventing security breaches, say researchers. Read more." The paper (pdf) is online as well.
  • hmmm... (Score:4, Insightful)

    by Britissippi ( 565742 ) on Friday October 11, 2002 @02:37PM (#4434288) Journal
    Sounds great in theory, however, what happens when users change roles, get promoted, demoted..... and what they have to do with their terminal changes as a result. You'd have to have a staff working full time at any average sized company making the system changes to keep this thing from triggering constant alerts.

    Does sound promising though.

    • Re:hmmm... (Score:4, Funny)

      by Angry White Guy ( 521337 ) <CaptainBurly[AT]goodbadmovies.com> on Friday October 11, 2002 @02:38PM (#4434300)
      Hell, even without promotions, added staff, etc. Everyone in my office acts irrationally enough to screw the system up completely in an hour or so.
      They can't fire us all.
      • Re:hmmm... (Score:4, Insightful)

        by bmwm3nut ( 556681 ) on Friday October 11, 2002 @02:57PM (#4434427)
        i don't think they mentioned the method in the article. but i can imagine using something like a neural network to learn the users' behaviors. from my limited work with neural networks, i've discovered that they're really robust once they learn a problem. it's totally conceivable that a neural net could learn irrational behavior too.

        promotions wouldn't be a problem either. you give the network a parameter for the type of job a user is supposed to be doing. when they get a promotion that job type will change. their new behavior won't have to be marked as bad while the system learns the new pattern.

        of course everything i said is under the assumption that they'll be using neural networks.
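        a minimal sketch of the shape i mean (nothing in the article says they do this; i'm standing in a simple bigram model for the neural net, and the training data below is made up):

          # toy anomaly detector over command streams. a bigram model
          # standing in for a real neural net, just to show the idea.
          # the training history and the flagging rule are invented.
          from collections import defaultdict

          class BehaviourModel:
              def __init__(self):
                  self.counts = defaultdict(lambda: defaultdict(int))

              def train(self, history):
                  # history: commands the user actually ran, in order
                  for prev, cur in zip(history, history[1:]):
                      self.counts[prev][cur] += 1

              def surprise(self, prev, cur):
                  # rare transitions = surprising = possible anomaly
                  total = sum(self.counts[prev].values())
                  if total == 0:
                      return 1.0  # never saw this context at all
                  return 1.0 - self.counts[prev][cur] / total

          model = BehaviourModel()
          model.train(["ls", "cd", "ls", "vi", "make", "ls", "cd", "vi", "make"])
          print(model.surprise("ls", "cd"))  # familiar transition: low surprise
          print(model.surprise("ls", "rm"))  # never seen: 1.0, raise a flag

        retraining on the new job's history is how you'd absorb a promotion.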
        • Re:hmmm... (Score:1, Offtopic)

          by alcmena ( 312085 )
          Seriously now, learn the concept of capital letters. I really don't understand how you can use correct punctuation and not use a single capital letter.
    • Re:hmmm... (Score:3, Insightful)

      by CoffeeDad ( 317394 )
      I'd guess that clearing out the learned habits of any given user, say for example when roles or responsibilities change, would be a rather routine and trivial administration task? Not unlike resetting a password or adding someone to a print queue that's not so far down the hall...

      - Just my $0.02
    • Re:hmmm... (Score:2, Interesting)

      by jimson ( 516491 )
      what happens when users change roles, get promoted, demoted......

      Heck, I end up using a variety of computers throughout the day as problems pop up. This would trigger an alert every time I brought up an ssh window on an average user's computer to kill a runaway process, etc. Full-time staff is right; either that, or every computer I touched would end up with such a wide "border" of allowed actions that it would defeat the purpose of the system.
    • I could see this working statistically, like Paul Graham's spam-killing approach that was up on Slashdot a few weeks ago (here [slashdot.org] and here ). Each user is gradually tracked over time, and their activities are compared to their past activities. Sudden changes would be judged to be intrusions, while very gradual changes would not be.

      You could then have a "user reset" button to set them back to "zero" when they changed positions, or with a really good way to statistically describe their actions, set them back to the average value for the other users in the same position.
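      A crude sketch of what I have in mind (the decay rate and alarm cutoff are invented, and none of this is from the paper):

        # Per-user command-frequency profile with exponential decay:
        # gradual drift gets absorbed, sudden changes stand out.
        DECAY = 0.99   # how quickly old habits fade (invented)
        ALARM = 0.05   # flag commands the profile finds this unlikely

        class Profile:
            def __init__(self):
                self.freq = {}

            def observe(self, cmd):
                total = sum(self.freq.values())
                belief = self.freq.get(cmd, 0.0) / total if total else 0.0
                # decay old evidence, then credit the new observation
                for k in self.freq:
                    self.freq[k] *= DECAY
                self.freq[cmd] = self.freq.get(cmd, 0.0) + (1 - DECAY)
                return belief

            def reset(self):
                # the "user reset" button for role changes
                self.freq.clear()

        user = Profile()
        for cmd in ["ls", "mail", "vi", "make"] * 50:
            user.observe(cmd)
        if user.observe("nmap") < ALARM:
            print("sudden new behaviour, flag for review")
        user.reset()  # e.g. after a promotion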

      • The problem with that theory is that sudden changes in activity are NORMAL. Let's say I'm a programmer who never worked on any network code, at least not for the current employer. Then I get an assignment to make a module for some server product. Suddenly BAM I'm making all sorts of new network connections I wasn't before.
    • Re:hmmm... (Score:4, Interesting)

      by dubious9 ( 580994 ) on Friday October 11, 2002 @03:27PM (#4434517) Journal
      My guess is that it will take a statistical look at commands, a la Bayesian Spam Plan [slashdot.org]

      After all, probing ports looks different than fixing network problems, package management/installation looks different than maliciously deleting files, and trying to find memory leaks looks different than trying to access another process's memory space. They all use similar commands/system resources, but it should be possible, by looking at a few tens of instructions, to tell whether a user is trying to be malicious or not.

      These may not be the best examples, but the general idea is that it should be possible to determine a user's intent, because the probability of a sequence of commands having both a normal and a malicious interpretation should drop quickly the more instructions the user executes.

      Even false positives should be useful to admins, by flagging inadvertent mistakes (e.g. accidentally typing rm -rf *) as well.
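      Something like this, maybe (a naive-Bayes toy; the counts are fabricated and I have no idea whether the paper's method looks anything like it):

        # Toy naive Bayes over commands: ask whether "normal" or
        # "malicious" behaviour better explains a window of recent
        # commands. All training counts below are made up.
        import math

        normal    = {"ls": 400, "vi": 200, "make": 150, "rm": 30, "nmap": 1}
        malicious = {"ls": 50,  "vi": 5,   "make": 2,   "rm": 90, "nmap": 80}

        def loglike(window, counts):
            total = sum(counts.values())
            # add-one smoothing so unseen commands don't zero it out
            return sum(math.log((counts.get(c, 0) + 1) / (total + len(counts)))
                       for c in window)

        window = ["nmap", "nmap", "rm", "rm", "ls"]
        if loglike(window, malicious) > loglike(window, normal):
            print("window better explained as malicious, flag it")

      The more commands in the window, the further the two likelihoods drift apart, which is the point.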
      • Re:hmmm... (Score:5, Interesting)

        by DunbarTheInept ( 764 ) on Friday October 11, 2002 @03:39PM (#4434574) Homepage
        I have my doubts:

        for example: which is the malicious activity?
        User A types: rm -rf *
        User B types: rm -rf *

        (User A was in the root dir at the time. User B was in a subdirectory of his home directory at the time.)

        Okay, that's easy- just remember to track the context of where the user currently is. But then what about this?

        User A types: rm -rf /shared_network_drive
        User B types: rm -rf /shared_network_drive

        The difference is that User A was trying to delete everyone's stuff, while User B, knowing how the permissions on the files work, was just trying to find a lazy way to delete those files that he has permissions on because he was trying to clear his own junk out of the /shared_network_drive. He was being sloppy, but not malicious.

        How does the software know the difference?

        • User A types: rm -rf /shared_network_drive
          User B types: rm -rf /shared_network_drive

          The difference is that User A was trying to delete everyone's stuff, while User B, knowing how the permissions on the files work, was just trying to find a lazy way to delete those files that he has permissions on because he was trying to clear his own junk out of the /shared_network_drive. He was being sloppy, but not malicious.

          How does the software know the difference?


          How do you know the difference? Nothing differs between the users issuing these commands other than their intent. This is not something a human sysadmin could know either. Given that there is no system in the world, including a human element or not, that could say who had what in mind in the scenario you describe, you are unfair in requiring this of the system in question.

          So, it isn't perfect. But did you really expect it to be? Any system will necessarily have to provide a number of false positives (such as the one you described). This does not imply that it couldn't work very well overall.

          Also, it could be argued that a warning really should go off even if the user had no malicious intent, as using rm -rf on other people's files out of pure laziness is not something that should be encouraged anyhow.
          • How does a human tell the difference? By going up to the person and talking, which, incidentally, if such a tool isn't involved, is the only way the human would be alerted to the use of the command in the first place.

            Granted, the purpose of this tool is merely to let a human know that he should pay attention to the activity in question, but I don't have confidence in the capacity of corporate IT departments to apply restraint when using such tools, sorry. When a tool is written with manual intervention in mind, eventually that will be forgotten in many large IT departments. The tool will become automated to the point where it no longer has a human hand on the brakes to keep it under wraps.

            One thing I do agree with, though, is that my example was not a good one. rm -rf on the shared drive isn't a good idea because Unix doesn't differentiate between "permission to write" and "permission to delete", and so you'd end up deleting files that people left open for collaborative purposes.

            • So your complaint does not really concern the system, but rather the potential users of it. It is not at all clear that this is what you meant by your original post, if it indeed was.

              Nice to know that somebody actually read replies to their comments even if the story is more than 15 minutes old though.

              • It is not at all clear that this is what you meant by your original post, if it indeed was.

                Well, I did come across as arguing against the tool when it's really the misuse of it that frightens me, yes. I'm a bit trigger-happy on that subject because I used to work for a company that used dumb metrics.
  • by McFly69 ( 603543 ) on Friday October 11, 2002 @02:37PM (#4434291) Homepage
    -Note to Self-

    Keep doors locked at my house to prevent other people from coming in.
  • what if (Score:4, Funny)

    by Diclophis ( 203740 ) on Friday October 11, 2002 @02:38PM (#4434295) Homepage
    the first action you take breaches security?
    • Then it wouldn't be an anomaly for you to breach security thereafter. Your second attempt at breaching security would be classified as being "normal"! (I mean hackers will be hackers, hackers will hack, all perfectly normal!!)
    • >what if the first action you take breaches security?

      Then you'd better be damned good enough to keep it up without being caught.
  • Arms Race (Score:5, Interesting)

    by queh ( 538682 ) on Friday October 11, 2002 @02:38PM (#4434298)
    Surely this will just prompt crackers to hide their actions inside commands that look like how the system is normally used?
  • Well, um (Score:5, Insightful)

    by Roadmaster ( 96317 ) on Friday October 11, 2002 @02:39PM (#4434307) Homepage Journal
    if they had any clue about real-world users, they'd know users are absolutely unpredictable. A user's creativity in messing things up never ceases to amaze.
  • Might be something to install to prevent me from reading /. all day long.

    Oh wait. That wouldn't be unusual. DAMMIT!

  • Gee, (Score:3, Insightful)

    by He Was Gamecubed ( 615114 ) <q_is_king@GIRAFF ... minus herbivore> on Friday October 11, 2002 @02:40PM (#4434312)
    This would work fine with Windows, you know. Those 'illegal operations' have a really obvious prompt; it's easy to tell when someone is up to something.
  • aliasing (Score:5, Interesting)

    by Brandon T. ( 167891 ) on Friday October 11, 2002 @02:41PM (#4434322) Homepage
    Wouldn't it be relatively easy to get around this by aliasing shell scripts to frequently used commands? Sure, the admin might be able to find the shell scripts lying around, but if an intruder was trying to do a one-off attack, it might be viable.

    Brandon
    • Re:aliasing (Score:5, Informative)

      by halftrack ( 454203 ) <{jonkje} {at} {gmail.com}> on Friday October 11, 2002 @02:50PM (#4434380) Homepage
      I don't think such a scam is viable. The shell scripts would call commands that get registered by the system, and a plain alias only affects the user; the system still sees the original command.
      • Re:aliasing (Score:5, Interesting)

        by DunbarTheInept ( 764 ) on Friday October 11, 2002 @03:46PM (#4434611) Homepage
        But what about making new programs to imitate existing ones, but just in a way that isn't noticed by the snooper? (for example: myFuzzySlipperProgram could be a renamed "rm" program compiled from source.)

        Or, just do your malicious cracking using system calls from your own C programs. Don't use the rm command in a script, use a program that calls unlink().

        To even have a chance of being effective, the system would have to be watching not the commands you type, but the system calls you make. (In unix terms, any time you do something using one of the functions on man page 2, the system library would have to log that.)
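        For instance, something shaped like this (the log format is hypothetical; a real monitor would need the kernel to produce such a feed):

          # Hypothetical watcher over a kernel-produced syscall log.
          # The "user syscall path" line format is invented; the point
          # is that the monitor keys on unlink()/execve() events, not
          # on whatever name the user's binary happens to have.
          from collections import Counter

          WATCHED = {"unlink", "execve"}

          def scan(lines, burst=100):
              per_user, flagged = Counter(), set()
              for line in lines:
                  user, syscall, path = line.split()[:3]
                  if syscall in WATCHED:
                      per_user[user] += 1
                      if per_user[user] > burst and user not in flagged:
                          flagged.add(user)
                          print("possible mass deletion by", user)

          scan(["bob unlink /shared/file%d" % i for i in range(200)])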

  • by monadicIO ( 602882 ) on Friday October 11, 2002 @02:42PM (#4434326)
    How is the system used by credit card and phone companies different than the one proposed by this paper?
  • Stifle creativity (Score:5, Insightful)

    by nut ( 19435 ) on Friday October 11, 2002 @02:42PM (#4434328)
    This would encourage users not to experiment and find new ways of doing tasks, if every time you tried something new a sysadmin came round to ask what you were doing.
    • It's already being done. Ever seen Mr. Paperclip in M$ Office?
    • Re:Stifle creativity (Score:4, Interesting)

      by Damion ( 13279 ) on Friday October 11, 2002 @02:51PM (#4434389) Journal
      Keep in mind that the sysadmin can see quite well what the user is doing. The point of this is just to raise a flag if someone does something outside of their daily pattern, not to mark them for inquisition.
      All the sysadmin has to do is look at the log and say, "Ah, he's just trying to figure out how to filter his email" and dismiss it, whereas trying to get acquainted with an unfamiliar system and all of its configuration files would be extremely obvious.
  • Not bad but... (Score:5, Interesting)

    by aridhol ( 112307 ) <ka_lac@hotmail.com> on Friday October 11, 2002 @02:43PM (#4434333) Homepage Journal
    At first glance, this looks like something that may be useful. However, what happens if a user knows about the system and its patterns, and plans out the attack over a large period of time?

    The user could "poison" the information by slowly changing his working habits. If done properly, the AI would probably think this was no different than the user just learning to do things in a different way. When the habits are close enough to the infringing behaviour, the user can probably do anything without setting off alarms.

    In addition, if this is the only line of security, the user can then gradually return his patterns to normal. The logs from this system won't show anything. The PHBs may well decide that, when using something as smart as this, traditional logs won't be needed.

  • by pcraven ( 191172 ) <paul&cravenfamily,com> on Friday October 11, 2002 @02:43PM (#4434335) Homepage
    See CylantSecure [cylant.com]. Run your apps for a while and have it learn your apps' typical behavior. Then, when something unusual happens, it kills off the process. Interesting concept.
      That sounds like a user support nightmare. Software would keep dying whenever a user did something with an app for the first time. For your idea to work, during the runs where the app is learning "typical behaviour" it would have to execute every possible line of code in the program: every IF body, every ELSE body, every SWITCH case, every subroutine, etc. No way is that going to work. (For example, the user runs Netscape over and over as a browser. Then for the first time he runs its e-mail client. The security system sees all sorts of new system calls not normally associated with that app, and so it kills it.)
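      To make the failure mode concrete, here's a toy version of that "learn, then kill" policy (the profile is invented, and I don't claim this is how CylantSecure actually works):

        # Learned profile = set of syscalls seen in training; anything
        # outside it kills the process. Shows why incomplete training
        # causes false kills. Profile contents are made up.
        browser_profile = {"open", "read", "write", "socket", "connect"}

        def check(app, profile, syscall):
            if syscall not in profile:
                print("kill %s: unexpected syscall %s" % (app, syscall))

        # weeks of browsing: all fine...
        for s in ["open", "socket", "read"]:
            check("netscape", browser_profile, s)
        # ...then the user opens the mail client for the first time
        check("netscape", browser_profile, "sendmsg")  # killed, falsely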

      Your idea would have been better without that "kills the process" part. A system that's wrong 6 percent of the time shouldn't be taking that kind of drastic measure. It should be used to alert a human being and nothing more.

  • Minority Report? (Score:5, Insightful)

    by zoward ( 188110 ) <email.me.at.zoward.at.gmail.com> on Friday October 11, 2002 @02:44PM (#4434339) Homepage
    And how long will it be before users start losing privileges for things that they "potentially might do" (with 94 per cent accuracy)? About one in 17 of us is really going to suffer for this one.
  • Isn't this just setting user permissions?

    Bob from Accounting gets to look in the 2001 Sales figures, but Ted from Janitorial Services does not.

    Names and passwords, logs and a good sysadmin sounds like it would do just fine.

  • Just make a script that periodically calls every command in /bin, /sbin, and /usr/bin. This could be done from a chroot jail so nothing actually got deleted. The detection program wouldn't know what was out of the ordinary if every command was part of what a user normally calls.
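    Roughly like this (a sketch only; blindly executing everything in PATH is a terrible idea outside a throwaway jail, and whether it would actually fool a statistical monitor is pure conjecture):

      # Sketch of the "poisoning" idea: touch every binary so nothing
      # looks unusual later. Run, if at all, only inside a jail.
      import os, subprocess

      for d in ("/bin", "/sbin", "/usr/bin"):
          for name in os.listdir(d):
              prog = os.path.join(d, name)
              if os.access(prog, os.X_OK):
                  # invoke harmlessly and discard the output
                  subprocess.call([prog, "--version"],
                                  stdout=subprocess.DEVNULL,
                                  stderr=subprocess.DEVNULL)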
    • I'd imagine the detection program would have some sort of default 'normal behaviour' parameters, and executing everything in /usr/bin would not be part of them. Another thought I had, what about programs like emacs that can give you a shell from within the program? Would that even show up as normal commands entered on a terminal?

      Brandon
    • The act of spawning random processes would probably be flagged as suspicious, since real non-malicious users don't run commands at random.

      There are patterns to what is normal and what is malicious, in the same way that normal mail has different patterns than spam. Statistical evaluation should yield high success rates.

      I posted a similar comment earlier.
  • ... CowboyNeal would get himself kicked.
  • > Tests simulating inside attacks indicate that the new software would be up to 94 per cent reliable once implemented.


    That's perfect security? 94%? Heck, I balk if my uptime isn't at least 99.999%. And security must be better than uptime.
  • by Damion ( 13279 ) on Friday October 11, 2002 @02:47PM (#4434361) Journal
    There are/were some people working on something like this here at CMU. They had posted up a bunch of the raw data they had collected (basically just shell histories, with each command assigned a number, and then plotted as command index (for instance, the 40th command the user entered) against the number value of the command). The results were extremely regular, and in many cases downright periodic. People are far more predictable than they would like to think.
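    You can reproduce the flavour of those plots in a few lines (the sample history here is fabricated; the real data came from actual shell logs):

      # Encode a shell history as integers and check its regularity.
      history = ["mail", "ls", "cd", "make", "vi", "make"] * 20
      ids = {}
      series = [ids.setdefault(c, len(ids)) for c in history]

      # crude periodicity check: how often the series repeats at lag k
      def match_rate(series, k):
          pairs = list(zip(series, series[k:]))
          return sum(a == b for a, b in pairs) / len(pairs)

      print(match_rate(series, 6))  # 1.0 -- perfectly periodic
      print(match_rate(series, 4))  # much lower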
  • Gödel's incompleteness theorem (not Heisenberg's uncertainty principle) states that there will always be true statements within a system that cannot be proved within that system. Thus, there will always be "true" security breaches, because it is not possible to predict in advance what form they may take.

    Although we can be certain that they will exist, they may be so insignificant that we can never detect them.

    ------------------
    Wish I was a Physics Genius
  • It can learn (Score:2, Interesting)

    by Subcarrier ( 262294 )
    Chinchani says the new system would continually adjust its view of normal and abnormal behaviour.

    But can it learn to think like a crook?
  • by grub ( 11606 ) <slashdot@grub.net> on Friday October 11, 2002 @02:48PM (#4434366) Homepage Journal

    ..are what we need. If someone could come up with a box that could filter pages based on the amount of pink within the images I could delete 80% of my outgoing firewall rules at work!
    • They exist, but they don't work too well. They either have too many false positives or miss too many real porn images.
    • by Anonymous Coward
      They use something like this where I work. They have a script that filters all images over XX bytes into a program that then scans for flesh tones within the images. Possible offending images are then forwarded on to an admin, who checks the image out and with a few clicks can either add the site the picture came from to the block list, send a warning letter to the logged-in user, or both. It does the same thing for image attachments on email.

      God I would love to have that guys job!
      • They have a script that filters all images over XX bytes into a program that then scans for flesh tones within the images.

        Jeez. Why not just hire employees you can trust? Or was this instituted as part of a court-mandated consent agreement?
    • Then, they would start to apply color filters to the pictures...
      • That wouldn't work, unless they asked the webmasters of porn sites to apply colour filters (and if they did that, they could just as well ask to have the images tarred, feathered and disguised as a random file type the netadmin has never heard of - preferably one that doesn't exist). The pictures a work-place pornsurfer gets have already gone through the firewall by the time they're on his computer, and colour filters don't help at that point.
    • If someone could come up with a box that could filter pages based on the amount of pink within the images I could delete 80% of my outgoing firewall rules at work!

      I assume the other 20% of rules cover the interracial and black pr0n sites?
  • by Anonymous Coward on Friday October 11, 2002 @02:50PM (#4434381)
    computer-monitoring software designed to second-guess the intentions of individual system users could be close to perfect at preventing security breaches

    I don't think so... MS software constantly second-guesses users, and decides things for them, and it's pretty much as far from 'perfect' at preventing security breaches as you can get!

    These guys have never used MS Word, have they?

    From Clippy to the damn 'auto-correct' that always decides to turn "MHz" into "Mhz", all they need to do is install MS Office and see how wrong this idea is.
  • This should only be used to bolster existing security systems. Perhaps it could be used to correlate data gleaned from an IDS (Intrusion Detection System) to reduce the excessive noise that they usually generate.

    A company would be foolish to put *any* single system like this as their only line of defense no matter what % success rate it has. Such systems are brittle and "when they fail, they fail badly."

  • by He Was Gamecubed ( 615114 ) <q_is_king@GIRAFF ... minus herbivore> on Friday October 11, 2002 @02:52PM (#4434393)
    "Bruce Schneier, head of US computer security firm Counterpane, says the research is interesting but warns that a 94 percent success rate would be useless at maintaining good security on its own." Well.. 94% x 100 users on the network (.94 ^ 100) = %0.2 chance of detecting all suspicious behavior. Nice odds, i wouldn't depend on it to protect my network, though.
    • Bruce Schneier (Score:5, Interesting)

      by elb ( 49623 ) on Friday October 11, 2002 @03:27PM (#4434516)
      ...was recently featured in this article [theatlantic.com] about US security policy, primarily on the dangers of relying too much on technology. The article is great -- not super-techy, but a fine explanation of technology and security policy; it makes an intimidating topic accessible to the intelligent non-techie. A couple of good points from the article:
      • "[the leading / best face recognition] software has a success rate of 99.32 percent--that is, when the software matches a passenger's face with a face on a list of terrorists, it is mistaken only 0.68 percent of the time. Assume for the moment that this claim is credible; assume, too, that good pictures of suspected terrorists are readily available. About 25 million passengers used Boston's Logan Airport in 2001. Had face-recognition software been used on 25 million faces, it would have wrongly picked out just 0.68 percent of them--but that would have been enough, given the large number of passengers, to flag as many as 170,000 innocent people as terrorists. With almost 500 false alarms a day, the face-recognition system would quickly become something to ignore."
      • "The most important element of any security measure, Schneier argues, is people, not technology--and the people need to be at the scene. Recall the German journalists who fooled the fingerprint readers and iris scanners. None of their tricks would have worked if a reasonably attentive guard had been watching. Conversely, legitimate employees with bandaged fingers or scratched corneas will never make it through security unless a guard at the scene is authorized to overrule the machinery. "
    • Actually, that would be the chance of catching every one of 100 attacks. If you've got 100 people chipping away at your network then yes, you might miss six of 'em...then again, if you've got 100 crackers on your network, that's probably not all you've missed ;)

  • Obligatory (Score:5, Funny)

    by scott1853 ( 194884 ) on Friday October 11, 2002 @02:57PM (#4434426)
    Clippy: It appears as though you are trying to hack into an IIS box.

    Would you like to start the IIS hacking wizard?

    Would you like to view a list of the top 1,000 exploits?

    Never show this prompt again, it's already too easy to hack IIS.

  • by Ektanoor ( 9949 ) on Friday October 11, 2002 @03:03PM (#4434445) Journal
    I took a brief look at the paper and, sincerely, the idea is not bad at all. However, that 94% is pure hype.

    The biggest problem in computer security, where users are concerned, is not anomalies but everyday practice. Remember that experts say 90% of breaches are due to insiders, not outsiders. And why? Because 99% of those insiders don't care a nail for security. Most of them keep using the wife's name as a password and sharing C: with everyone. And no matter the effort, policies, orders and instructions keep gathering dust. If you try to enforce them, you get a crowd in front of the boss with a rope for your neck. And even if the boss comes out to defend your work, everyone starts to undermine your job. All they want is Internet access and passing documents around, hoping that you finally get out and Microsoft comes in to solve all the problems. That's what the lamers think about security. And in this mess, no matter what an expert you are, no matter the tools you have, no matter the hours you lose on the net, you still get trouble every week.

    Besides, I've noticed that if someone is going for a break-in, he will mostly work up to it from the start. It begins with the guy "playing" with his computer, then it moves up to the net. Later he thinks he's smart enough to break the server and show that the security admin is a LaMeR. And it ends with you looking at his desktop and writing the final document to fire him or put him in court. You may ask how this guy could get so far. Because he's smart; no matter the lamerness, he is good at something. So the boss will think twice before firing him. If you are in a corporation, the boss will hang you up with this "irreplaceable" expert because in the city where he lives there's no one else to do his job. Besides, the corporation lost too much money training him and doesn't want to start from zero. So you continue to see the bastard for a few months more before you catch him red-handed.

    I have seen this, and I know it is a problem in many companies and state institutions around the world. So how will this system help you in such cases? It will, but with a large margin of error, as the main anomaly, the user, is there from the very start.
    • It seems like that type of issue is what the detection system is good at preventing/detecting. Of course, I agree once you catch the guy he may not be fireable, but at least with something like this in place he will most likely be caught earlier, preventing damage.

      Of course, I'm an optimist.
  • by Anonymous Coward
    ...who learn by breaking things repeatedly, and on purpose.
  • Just like Clippy used to help me write letters.

    "I looks like you're trying to break into the system. Would you like some help?"
  • Security is a good thing, but this is only useful for corporations.
    Somebody has to predict Joe Average's behavior and set up a profile. The computer can't do that automatically, because we have no good mind-reading systems.
    Joe Average is not smart (and that's an understatement). He can't set up such a profile for himself. Therefore, this method is useless for the ignorant masses.
  • "I see you're trying to write an email to somebody! Would you like me to encrypt it with an approved DRM key so that nobody but you can read it?"
  • Every time I fricken install Nero, Roxio pops up and says that's bad, and my PC craps out.
  • Just kick anyone off the network who doesn't spend 80% of their time downloading pr0n.
  • Nice in theory but (Score:3, Insightful)

    by JeanBaptiste ( 537955 ) on Friday October 11, 2002 @03:25PM (#4434507)
    The users I manage are completely unpredictable. Not to sound like a Luddite, but there is no technology that will ever predict what my users do. If there is a way to do it, it will be done. Millions of monkeys with millions of typewriters, and that is a great analogy for what I have seen...
  • by jabber01 ( 225154 ) on Friday October 11, 2002 @03:25PM (#4434510)
    Sounds like profiling terrorists. It'll work great, and everyone will feel secure and all, until someone flies a plane into their "secure" server.
  • by thatguywhoiam ( 524290 ) on Friday October 11, 2002 @03:28PM (#4434522)
    I would be interested to know just what happens when a user is merely aware that this system is running.

    The described system seems to base its rules on learned user habits; obviously, this strikes one as incredibly fallible. Assuming their 94% figure is correct for the sake of argument, how do you think *your* behaviour would change, knowing full well that you are being watched?

    There are laws in certain places saying that a user (in a corporate environment) must be notified that they are being monitored at that very moment. Some software places a pair of eyeballs - how creepy is that - in the toolbar when this occurs.

    If the thing's purpose is to sniff out 'suspicious' behaviour, I can't see how it could work properly. I mean, how can it sniff out 'motive'?

  • Hm (Score:4, Funny)

    by E_elven ( 600520 ) on Friday October 11, 2002 @03:29PM (#4434527) Journal
    $ r00t machine
    Ush: command not found: r00t

    *meanwhile, in the Secret Command Centre*
    #QUEER#COMMAND##INVESTIGATE##

    $ owNz0rz machine
    owNz0rz: unknown parameter machine

    *SCC*
    ###THERE#MIGHT#BE#SOMETHING#GOING#ON##

    $ owNz0rz r00t
    Ush: j00 owNz0r d4 r00t!

    *SCC*
    #####ALARM#ALARM#####

    $
    Ush: Someone trying to use 'alarm()', authorize? n0
    Ush: Killing alarming process.
    $ 1337
  • It's Thinking...
  • by dpbsmith ( 263124 ) on Friday October 11, 2002 @03:31PM (#4434535) Homepage
    Any time someone mentions a "success rate" without also mentioning the false positive rate, they're feeding you garbage.

    I'd be much more impressed by a claim of a 0.001% false alarm rate than I am by a 94% success rate.

    Yet, on a per-line basis, if you assume that a user averages, say, three typed lines per minute, that's 180 lines per hour = 360,000 lines per working year.

    A 0.001% false alarm rate means that an innocent worker is going to be interrupted THREE TIMES A YEAR by burly security people at the cube doorway shouting "Hands off that keyboard RIGHT NOW!"
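    Spelling the arithmetic out (the working hours are my assumption):

      # 3 lines/minute, 8-hour days, ~250 working days a year
      lines_per_year = 3 * 60 * 8 * 250   # = 360,000
      print(lines_per_year * 0.00001)     # 0.001% rate -> ~3.6 alarms a year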
    • Any time someone mentions a "success rate" without also mentioning the false positive rate, they're feeding you garbage.

      How about the other way round?

      I'd be much more impressed by a claim of a 0.001% false alarm rate than I am by a 94% success rate.

      Tsssk... I can get you 0% any time of the day.
  • so laik (Score:4, Funny)

    by digitalsushi ( 137809 ) <slashdot@digitalsushi.com> on Friday October 11, 2002 @03:35PM (#4434556) Journal
    it'd be like...

    while :;do for IP in `cat /var/log/httpd/access_log|awk '{print $2}'`; do /usr/sbin/iptables -t filter -A INPUT -p tcp -s $IP/32 -j DROP;done;done or something like that. Yeah. I got your AI right here. I can sell you a tarball with a digitally signed dignature- I'm quite digil when it comes to being a digilante.
  • Changing tactics (Score:2, Insightful)

    by Icefyre ( 615125 )
    Any serious hacker will do their homework beforehand. This just makes one more step in the process of mapping out a target. Once you understand how the software works I'm sure it wouldn't be hard to circumvent given the time and dedication, not to mention the fact that it could potentially *open* security holes for malicious users to exploit.
  • by phsolide ( 584661 ) on Friday October 11, 2002 @03:53PM (#4434637)

    I don't think the proposed system will work for everyone. I think most workers in development groups will end up getting spanked for what the system interprets as "misbehavior". A developer unit-testing pieces of an application may end up deleting large swaths of files to see how a routine responds to missing files. A developer may write a "dummy server" that just sends streams of random bytes to test how a client process responds to bad input data. Testers may have to reset dates on machines to verify leap-year compliance. Testers may make a bunch of files read-only to see how an app handles a log file that has bad permissions.

    These are all legit operations - I've done every single one as part of testing or unit-testing in the past. They're also all operations that might be part of a local or remote root exploit.

    The Management will have to turn off the profiling for certain users to avoid periodically getting swamped with false alarms or cutting off testing during the final phases of product development.

    I have to conclude that it's just more snake oil.

  • Pop up a dialog:

    "It looks like you are trying to hack into this system. Would you like me to start the Hacking Wizard for you?"
  • Do The Math (Score:3, Insightful)

    by Lucas Membrane ( 524640 ) on Friday October 11, 2002 @04:01PM (#4434689)
    They claim "up to 94 percent reliable". (You get those emails that say "earn up to $300/hour stuffing envelopes at home"?) "Up to" is a weasel phrase, just like "arguably", i.e. it ain't gonna happen.

    But even if it is 94%, on a system with around 100 users a 6% error rate works out to roughly a million mistakes per year. Where does the budget come from to track down a million false alarms in a timely fashion? How is any analyst going to seriously follow every machine-generated warning when 99.99% of the warnings are spurious?
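    For what it's worth, a number like that falls out of rates in this ballpark (all of them my own invention, not the article's):

      # 100 users, ~650 monitored actions per user per working day,
      # 250 days a year, 6% of actions misclassified
      print(100 * 650 * 250 * 0.06)  # ~975,000 -- about a million a year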

    Let us now return to reality, which is already in progress.

  • Clippy pops up and says "It appears you want to go to the bathroom. Would you like me to help you?"

    you kill clippy

    5 seconds later he re-appears "I know you need to go, I know better than you, you can't avoid this"

    or the inverse of this. You try to click on Outlook and the OS keeps moving your mouse away from the icon; then it stops, you go to try again, and it throws your pointer across the screen. You go for the start menu, and before you get there the taskbar hides.
  • the corollary to this is of course, my job:

    Predicting User Behavior to Avoid A Line Of Hopeless Sales Staff Around My Desk

    example (lesson) #1:
    'why does it say i don't have permission to install kazaa on this machine?'
    'delicate windows system message. very high level. just ignore it.'

  • There is a sweet spot between a free-for-all IT environment and network nazis. In the sweet spot, you have reasonable usage and security policies, backups, reimaging (when necessary), and best of all, something of a blind eye to the clueful.

    Unfortunately, there seems to be a ratchet effect; an inevitable ossification. There are always going to be "incidents" of lost files, viruses, etc. Let's overreact and put our users in straitjackets (but never, for example, replace Outlook with a sane mail client). Some idiot installed Kazaa, so let's make sure nobody installs vim or textpad. And the clueful people needed to run a reasonable network are too expensive; let's remotely install everything with some crap like Netware Application Launcher.

    And now this. We'll detect anyone trying to come up with a better way to do things, and harass them. Great. Meanwhile, anyone with ill intent can still do whatever they want - yeah, you can theoretically restrict a user from writing to his own hard drive or registry, but good luck. What was that about cheap, easy administration again?

  • Seriously... this technique won't protect against bad or poorly protected passwords, or other issues where the 'proper' behavior is inherently insecure.
    These systems will have to be tuned to such a fine line between false positives and false negatives that it is hard to see them as viable. A few false positives too many and the CEO will be sending memos.
  • Solution (Score:3, Funny)

    by Some Dumbass... ( 192298 ) on Friday October 11, 2002 @09:46PM (#4435488)
    New computer-monitoring software designed to second-guess the intentions of individual system users could be close to perfect at preventing security breaches

    Prediction: Users cause security breaches.

    Near-perfect solution: Eliminate all users.

    -- Skynet, 08-29-1997, 02:14 hours :)
  • I don't think this has a chance. If you look at software for intrusion detection, you'll see that researchers have put in many, MANY years of research trying to pick up 'strange' network traffic. But it just didn't work. They couldn't get it usable, no matter what smart technologies they used (neural nets, petri nets, complicated statistical methods).

    Then along comes a guy who just tries to pick up certain well-known strings from the network stream and voila: a sort of virus scanner for network traffic. Works like a charm, with low false positives. See here [snort.org], but there are others.
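    The contrast in approach fits in a few lines (toy signatures; Snort's real rule language is far richer):

      # Toy signature matcher: flag traffic containing known-bad strings.
      SIGNATURES = [b"/bin/sh", b"cmd.exe", b"../../"]

      def inspect(payload):
          return [sig for sig in SIGNATURES if sig in payload]

      hits = inspect(b"GET /scripts/..%c1%1c../winnt/system32/cmd.exe HTTP/1.0")
      if hits:
          print("alert:", hits)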
