Predicting User Behavior to Improve Security 133
CitizenC writes "New computer-monitoring software designed to second-guess the intentions of individual system users could be close to perfect at preventing security breaches, say researchers. Read more." The paper (pdf) is online as well.
hmmm... (Score:4, Insightful)
Does sound promising though.
Re:hmmm... (Score:4, Funny)
They can't fire us all.
Re:hmmm... (Score:4, Insightful)
Promotions wouldn't be a problem either: give the network a parameter for the type of job each user is supposed to be doing. When they get a promotion, that job type changes, and their new behavior won't be flagged as bad while the system learns it.
Of course, everything I said is under the assumption that they'll be using neural networks.
Re:hmmm... (Score:1, Offtopic)
Re:hmmm... (Score:3, Insightful)
- Just my $0.02
Re:hmmm... (Score:2, Interesting)
Heck, I end up using a variety of computers throughout the day as problems pop up. This would trigger an alert every time I brought up an ssh window on an average user's computer to kill a runaway process, etc. Full-time staff is right; either that, or every computer I touched would end up with quite a wide "border" of allowed actions, which would defeat the purpose of the system.
Re:hmmm... (Score:1)
I could see this working statistically, like Paul Graham's spam-killing approach that was up on Slashdot a few weeks ago (here [slashdot.org] and here). Each user is gradually tracked over time, and their activities are compared to their past activities. Sudden changes would be judged to be intrusions, while very gradual changes would not be.
You could then have a "user reset" button to set them back to "zero" when they changed positions, or with a really good way to statistically describe their actions, set them back to the average value for the other users in the same position.
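The per-user tracking and "reset button" described above could be sketched roughly like this (the class name, thresholds, and frequency cutoff are all invented for illustration):

```python
from collections import Counter

class UserProfile:
    """Toy per-user command profile: flags commands that are rare
    relative to the user's own history. Thresholds are arbitrary."""

    def __init__(self, min_history=50, rare_freq=0.01):
        self.counts = Counter()
        self.total = 0
        self.min_history = min_history
        self.rare_freq = rare_freq

    def observe(self, command):
        """Record one command the user actually ran."""
        self.counts[command] += 1
        self.total += 1

    def is_suspicious(self, command):
        """A command is suspicious if it's rare in this user's history."""
        if self.total < self.min_history:
            return False  # not enough data to judge yet
        return self.counts[command] / self.total < self.rare_freq

    def reset(self):
        """The 'user reset button': wipe history after a job change."""
        self.counts.clear()
        self.total = 0
```

A sudden `nmap` from a user who only ever types `ls` would be flagged, while the same `nmap` from someone who runs it daily would not; `reset()` handles the promotion case from the earlier comment.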
Re:hmmm... (Score:2)
Re:hmmm... (Score:4, Interesting)
After all, probing ports looks different from fixing network problems, package management/installation looks different from maliciously deleting files, and hunting for memory leaks looks different from trying to access another process's memory space. They all use similar commands and system resources, but it should be possible to tell, by looking at a few tens of instructions, whether a user is trying to be malicious or not.
These may not be the best examples, but the general idea is that it should be possible to determine a user's intent, because the probability of a sequence of commands having both a normal and a malicious interpretation should drop quickly the more instructions the user executes.
Even false positives should be useful to admins, by flagging inadvertent mistakes (e.g. accidentally typing rm -rf *) as well.
Re:hmmm... (Score:5, Interesting)
for example: which is the malicious activity?
User A types: rm -rf *
User B types: rm -rf *
(User A was in the root dir at the time. User B was in a subdirectory of his home directory at the time.)
Okay, that's easy- just remember to track the context of where the user currently is. But then what about this?
User A types: rm -rf
User B types: rm -rf
The difference is that User A was trying to delete everyone's stuff, while User B, knowing how the permissions on the files work, was just trying to find a lazy way to delete those files that he has permission on, because he was trying to clear his own junk out of the shared drive.
How does the software know the difference?
how do YOU know the difference? (Score:2)
User B types: rm -rf
The difference is that User A was trying to delete everyone's stuff, while User B, knowing how the permissions on the files work, was just trying to find a lazy way to delete those files that he has permission on, because he was trying to clear his own junk out of the shared drive.
How does the software know the difference?
How do you know the difference? Nothing differs between the users issuing these commands other than their intent. This is not something a human sysadmin could know either. Given that there is no system in the world, including a human element or not, that could say who had what in mind in the scenario you describe, you are unfair in requiring this of the system in question.
So, it isn't perfect. But did you really expect it to be? Any system will necessarily have to provide a number of false positives (such as the one you described). This does not imply that it couldn't work very well overall.
Also, it could be argued that a warning really should go off even if the user had no malicious intent, as using rm -rf on other people's files out of pure laziness is not something that should be encouraged anyhow.
Re:how do YOU know the difference? (Score:2)
Granted, the purpose of this tool is merely to let a human know that he should pay attention to the activity in question, but I don't have confidence in the capacity of corporate IT departments to apply restraint when using such tools, sorry. When a tool is written with manual intervention in mind, that intention eventually gets forgotten in many large IT departments. The tool gets automated to the point where there's no longer a human hand on the brakes to keep it under control.
One thing I do agree with, though, is that my example was not a good one. rm -rf on the shared drive isn't a good idea because Unix doesn't differentiate between "permission to write" and "permission to delete", and so you'd end up deleting files that people left open for collaborative purposes.
Re:how do YOU know the difference? (Score:2)
Nice to know that somebody actually read replies to their comments even if the story is more than 15 minutes old though.
Re:how do YOU know the difference? (Score:2)
Well, I did come across as arguing against the tool when it's really the misuse of it that frightens me, yes. I'm a bit trigger-happy on that subject because I used to work for a company that used dumb metrics.
Home Security (Score:5, Funny)
Keep doors locked at my house to prevent other people from coming in.
Re:Home Security (Score:1)
what if (Score:4, Funny)
Re:what if (Score:1)
Re:what if (Score:1)
Then you'd better be damned good enough to keep it up without being caught.
Arms Race (Score:5, Interesting)
Re:Arms Race (Score:1)
Re:Arms Race (Score:2)
Re:Arms Race (Score:1)
Re:Arms Race (Score:2)
Re:Arms Race (Score:1)
chown
Very dumb thing to do in retrospect... especially as root...
Well, um (Score:5, Insightful)
Re:Well, um (Score:5, Funny)
A user's creativeness to mess things up never ceases to amaze.
Or as one of the corollaries to Murphy's Law states: "No matter how idiot-proof you make something, an ingenious idiot in the field will find a workaround."
Re:Well, um (Score:1)
Remember that this is network security (Score:4, Insightful)
Sounds good for other people. (Score:4, Funny)
Oh wait. That wouldn't be unusual. DAMMIT!
Gee, (Score:3, Insightful)
aliasing (Score:5, Interesting)
Brandon
Re:aliasing (Score:5, Informative)
Re:aliasing (Score:5, Interesting)
Or, just do your malicious cracking using system calls from your own C programs. Don't use the rm command in a script; use a program that calls unlink().
To even have a chance of being effective, the system would have to be watching not the commands you type, but the system calls you make. (In Unix terms, any time you do something using one of the functions in manual section 2, the system library would have to log it.)
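The same point in a few lines of Python (a throwaway sketch): `os.unlink` invokes the unlink(2) system call directly, so nothing resembling `rm` ever reaches a shell-history or command-level monitor.

```python
import os
import tempfile

# Create a scratch file, then delete it via the unlink(2) system call.
# No 'rm' command ever appears for a command-level monitor to see;
# only syscall-level auditing would catch this.
fd, path = tempfile.mkstemp()
os.close(fd)
os.unlink(path)        # same effect as 'rm <path>'
assert not os.path.exists(path)
```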
Re:logging man 2 system calls? (Score:2)
Credit card / phone companies... (Score:4, Interesting)
Stifle creativity (Score:5, Insightful)
Re:Stifle creativity (Score:1)
Re:Stifle creativity (Score:4, Interesting)
All the sysadmin has to do is look at the log and say, "Ah, he's just trying to figure out how to filter his email" and dismiss it, whereas trying to get acquainted with an unfamiliar system and all of its configuration files would be extremely obvious.
Not bad but... (Score:5, Interesting)
The user could "poison" the information by slowly changing his working habits. If done properly, the AI would probably think this was no different than the user just learning to do things in a different way. When the habits are close enough to the infringing behaviour, the user can probably do anything without setting off alarms.
In addition, if this is the only line of security, the user can then gradually return his patterns to normal. The logs from this system won't show anything. The PHBs may well decide that, when using something as smart as this, traditional logs won't be needed.
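The poisoning attack described above can be modeled with a toy adaptive monitor. Everything here (the metric, the exponentially weighted baseline, every number) is invented to illustrate the idea, not taken from the paper:

```python
def reaches_target(start, target, step, alpha=0.05, threshold=3.0):
    """Toy adaptive monitor: keeps an exponentially weighted baseline
    of some activity metric and alarms when the current reading
    deviates from it by more than `threshold`. Returns True if the
    user reaches `target` without ever tripping the alarm."""
    baseline = reading = float(start)
    while reading < target:
        reading = min(reading + step, target)
        deviation = reading - baseline
        if deviation > threshold:
            return False                 # alarm fired
        baseline += alpha * deviation    # the monitor slowly adapts
    return True

reaches_target(10, 50, step=5.0)   # abrupt change: caught
reaches_target(10, 50, step=0.1)   # gradual drift: the baseline follows along
```

With small steps the deviation settles near `step / alpha`, which stays under the threshold, so the attacker drags the baseline wherever he wants; a sudden jump is caught immediately. That asymmetry is exactly the weakness the comment points out.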
Re:Not bad but... (Score:4, Informative)
Re:Not bad but... (Score:4, Insightful)
Better yet, do this at the kernel level (Score:3, Interesting)
Re:Better yet, do this at the kernel level (Score:2)
Your idea would have been better without that "kills the process" part. A system that's wrong 6 percent of the time shouldn't be taking that kind of drastic measure. It should be used to alert a human being and nothing more.
Minority Report? (Score:5, Insightful)
After reading the PDF intently (skimming) (Score:2)
Bob from Accounting gets to look in the 2001 Sales figures, but Ted from Janitorial Services does not.
Names and passwords, logs and a good sysadmin sounds like it would do just fine.
Re:After reading the PDF intently (skimming) (Score:4, Informative)
You could go even further and log a typing rate jump or dip of 30 WPM.
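That typing-rate check is trivial to sketch (the 30 WPM threshold comes from the comment above; the sampling scheme is assumed):

```python
def wpm_alerts(samples, max_jump=30):
    """Given consecutive typing-speed samples in WPM, return the
    indices where the speed jumped or dipped by more than max_jump."""
    return [i for i in range(1, len(samples))
            if abs(samples[i] - samples[i - 1]) > max_jump]

wpm_alerts([60, 62, 58, 95, 61])   # flags the jump to 95 and the dip back
```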
Easily circumvented (Score:1)
Re:Easily circumvented (Score:1)
Brandon
Re:Easily circumvented (Score:1)
Re:Easily circumvented (Score:1)
There are patterns to what is normal and what is malicious, in the same way that normal mail has different patterns than spam. Statistical evaluation should yield high success rates.
I posted a similar comment earlier.
If implemented on slashdot ... (Score:2)
perfect? (Score:1)
That's perfect security? 94%? Heck, I balk if my uptime isn't at least 99.999%. And security must be better than uptime.
Not as crazy as it sounds (Score:3, Informative)
destined to failure (Score:2)
Although we can be certain that they will exist, they may be so insignificant that we can never detect them.
------------------
Wish I was a Physics Genius
Re:destined to failure (Score:5, Funny)
Um, that's Gödel's Theorem.
> Wish I was a Physics Genius
I think that just about sums it up. ;-)
Re:destined to failure (Score:1)
The Heisenberg uncertainty principle states that there will always be true statements within a system that cannot be proved within that system.
Um, no. That would be Goedel's Incompleteness Theorem. [miskatonic.org] Not that it's any more applicable to the issue at hand.
It can learn (Score:2, Interesting)
But can it learn to think like a crook?
Intelligent pr0n filters.. (Score:4, Informative)
..are what we need. If someone could come up with a box that could filter pages based on the amount of pink within the images I could delete 80% of my outgoing firewall rules at work!
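For what it's worth, the joked-about filter is easy to caricature in code. The "pink-ish" channel test below is a crude made-up heuristic, nothing like a real skin-tone detector:

```python
def pink_ratio(pixels):
    """Fraction of (r, g, b) pixels that look 'pink-ish' under a
    crude, invented channel heuristic."""
    def pinkish(r, g, b):
        return r > 150 and g < r * 0.8 and b < r * 0.9 and b > g
    if not pixels:
        return 0.0
    return sum(1 for (r, g, b) in pixels if pinkish(r, g, b)) / len(pixels)

def looks_like_pr0n(pixels, threshold=0.4):
    """Flag an image if enough of it is 'pink' (threshold is arbitrary)."""
    return pink_ratio(pixels) > threshold
```

Of course, a filter like this would also block sunsets, flamingos, and the marketing department's slide decks, which is the usual fate of naive content filters.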
Re:Intelligent pr0n filters.. (Score:1)
Re:Intelligent pr0n filters.. (Score:3, Interesting)
God I would love to have that guys job!
Re:Intelligent pr0n filters.. (Score:2)
Jeez. Why not just hire employees you can trust? Or was this instituted as part of a court-mandated consent agreement?
Re:Intelligent pr0n filters.. (Score:2, Insightful)
Re:Intelligent pr0n filters.. (Score:1)
Re:Intelligent pr0n filters.. (Score:1)
I assume the other 20% of rules cover the interracial and black pr0n sites?
Re:Intelligent pr0n filters.. (Score:5, Funny)
I assume the other 20% of rules cover the interracial and black pr0n sites?
They're all pink on the inside.
These guys should learn from history... (Score:3, Interesting)
I don't think so... MS software constantly second-guesses users, and decides things for them, and it's pretty much as far from 'perfect' at preventing security breaches as you can get!
These guys have never used MS word have they?
From Clippy to the damn 'auto-correct' that always turns "MHz" into "Mhz", all they need to do is install MS Office and see how wrong this idea is.
Another link in the chain (Score:2, Interesting)
This should only be used to bolster existing security systems. Perhaps it could be used to correlate data gleaned from an IDS (Intrusion Detection System) to reduce the excessive noise that they usually generate.
A company would be foolish to put *any* single system like this as their only line of defense no matter what % success rate it has. Such systems are brittle and "when they fail, they fail badly."
94 percent success rate (Score:3, Interesting)
Bruce Schneier (Score:5, Interesting)
Not quite that bad... (Score:1)
Actually, that would be the chance of catching every one of 100 attacks. If you've got 100 people chipping away at your network then yes, you might miss six of 'em...then again, if you've got 100 crackers on your network, that's probably not all you've missed ;)
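The arithmetic behind that point, treating the 100 attacks as independent and the 94% figure as a per-attack catch rate (both assumptions, not claims from the paper):

```python
p_catch = 0.94

# Chance of catching every one of 100 independent attacks:
all_caught = p_catch ** 100
# Expected number of attacks missed out of 100:
expected_missed = 100 * (1 - p_catch)

print(f"P(all 100 caught) = {all_caught:.4f}")    # 0.0021
print(f"expected misses   = {expected_missed:.0f}")  # 6
```

So a "94% success rate" against a hundred attackers means you almost certainly let someone through, roughly six someones on average.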
Obligatory (Score:5, Funny)
Would you like to start the IIS hacking wizard?
Would you like to view a list of the top 1,000 exploits?
Never show this prompt again; it's already too easy to hack IIS.
Good idea but doesn't solve the old problem (Score:4, Interesting)
The biggest problem in computer security, where users are concerned, is not anomalies but usual practice. Remember that experts say 90% of breaches are due to insiders, not outsiders. And why? Because 99% of those insiders don't care one bit about security. Most of them keep using the wife's name for a password and sharing C: with everyone. And no matter the effort, the policies, orders, and instructions keep gathering dust. If you try to enforce them, you get a crowd in front of the boss with a rope for your neck. And even if the boss comes out to defend your work, everyone starts to undermine your job. All they want is Internet access and passing documents around, while hoping you finally leave and Microsoft comes in to solve all the problems. That's what the lamers think about security. And in this mess, no matter what an expert you are, no matter the tools you have, no matter the hours you lose on the net, you get trouble every week.
Besides, I've noticed that if someone is going for a break-in, he'll mostly build up to it from the start. It begins with the guy "playing" with his computer, then he moves on to the net. Later he thinks he's smart enough to break the server and show that the security admin is a LaMeR. And it ends with you looking at his desktop and writing the final document to fire him or take him to court. You may ask why this guy could get so far. Because he's smart; whatever his lamerness, he's good at something. So the boss will think twice before firing him. If you are in a corporation, the boss will hang you up with this "irreplaceable" expert because in the city where he lives there's no one else to do his job. Besides, the corporation has spent too much money training him and doesn't want to start from zero. So you keep seeing the bastard for a few months more before you catch him red-handed.
I've seen this, and I know it's a problem at many companies and state institutions around the world. So how will this system help you in such cases? It will, but with a large margin of error, since the main anomaly, the user, has been there from the very start.
Re:Good idea but doesn't solve the old problem (Score:1)
Of course, I'm an optimist.
it does not bode well for those of us... (Score:1, Insightful)
Yeah, sure... (Score:1)
"It looks like you're trying to break into the system. Would you like some help?"
Useless for Joe Average (Score:2)
Somebody has to predict Joe Average's behavior and set up a profile. The computer can't do that automatically because we have no good mind-reading systems.
Joe Average is not smart (and that's an understatement). He can't set up such a profile for himself. Therefore, this method is useless for the ignorant masses.
Next Gen Clippy (Score:2)
My PC has this feature (Score:1)
Real easy to make work at most companies... (Score:2)
Nice in theory but (Score:3, Insightful)
Profiling crackers? Brilliant! (Score:4, Funny)
Expected Behaviour vs. Modified Behaviour (Score:3, Insightful)
The described system seems to base its rules on learned user habits; obviously, this strikes one as incredibly fallible. Assuming their 94% figure is correct for the sake of argument, how do you think *your* behaviour would change knowing full well that you are being watched?
There are laws in certain places saying that a user (in a corporate environment) must be notified that they are being monitored at that very moment. Some software places a pair of eyeballs in the toolbar when this occurs (how creepy is that?).
If the thing's purpose is to sniff out 'suspicious' behaviour, I can't see how it could work properly. I mean, how can it sniff out 'motive'?
Hm (Score:4, Funny)
Ush: command not found: r00t
*meanwhile, in the Secret Command Centre*
#QUEER#COMMAND##INVESTIGATE##
$ owNz0rz machine
owNz0rz: unknown parameter machine
*SCC*
###THERE#MIGHT#BE#SOMETHING#GOING#ON##
$ owNz0rz r00t
Ush: j00 owNz0r d4 r00t!
*SCC*
#####ALARM#ALARM#####
$
Ush: Someone trying to use 'alarm()', authorize? n0
Ush: Killing alarming process.
$ 1337
Sounds like a Dreamcast (Score:1)
"Success" - "false positive" = garbage (Score:5, Insightful)
I'd be much more impressed by a claim of an 0.001% false alarm rate than I am by a 94% success rate.
Yet, on a per-line basis, if you assume that a user averages, say, three typed lines per minute, that's 180 lines per hour = 360000 lines per working year.
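The contrast between those two numbers, taking the poster's 3 lines/minute and a ~2000-hour working year, and assuming (as the poster implicitly does) that the unstated 6% failure rate applies per line:

```python
# 3 lines/min * 60 min/hr * ~2000 working hours/year:
lines_per_year = 3 * 60 * 2000
assert lines_per_year == 360_000

# False alarms per user per year at each rate:
at_6_percent = lines_per_year * 0.06        # 21,600 alarms
at_0_001_percent = lines_per_year * 0.00001 # about 3.6 alarms
```

Twenty-one thousand alarms per user per year is noise nobody will read; three or four is something an admin might actually investigate, which is why the false-alarm rate is the number that matters.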
Re:"Success" - "false positive" = garbage (Score:2)
How about the other way round?
I'd be much more impressed by a claim of an 0.001% false alarm rate than I am by a 94% success rate.
Tsssk... I can get you 0% any time of the day.
so laik (Score:4, Funny)
while
Re:so laik (Score:1)
Congratulations, you've won an award [helsinki.fi]!
Re:so laik (Score:1)
awk '{print $2}' /var/log/httpd/access_log
your kung fu is better than mine, the\ way,\ what're-senpai
Changing tactics (Score:2, Insightful)
Won't work for certain categories of users (Score:3, Interesting)
I don't think the proposed system will work for every one. I think that most workers in development groups will end up getting spanked for what the system interprets as "misbehavior". A developer unit-testing pieces of an application may end up deleting large swaths of files to see how a routine responds to missing files. A developer may write a "dummy server" that just sends streams of random bytes to test how a client process responds to bad input data. Testers may have to reset dates on machines to verify leap year compliance. Testers may make a bunch of files read-only to see how an app handles a log file that has bad permissions.
These are all legit operations - I've done every single one as part of testing or unit-testing in the past. They're also all operations that might be part of a local or remote root exploit.
The Management will have to turn off the profiling for certain users to avoid periodically getting swamped with false alarms or cutting off testing during the final phases of product development.
I have to conclude that it's just more snake oil
The Microsoft Version (Score:1, Funny)
"It looks like you are trying to hack into this system. Would you like me to start the Hacking Wizard for you?"
Do The Math (Score:3, Insightful)
But even if it is 94%, if you've got a system with around 100 users, then 94% equals approximately 1 million mistakes per year. Where does the budget come from to track down 1 million false alarms in a timely manner? How is any analyst going to seriously follow every machine-generated warning when 99.99% of the machine-generated warnings are spurious?
Let us now return to reality, which is already in progress.
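The poster's "approximately 1 million" is the right order of magnitude under back-of-envelope assumptions like these (all the volume numbers are mine, not the poster's):

```python
users = 100
actions_per_hour = 80     # assumed; the comment doesn't give a figure
hours_per_year = 2000     # assumed working year
false_rate = 0.06         # what a 94% success rate leaves over

false_alarms = users * actions_per_hour * hours_per_year * false_rate
print(false_alarms)       # roughly 960,000 -- about a million a year
```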
when it gets scary (Score:1)
you kill clippy
5 seconds later he re-appears "I know you need to go, I know better than you, you can't avoid this"
Or the inverse of this: you try to click on Outlook and the OS keeps moving your mouse away from the icon, then it stops; you go to try again, and it throws your pointer across the screen. You go for the Start menu, and before you get there, the taskbar hides.
so close yet so far (Score:2)
Predicting User Behavior to Avoid A Line Of Hopeless Sales Staff Around My Desk
example (lesson) #1:
'Why does it say I don't have permission to install Kazaa on this machine?'
'Delicate Windows system message. Very high level. Just ignore it.'
why? (Score:2)
There is a sweet spot, between a free for all IT environment and network nazis. In the sweet spot, you have reasonable usage and security policies, backups, reimaging (when necessary), and best of all, something of a blind eye to the clueful.
Unfortunately, there seems to be a ratchet effect; an inevitable ossification. There are always going to be "incidents" of lost files, viruses, etc. Let's overreact and put our users in straitjackets (but never, for example, replace Outlook with a sane mail client). Some idiot installed Kazaa, so let's make sure nobody installs vim or TextPad. And the clueful people needed to run a reasonable network are too expensive; let's remotely install everything with some crap like NetWare Application Launcher.
And now this. We'll detect anyone trying to come up with a better way to do things, and harass them. Great. Meanwhile, anyone with ill intent can still do whatever they want. Yeah, you can theoretically restrict a user from writing to his own hard drive or registry, but good luck. What was that about cheap, easy administration again?
I bet it's full of buffer overflows (Score:1)
These systems will have to be tuned to such a fine line between false positives and false negatives that it's hard to see them as viable. A few false positives too many and the CEO will be sending memos.
Solution (Score:3, Funny)
Prediction: Users cause security breaches.
Near-perfect solution: Eliminate all users.
-- Skynet, 09-29-1997, 02:14 hours
This has a slim chance (Score:1)
Then along comes a guy who just tries to pick up certain well-known strings from the network stream, and voila: a sort of virus scanner for network traffic. It works like a charm, with few false positives. See also here [snort.org], but there are others.
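That string-matching approach is simple enough to caricature in a few lines. This is a toy in the spirit of (but far dumber than) Snort; the signature list is invented:

```python
# Toy signature scanner: flag any payload containing a known-bad byte string.
SIGNATURES = {
    b"/bin/sh": "shellcode-ish string",
    b"cmd.exe": "Windows command shell",
    b"\x90" * 16: "NOP sled",
}

def scan(payload: bytes):
    """Return the names of all signatures found in one packet payload."""
    return [name for sig, name in SIGNATURES.items() if sig in payload]

scan(b"GET /index.html")          # → []
scan(b"\x90" * 32 + b"/bin/sh")   # → ['shellcode-ish string', 'NOP sled']
```

Unlike behavior profiling, this needs no training period and its false positives are predictable, which is a big part of why signature-based IDSes caught on first.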