Secure Interaction Design
Pingster writes "Next week, ICICS 2002 will take place in Singapore. Out of 40 papers at the conference, there will be just one paper that looks at human factors. Though many people know that usability problems can render even the strongest security useless, the security community has only recently started paying attention to usability issues. More serious thinking about usability and security is desperately needed. The paper proposes ten interaction design principles. Maybe you'll find them obvious; maybe you'll disagree with them entirely. Great! Let's have a discussion."
IMHO... (Score:3, Informative)
MOD PARENT UP (Score:2, Offtopic)
This isn't flamebait or a troll; he has good points and provided links to back up his claims. Instead of modding him down as flamebait, why didn't you post a reply saying why you disagree with him?
Re:IMHO... (Score:3, Insightful)
Poorly organized. Lynx-optimized website (with only two pages), only two months to write papers, an overly broad topic, and being held in a pseudo-third world country, away from the main countries where most research is being done, don't exactly add up to success. I'll be surprised if they register more than 500 attendees.
Singapore isn't exactly a small backwater. I have attended several conferences there, including Broadcast Asia [broadcast-asia.com], which draws something like 11,000 visitors from 42 countries. BA 2002 was totally huge, held in an area of several square kilometres. Anybody who is somebody goes to BA.
Though Singapore isn't USA/Europe centric (though it is an ex-British colony), for people in the Australasian area, it often hosts *the* conferences you should be at. And in case you haven't noticed, Asia is the next big market.
silly boy. (Score:1, Troll)
Poorly organized. Lynx-optimized website (with only two pages)
You would prefer PowerPoint slides as an invitation? What's missing?
only two months to write papers,
You don't think people have papers ready? Whole books have been written on the subject.
an overly broad topic,
Yah, yah, security is like that.
and being held in a pseudo-third world country away from the main countries where most research is being done
Kiss my ass. What godforsaken little grey town are you from to brag about? Got any chip fab nearby?
I'll be surprised if they register more than 500 attendees.
Singapore might have more than that in its LUG. The only problem these folks might have is that one or two of them take Windoze seriously, but that will be corrected when the presentations hit the screens and the questions bubble up and the truth is found.
Security vs. Usability (Score:4, Insightful)
Re:Security vs. Usability (Score:5, Insightful)
I can make you enter a 25-character password, changed daily. Extremely inconvenient, and it really doesn't add to security, since you'll just write it down all the time.
Finding where you can get the "biggest bang for the buck", i.e. the best increase in security for the least inconvenience, is a very important thing. If we stop making security needlessly a pain in the ass, then people will stop thinking that secure = impossible to use.
Re:Security vs. Usability (Score:2)
Are you advocating that Internet Explorer has a better security/inconvenience ratio than Mozilla?
(This might not be very funny, but not everyone can be a good humorist)
Re:Security vs. Usability (Score:1, Funny)
> Save a tree. Eat a beaver
Obviously not: you failed twice.
Re:Security vs. Usability (Score:2)
A real world (sort of) example:
http://www.hackles.org/cgi-bin/archives.pl?requ
Re:Security vs. Usability (Score:5, Insightful)
Re:Security vs. Usability (Score:4, Insightful)
But the whole point of the paper is the opposite of what you're saying. If the security interface is hard to use, people will misuse it, leaving gaping holes in their systems.
Re:Security vs. Usability (Score:2)
Want a good real-life example of how security gets hurt by poor usability? PGP, in early versions, made signing your own key into a separate step. The result is that even a top-flight security consultant once put a key on the keyservers that wasn't self-signed. That's a good example of the author's advice to make the defaults safe. Do that, and both usability and security go up.
Re:Security vs. Usability (Score:5, Insightful)
Saying that security isn't convenient glosses over the details; when you examine security in practice, there are a lot of things you can do to increase security and ease people's access.
e.g. Rather than 40-character passwords, use swipe cards (yes, the card could be stolen, but then at 40 characters the password would probably be written down somewhere and that bit of paper could be stolen too -- which is entirely the point).
Re:Why secure tokens rule. (Score:1)
Re:Why secure tokens rule. (Score:1)
While we all know that it would be easy to set up a system where they instantly get a new card, all that hassle will make it unlikely they lose one again. And they'll report it as soon as possible, because they know it might be a few hours before they get one, instead of waiting until they need it again and walking in and getting a new one right away.
Re:Security vs. Usability (Score:2)
The fact that this has been a problem "forever" is exactly why it needs to be addressed.
You should read the paper, or at least take a short look at it; it provides some very interesting ideas. It's true that you'll never have 100% security unless you turn off your computer, but this doesn't mean that security and how it is presented to the user cannot be improved.
Security is useless if usability is sacrificed (Score:5, Insightful)
What a crock. You obviously have never really done much secure system work. Security and usability are only in contention when people who only understand one side of the argument start dealing with people who only understand the other side of the problem. It is possible to have secure systems that do not place a significantly larger usage burden on the user if they are designed correctly, and Ping is one of the few people out there who I know has been thinking about this for more than fifteen minutes. This is not about security being convenient; it is about meeting security requirements without going to the extreme that you suggest and making the system useless. Sometimes this requires that you add a bit of additional effort on the part of the user, but often it means that you actually use the UI to let the user know that an action they are about to perform has security implications that might not be obvious to a casual user.
There is an old, probably apocryphal story about how someone ran a test on a bunch of users that presented them with a bunch of modal dialog windows in the midst of a task and one of the windows asked the user if they wanted to reformat the disk. When the users get bored or frustrated with poor UI design they will often switch into auto-pilot and in this case they blindly hit the "yes" button because that was the proper response to all of the other modal dialogs that had been interrupting their work. When the users complained the person running the test pointed out that the system asked them if they wanted to reformat the disk and they had said yes.
Security and UI should never be considered independent items in system design, because if you can't communicate what is happening and the consequences of actions to users, then the only security policies possible are the brain-dead ones that you suggest.
Re:Security is useless if usability is sacrificed (Score:2)
Yes, you can design software, systems, and even networks that are both secure and usable; hell, that's what I do for a living. It isn't, however, a "huge issue" that should be "deeply researched." It's something that EVERYONE KNOWS. A locked door is more inconvenient and less useful than an unlocked one, even if you have the key. What if you lose the key? What if you need to get in in a hurry?
This is the same problem of the security vs. usability paradigm. It's a simple concept that people like you are wasting too much time, effort, and emotion on. I'm not exactly sure how pointing out that the most secure system is the least accessible one makes me brain dead, because it's obvious. Instead of accusing people of not thinking first, maybe you should heed your own advice.
Re:Security vs. Usability (Score:2)
Now the task becomes mandatory user education that not only tells them the rules of the new password, but also explains WHY to them, and advises them to only write it down once and keep it someplace on their person (like their wallet). The rules are more complicated, but after the user has been educated properly, they will learn these rules and thus be able to use the system more securely.
The convenience of security precautions isn't the issue here, IMHO; the convenience of educating new users properly, and giving them timely reminders, is what makes security such a Gordian knot. But it is do-able, and anyone who works anywhere with a sexual harassment policy knows this: you are given extensive education when you start, and then get annual reminders of the policy. If people treated security of their infrastructure as the human problem that it actually is, and not as the technological one that they want it to be, then I would be willing to bet you would see huge increases in the quality of computer security in the workplace.
This will of course never happen because it is inconvenient. As long as managers continue to think of security as a tech problem, then the problem will never get solved. The time has come for managers to stop thinking of the construction of a secure network as a tech issue solvable by automation and expensive software, and start thinking of it as a human issue that requires education and practical training. In this way you will be better able to build a network that is both secure and usable.
Just use the big words... (Score:5, Funny)
Path of Least Resistance (Score:2, Insightful)
1 in 40 seems fair (Score:2, Insightful)
There is a reason for this: the conference is about security. A large part of security is defining new ways of keeping systems secure, and usability is just one of the myriad factors. I think having even this one paper is overkill - I read it and don't think that it contributes very much.
Before you mod me down as flamebait, read the paper and honestly ask yourself if it tells you anything new at all.
Re:1 in 40 seems fair (Score:3, Interesting)
Re:1 in 40 seems fair (Score:4, Insightful)
I mean, you should in turn ask yourself why it is that, with all the impressive mathematical/technical progress made at such conferences, probably less than 0.1% of all computer users actively use such technology to protect their privacy. A company's sysadmin is probably using ssh at least, but the CEO's secretary isn't using *anything*. Even amongst a technophile audience such as slashdot readers, who has the patience to consistently use PGP?
Last time I looked at the proceedings of a security conference, half of them seemed to be on watermarking, and what a complete waste of time it looks like that's been. With 20/20 hindsight they'd have been far better off concentrating on usability. Unfortunately the security field mostly attracts math geeks who just don't see such things as important.
What's the point of having the most secure lock in the world if no one uses it, or most people can't figure out how to lock it?
Judged from a purely consumer, average man/woman/and their collective dogs point of view, the security field has been an utter failure. (There are many other points of view from which it's been quite successful, of course.)
A.
Re:1 in 40 seems fair (Score:5, Interesting)
Re:1 in 40 seems fair (Score:1)
Of course, I don't see the point of this. I'd simply have a big list of 'things never to do' that people had to memorize when joining a company, and 'helping' someone with security issues, especially over the phone, would be one of them. People won't understand 'Don't ever give out your password.', but they will understand 'Don't ever talk with anyone outside channels about anything on the computer.' People understand channels, and unless their boss starts trying to hack their account they'll be fine.
Re:1 in 40 seems fair (Score:1)
You tell your employees never to give out the password over the phone. If they're allowed to do it with 'a password', then, yes, someone will find the password in the dumpster, or will have 'just started', or will call up someone else and get the password, or a million different things.
The technical support people do not need user passwords ninety-nine times out of one hundred, and when they do, they can get off their ass and walk over to the person.
In fact, technical support shouldn't be calling people anyway. I'd make that rule number one: If you ever get a call from 'technical support' (that you didn't initiate by calling them earlier), hang up and immediately call them back. It doesn't matter if it's a tiny little detail, it doesn't matter if they don't want anything, it doesn't matter if they ask for you by name and know your username, and just want to know what color your desk is, hang up and call them back. It sounds extreme, but if the first words out of tech support's mouth are always 'Yeah, this is tech support, call me back.'*click*, then it will soon be a habit.
Re:1 in 40 seems fair (Score:2)
Let's not forget that the whole idea of computing and/or networked computing was invented by humans, for humans. Does this not mean that it is subject to human failings?
Security through ignorance? (Score:5, Insightful)
- 10 character passwords, non-dictionary words, alpha-numeric. Safe, but can't remember them. So you write it on a post it note.
- Multiple levels of security. This means multiple usernames and passwords. This means the user keeps a list of them in their palm pilot/wallet.
- Secure systems invite back-doors (same as leaving a key under your door-mat...stupid, but very useful if you lock your keys inside).
Some companies base their security around no-one knowing anything about it. Microsoft is trying to do great things with UI ease of use, but in doing so they destroy security.
If you do *not* have an easy-to-use high-security system, people *won't* use it! And if they don't use it, it is totally useless. People will always pick ease of use over security. They will pick IE and OE because things "just work", they will write their passwords on post-it notes on their screens because they can't remember them, they will leave keys under doormats.
Re:Restated paper gets a +4 (Score:5, Insightful)
All he did was restate the summary of the paper, and he gets a +4.
yah, but the paper is 21 pages.
A classic example... if someone needs to read 21 pages to use a security system, they won't use it. If they can get the paper in a 3-point summary, they will use it. It proves that usability is important, possibly more so than the system itself.
Re:Restated paper gets a +4 (Score:1)
So my post was similar to the summary? so I guess we musta read the same paper.
woohoo
Re:Security through ignorance? (Score:2, Informative)
not really...my technique is to use easy-to-remember phrases, only you convert applicable letters to numbers 1337-style.
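As a sketch of that technique (the phrase and the substitution map here are made up, and note that cracking tools know these substitutions well, so treat this as a memorability aid rather than added entropy):

```shell
# map common letters to look-alike digits, then drop the spaces
echo 'blue elephant dances' | tr 'aeiost' '431057' | tr -d ' '
# -> blu33l3ph4n7d4nc35
```

The result is long and not in any dictionary, but still reconstructable from a phrase you can actually remember.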
Re:Security through ignorance? (Score:1)
A.
Passwords (Score:1, Informative)
Firstly, no username. People know their own name better than any other word. Trying to give them another one is an exercise in futility. Usernames are frequently very easily guessable, and if all the system's passwords are unique, unnecessary.
Passwords should be system assigned, firstly to ensure uniqueness, and secondly to make damn sure that they are from an appropriately large set of possibilities. This particular set, which is quite easy for people to remember but incredibly large, is the combination of 3 randomly selected nouns. For example: BeachballTruckWaterpipe
The set of possibilities is vast; with a large enough noun list it can rival the set of all 8-character alphanumeric strings. It's not hard for a person to memorise something like this, so they won't have to write it down.
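That size comparison is easy to check. A sketch, assuming a noun list of 20,000 entries (an assumed figure, not a measured one; real machine-readable lists vary a lot):

```shell
# compare the three-noun passphrase space against 8-char alphanumerics;
# 20000 is an assumed noun-list size
awk 'BEGIN {
  phrase = 20000^3      # three independent noun picks
  alnum  = 62^8         # 26 upper + 26 lower + 10 digits, 8 positions
  printf "three nouns:  %.2e\n8-char alnum: %.2e\n", phrase, alnum
  print ((phrase > alnum) ? "nouns win" : "alnum wins")
}'
```

With a 20,000-noun list the phrase space is about 8x10^12, while 62^8 is about 2.2x10^14, so the claim only holds once the noun list tops roughly 60,000 entries.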
Re:Passwords (Score:2, Interesting)
Passwords should be system assigned, firstly to ensure uniqueness, and secondly to make damn sure that they are from an appropriately large set of possibilities.
Sounds good...but if someone can't remember a username based upon their own name, how can they be expected to remember a system assigned password?
Re:Passwords (Score:1)
I have also been thinking... being a bloke, we can only handle X things in our brain at the same time. Remembering a process for changing our name to a username, as well as a password (3 objects), probably stretches us too far.
Just remembering a password may be easier. It may mean memorising 6 passwords (PIN, home, work, etc.), but that is still much easier than 6 usernames and 6 passwords. That's 6 things versus 36 things (combinations).
Windoze Problems. (Score:2)
Some companies base their security around no-one knowing anything about it. Microsoft is trying to do great things with UI ease of use, but in doing so they destroy security. You also use the term "IE" about ten times.
Worse, the 10 Principles linked above are demonstrated with Windoze! Ah, what a waste. The trade-off between ease of use and security is fictional. Best practices, such as unprivileged user accounts, are just as easy to implement as an email account that automatically executes code sent by strangers. Sadly, many people think that M$ is the path of least resistance and that if only it can be secured, the world will be better off.
The problem of security in M$ apps is the distribution method: closed source. This cannot be fixed, and so Windoze cannot be made secure. Don't take my word for it. Allow me to quote this excellent summary from Sonnenreich and Yates's "Building Linux and OpenBSD Firewalls", John Wiley and Sons, 1999. It was true then, M$ has admitted most of it by now, and it will be true tomorrow because the model is the same:
The Windows platform was never built with security in mind; security was added as an afterthought. There are fundamental weaknesses in the operating system that can't be cured with a bandage. For example, background processes can run in a way such that they cannot be detected. ... Part of the problem stems from the fact that the vast majority of Windows packages are closed source. This means that only the original developers can identify and fix bugs in the software. Most Windows-based software products that aren't compiled directly from GNU-licensed Unix source code are security disasters.
To make matters worse, companies confuse issues by claiming security through strong encryption. While the data in transit might be secure, that has nothing to do with whether the program is secure on a fundamental level. For example, often encryption keys and passwords are stored on disks so users don't have to type them in every time. Regardless of how complex the scheme, the original information can always be recovered if the password is stored on disk and can be recovered by the program without user input.
Many Windows applications are developed using common development libraries, most of which come from Microsoft. Hackers have repeatedly found insecurities such as buffer overflow conditions in these libraries. It becomes trivial to determine if an application has been built using such a library. ... These libraries have provided hackers with a reliable and consistent set of exploits.
LapLink, NetMeeting, PC Anywhere and other remote session control systems for Windows all rely on proprietary, unpublished transport mechanisms to create an aura of security. While you may be unable to determine what these programs are doing with your data, rest assured that hackers know exactly how the data are being transported... Since nobody can see the source code, and since only the vendor can create and distribute fixes, these programs are a hacker's dream.
Add to this the fact that M$ charges an arm and a leg for said development libraries while refusing to be responsible for them and you start to understand why there's a new IE hole every month and how those holes can be persistent and how binary "patches" can introduce just as many problems as they fix.
This flawed distribution model makes Windows the path of highest resistance for securing PC desktops. Microsoft's entire business model of crushing competition and devouring the profit for any and all innovation brought to the masses depends on it, and they are unwilling to change. Their "Shared Source" initiative, MSDN and all, fails to share the libraries at fault. Microsoft steadfastly refused to share those libraries throughout their conviction and "remedy" for anti-competitive practices. Time and time again, they have used the power of their position to abuse their users instead of following better practices, and I have little faith they will do any better in the future. Follow M$ at your peril.
Re:Windoze Problems. (Score:2)
Open-sourcing something isn't a magical fairy spell that creates security, just as it isn't one that creates fewer bugs or software that is better than the commercial software available. A hackable version of Apache is no different from a hackable version of IIS...
While free software may not be magic in and of itself, it does guarantee that anyone can fix it. Closed source software is guaranteed unfixable. It's not just a matter of time until a fix; it's a matter of the form of the fix and the thoroughness of the fix. Apache is a good example, thank you. When Apache has a problem because some underlying library has a flaw, more than Apache is fixed. The library is fixed, and all software compiled with it will be fixed, for anyone who needs it. In the Windoze world, only one developer has the fixed library, and everything made without it is broken. Broken libraries can last for years in the Windoze world because M$ does not simply share the source code, but charges money for binaries. Where those libraries end up at the average company making Windows software is a mystery, and often people put off buying new libraries as long as possible. In one situation, you can be sure that all developers have the best libraries available all the time. In the other, you can be sure that developers don't and can't find out.
Once people learn to give a fuck about their security... security not being trumpeted by smelly nerds who like to PGP their grepballed tarzips before sending their SSH'ed e-mail FTPs to each other via cryptic commands would help... easy-to-understand education in basic computer hygiene is needed.
You need to wash your mouth and keyboard out with soap, you flaming troll.
Re:Windoze Problems. (Score:1)
I find them obvious ... (Score:3, Insightful)
I actually do disagree with the first: making the path of least resistance the most secure oft leaves the non-obvious approaches open to exploitation.
Re:I find them obvious ... (Score:2, Insightful)
In a perfect world all paths to a result would have the same measure of security, without an option for anything less secure.
Re:I find them obvious ... (Score:5, Insightful)
Have you actually read the paper? If you have only read the ten one-sentence principles, you might have misinterpreted that one. The authors do not advocate offering an alternative, non-natural way of doing things that is insecure. In fact, that statement is not even about offering multiple ways to achieve the same task (e.g. "menu item or keyboard shortcut," or "dialog or wizard"). The idea is simply that using the system securely should be easier (i.e. less resistance) than using the system in an insecure way. In other words, whenever you're about to do something that is not secure, you'll face resistance, so taking the path of least resistance will be most secure.
I think a huge part of the principle could be more simply described as "secure by default," which I hope everyone will agree with. Another important goal mentioned in the paper is "to keep the user's motivations and the security goals aligned with each other," i.e. you want to make sure that while working with your software, the user will never think about granting certain permissions simply because that would be more convenient.
My top concern (Score:5, Funny)
Re:My top concern (Score:1)
Password trick. (Score:2)
I know your post-it notes were a joke, but I just had to pass along what seems to be a good practice. There are so many bad practices out there already: pet names and kid names (so easy to guess), and silly web-based password generators, which leave themselves in the cache.
Re:Password trick. (Score:2)
#!/bin/sh
# grab a chunk of random bytes, keep only the alphanumerics, take eight
word=`head -c 256 /dev/urandom | tr -dc '[:alnum:]'`
echo $word | cut -b 1-8
There's probably better ways to do it, though; if anyone here is an sh wizard, then by all means please reply, and I thank you in advance for your ideas.
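Taking the parent up on that invitation, one tighter variant (a sketch, assuming /dev/urandom and a POSIX tr; LC_ALL=C keeps tr from choking on multibyte locales):

```shell
# filter kernel randomness down to alphanumerics and keep the first ten;
# tr reads until head has its ten characters, so length is guaranteed
LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 10
echo
```

Pulling directly from the stream avoids guessing how many raw bytes survive the filter.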
If natural; Then be most secure; Wha? (Score:2)
I am very confused by that suggested principle. The most natural way to delete a file is to "rm file.txt". So that should be the most secure way to delete the file? What if I delete the file via my file manager? Should that be less secure?
I'm sure there are more details buried in the PDF files on their site... but the short summary of that principle sure confuses me.
Re:If natural; Then be most secure; Wha? (Score:3, Interesting)
personally (Score:1, Interesting)
Oh, user interface design is obvious now is it? (Score:1)
User centered design is about making sure that all the time you spend perfecting your system design has any value to it and ends up being useful for the intended audience. Sure, from a far enough overhead view user centered design looks like stating the obvious, this is true for many fields. In reality there is ample evidence that it isn't, because man, do some interfaces suck for the intended audience.
Only by explicitly stating what everyone thought was obvious can we move beyond it. This is in a sense the essence of science.
maybe my issue really is small (Score:1, Interesting)
Maybe such a program exists? Maybe a more specific one for OpenSSH exists that will even give you the exact commands to enter to ssh-keygen.
Re:maybe my issue really is small (Score:1)
This is not offtopic. It's an example of a security issue that stems from the lack of an application for OpenSSH (or others) to address usability.
Re:maybe my issue really is small (Score:1)
um, man ssh-keygen?
Re:maybe my issue really is small (Score:1)
Why the hell isn't there an ssh-keygen --upload? You could say ssh-keygen -t rsa --upload foo@example.com and it would create your key, stick it in the right place on your computer to be used, ssh over there, asking for the password one last time, and stick the key in the right file.
Seriously, folks, this is rather stupid. If you're worried about where to put it on the remote system, then perhaps sshd needs an upgrade where you can hand it the key and it knows what to do with it. (After all, it has to know where it's looking.)
Because, right now, it's three steps...generate the key, scp the key, and ssh over there and stick it at the end of the file.
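Those three steps look roughly like this as a sketch (foo@example.com is a made-up host; file locations assume a default OpenSSH setup). Worth noting: OpenSSH ships an ssh-copy-id script that does essentially what the parent asks for, minus the key generation:

```shell
ssh-keygen -t rsa -f ~/.ssh/id_rsa              # 1. generate the key pair
scp ~/.ssh/id_rsa.pub foo@example.com:          # 2. copy the public key over
ssh foo@example.com \
    'cat id_rsa.pub >> ~/.ssh/authorized_keys && rm id_rsa.pub'   # 3. append it

# ssh-copy-id collapses steps 2 and 3 into one command:
ssh-copy-id -i ~/.ssh/id_rsa.pub foo@example.com
```

So the plumbing for a one-step setup already mostly exists; the gap is that nothing bundles it into the key-generation step itself.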
A note on usability and security from my exp... (Score:4, Interesting)
In my opinion, it's rare that I've seen anything blend robust power with a simple user interface. Usually, in order to make things "more intuitive" for the user, they've stripped down a lot of the options from the user. The logic behind this was that if the user has fewer choices, there's less the user has to know or think about when configuring something. On the other side of the coin, I've seen programs that are completely customizable, but you spend three days RTFMing trying to figure out why it doesn't work, only to find out that the hexadecimal error message it's spitting out is because there is a hidden space where there shouldn't be, or some other small syntax error in a 30-page text configuration file.
The best ways that I've seen usability and functionality blended (which is the same as usability vs. any function, such as security) have been when the simple choices were offered, but with an option right next to the choice to allow for greater customization of that specific choice.
Anyways, I've probably ranted enough for now. Best get back to work.
Usability vs Functionality (Score:3, Insightful)
This is what the article is trying to get across: building security measures into tasks is pointless if the users don't utilize those measures. If the most natural way to do a task is insecure, then people will tend to use that insecure method. Making security quickly and naturally achievable by all users will result in more secure systems; the article is trying to set some guidelines about how to accomplish this.
Re:Usability vs Functionality (Score:2)
The tradeoff people are thinking of is between security and power, not between security and usability.
Security and power really are each other's enemies, but power isn't the same thing as usability.
10 for 10's sake (Score:3, Insightful)
look at how similar these are:
-- Visibility. The interface should allow the user to easily review any active actors and authority relationships that would affect security-relevant decisions.
--Identifiability. The interface should enforce that distinct objects and distinct actions have unspoofably identifiable and distinguishable representations.
It would be very difficult to follow one of these without following the other by default. Then when you get to the principle of Expressiveness you get a two-fer.
Most of the principles overlap so badly into each other that I'm not sure how they decided to draw the lines.
I guess they were going for some "magic number" that would feel powerful... like the 10 commandments. I would be embarrassed to have my name associated with that list.
Ok, I'll bite. (although I'll probably regret it) (Score:3, Interesting)
1. Use logical security defaults. Users should not be burdened with security issues unless they want to be.
2. If a user chooses to look at or modify their security settings, options should be kept simple through massive abstraction, but should intuitively depict the meaning of the settings.
3. All aspects of the UI should function the same way regardless of security settings.
Well, I tried to come up with more.. but that pretty much covers it. I guess I could make another list of "logical security defaults". That would be things like: If a remote entity requests a secure transaction, it should be granted without local user interaction.
The other points in the "10 principles" list really either don't apply to security or don't apply to the UI.
******
Mr. Owl, how many licks does it take to get to the center of a Tootsie-Roll Pop?
Re:Outlook exploits have been doing this for years (Score:2, Funny)
Is this like clicking on that attachment that says "I_love_you.vbs" in Outlook? Or should the computer produce some sort of audible warning on mouse-over?
In many cases I'd say it would have to involve a mouse mod that gives a 60kV shock, rather than just a beep.
principles primarily interactive (Score:4, Insightful)
The principles seem primarily oriented toward interactive use of a security product and helping to explain how the product works in a GUI sort of way. I can easily see how they would be applied to explaining trust in a PGP situation.
However, most users (depends on the audience of course) aren't interested in security being interactive. They want it to be transparent just like all the other nuts and bolts. The only principle that really captures that idea is "path of least resistance".
I think a good example of the maximum interactivity of a security system might be the military's encrypted telephone lines. Press the "encrypt on" button, call, say "this is a secured line" and start talking. (I haven't actually used these systems, so if that is a misunderstanding, please say so).
I think security could easily be made more "under the hood". Look at the whole DRM thing... pretty under the hood. Imagine a system like that written to secure the end-user, not the manufacturer.
secure UIs apply to more than just crypto tools (Score:4, Interesting)
Re:principles primarily interactive (Score:1)
No, you're wrong. The interaction is already taking place, the paper is about how to add security to those ALREADY EXISTING interactions!
I think that the problem that goes unstated is that transparency cuts both ways.
Because, sure, you want to know you're secure, right? Yeah, of course you need to know when you are, and when you are not.
So making it totally transparent is not some kind of simple answer. You always need the This Is Secure light or button or whatever like you point out.
simon
Re:principles primarily interactive (Score:2)
Yes and no. Using a STU-III (my sole experience - I have no idea if this system is current) is fairly simple. Enabling the crypto feature is as simple as turning a key-like device, so it's almost like hitting a button. But that entirely ignores the issue of key management (the key-like device is rather like a civilian smartcard).
It might be worth noting that this makes a rather interesting example. Conversations (and data - the STU-III can handle FAX and MODEM traffic too) are only protected when the encryption mechanism is engaged. Otherwise, it's a standard phone. Which makes sense, since most phone devices are not STU-IIIs. But it also relies on the end-user(s) for security.
The perceived weakness of the STU-III is not its encryption. The issue is that STU-III units are identified and heavily monitored for intelligence during the unencrypted conversations before (or after) encrypted communications. Information is gathered before users decide to engage the STU-III's encryption mechanism.
And that's a laudable goal. However, keep in mind that invisibility is no holy grail either.
CSS, the DVD scrambling system, was very "under the hood". Yet a mistake in one vendor's implementation made the system all but worthless.
Microsoft's code signing is another good "under the hood" example. But mistakes in issuing certificates, as well as recent problems with faulty ActiveX controls, show that this system also has issues.
Necessary but not sufficient for security (Score:5, Funny)
Re:Necessary but not sufficient for security (Score:1)
Re:Necessary but not sufficient for security (Score:1)
They seem to have forgotten at least one principle: The user must NOT be an idiot.
Sure, but the guidelines need to be implementable.
better components... (Score:3, Interesting)
gpg really needs to be made into a library, with the commandline as one user of that library. it would be easier to integrate gpg into mail/irc/web clients if you could make well defined api calls. something akin to ssh-agent might be nice for gpg. it would be nice if you could easily encrypt/sign irc or email traffic with your gpg keys. mutt does a pretty good job, but there are usability holes. it would also be nice if you could do things like sign
Re:better components... (Score:2)
actually openssl is a good example, think of how many places that has been added (my redhat system has 37 packages requiring libssl.so.2 - 48 require libcrypto.so.2). a large number of things use gpg already on my box - mutt and rpm spring to mind. but it could be used for lots of other things... nautilus; link it into qt/gtk for use in file load/save dialogs. anything that loads executable files from the net (plugins for xchat, mozilla, the gimp).
given a libgpg, i'm sure many projects could do cool shit(tm).
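a sketch of what a well-defined signing api could look like (in python, using the stdlib's hmac as a stand-in for real gpg key operations - "libgpg" here is still hypothetical, so the class and method names are invented for illustration):

```python
import hashlib
import hmac

class SigningKey:
    """Toy stand-in for a GPG key: a real library would wrap actual
    OpenPGP operations, but the point is the API shape, not the crypto."""
    def __init__(self, secret: bytes):
        self._secret = secret

    def sign(self, message: bytes) -> bytes:
        # Detached signature over the message bytes.
        return hmac.new(self._secret, message, hashlib.sha256).digest()

    def verify(self, message: bytes, signature: bytes) -> bool:
        # Constant-time comparison, as any real implementation should use.
        return hmac.compare_digest(self.sign(message), signature)

# A mail or IRC client could call this directly instead of shelling out:
key = SigningKey(b"not-a-real-key")
sig = key.sign(b"hello, world")
print(key.verify(b"hello, world", sig))   # True
print(key.verify(b"tampered", sig))       # False
```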
Papers on secure web app design (Score:1, Informative)
www.cgisecurity.com/lib [cgisecurity.com]
My problem with the article (Score:4, Insightful)
The distinction between an "actor" and an "object" can only be clear in a primitive system.
Get sophisticated, and the difference between code and data starts to blur.
Is your database report template an object, or an actor? It's both.
The blurring of the code/data distinction shows up in a lot of recent exploits. ILoveYou.VBS is an actor that looks like an object. That one was avoidable, but what about a GET or POST request? It's data that causes code to run on the web server. From another point of view, its content is a short web-server program.
Sure, you can dream about analyzing and controlling the capabilities of data-which-causes-actions. But if you ever let it achieve Turing equivalence you can't even guarantee whether it halts.
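The data-that-runs problem is easy to demonstrate. In this Python sketch, the same kind of string is harmless when parsed as data but becomes an actor the moment it is handed to an evaluator:

```python
import ast

payload = "[1, 2, 3]"                 # looks like plain data
print(ast.literal_eval(payload))      # parsed strictly as data: [1, 2, 3]

sneaky = "__import__('os').getcwd()"  # also "just a string"...
# eval(sneaky) would execute it as code -- the string is now an actor.
# A data-only parser, by contrast, refuses anything that isn't a literal:
try:
    ast.literal_eval(sneaky)
except ValueError:
    print("rejected: not a literal")
```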
Re:My problem with the article (Score:2)
Not primitive; well designed.
After all, isn't that the point? If you don't know what is code, you can't tell what the code is doing and you can't tell if the code is secure.
I'll grant that security is not always easy. Users are always trying to put code where only data should be. If you ask for a filename in a BASH script, they'll enter "$(rm -rf ...)".
But that's the tricky bit: Making sure that all the data stays data...
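Keeping data as data usually means never letting it reach an interpreter. A sketch in Python (the principle is the same in a shell script):

```python
import shlex
import subprocess

filename = "$(rm -rf ~)"   # hostile "filename" supplied by a user

# Unsafe: with shell=True the shell would expand the command substitution.
#   subprocess.run(f"stat {filename}", shell=True)

# Safe option 1: pass arguments as a list, so no shell ever sees the string.
result = subprocess.run(["ls", "--", filename],
                        capture_output=True, text=True)
print(result.returncode)   # nonzero: no such file, and nothing was executed

# Safe option 2: if a shell string is unavoidable, quote the data first.
print(shlex.quote(filename))   # the string comes back as inert, quoted text
```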
replies to various comments (Score:2, Insightful)
It puzzles me that so many people have posted comments describing one or another common usability-related security problem that the paper discusses, but without even mentioning the solutions the paper offers.
Twitter says:
On Microsoft Windows or Linux, any code you run executes with your privileges by default. Running code without, say, the privilege to send further email requires great effort with a tool like Subterfugue [sourceforge.net].
Twitter continues:
Licensing Microsoft Windows freely would not solve its security problems; many of them exist in Linux, too, just as the quote you inserted explains in detail.
Beryllium Sphere says:
The paper doesn't really depend on that distinction, at least not in the way you think. One could call a process an actor, but an executable file an object, for example.
Well, if you run it with a ulimit on CPU time, you can guarantee that it will halt in at most 10 CPU seconds. :) And if you run it in a context where it has no filesystem access, you can guarantee it won't delete any files; and if you run it in a context where it has no network access, you can guarantee it won't send email. Etc.
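The ulimit idea can be sketched directly on a Unix system (assuming Linux here; `preexec_fn` is a Unix-only hook):

```python
import resource
import subprocess
import sys

def run_limited(code: str, cpu_seconds: int = 1) -> subprocess.CompletedProcess:
    """Run a Python snippet in a child process under a hard CPU-time cap."""
    def set_limits():
        # Once the child exceeds the CPU limit the kernel kills it,
        # so even a non-halting loop is guaranteed to stop.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    return subprocess.run(
        [sys.executable, "-c", code],
        preexec_fn=set_limits,   # runs in the child, before exec (Unix only)
        capture_output=True, text=True,
    )

# A loop that would never halt on its own is stopped by the kernel:
result = run_limited("while True: pass")
print(result.returncode)   # nonzero: the child was killed by a signal
```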
A non moose cow wrote:
Visibility means you can see the things that matter; identifiability means the things you can see don't look the same. They seem quite orthogonal to me.
Someone asked, "What, you mean rm file.c should be the most secure way to delete a file?" And Anonymous Coward responded:
Superb example.
I urge everyone to read this paper, at least. Thinking about security in the terms it suggests will change the landscape of computer security, and it could lead to solutions to the most persistent problems of both computer security and usability.
some ideas (Score:1)
Different e-mails will have different users: for example, if I receive a letter from my wife, I would have set up her e-mail account on my computer to execute code (authenticated, of course, by a proper password in her email so nobody can impersonate her). On the other hand, any spam or other unknown e-mail would be assigned to the user 'anonymous', who doesn't have any rights whatsoever other than sending data to me.
Another solution for long passwords is for the PC to have a slot for a card where all my user info (including the password) is stored. The PC would then load the O/S and the proper account according to the information stored on the card. Of course the problem of a stolen card remains, but that is a generic problem concerning all types of cards (only genetic identification would solve the identity-theft problem). The card solves the usability problem of long passwords: the user doesn't have to do anything other than insert it.
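The per-sender idea boils down to a privilege lookup with a safe fallback; a minimal sketch (all addresses and right names invented for illustration):

```python
# Map authenticated senders to rights; anyone unknown or unauthenticated
# falls back to an 'anonymous' principal that can only deliver data.
POLICY = {
    "wife@example.com": {"deliver", "run_code"},
    "anonymous":        {"deliver"},
}

def rights_for(sender: str, authenticated: bool) -> set:
    if authenticated and sender in POLICY:
        return POLICY[sender]
    return POLICY["anonymous"]

print(rights_for("wife@example.com", authenticated=True))     # full rights
print(rights_for("spammer@example.net", authenticated=False)) # deliver only
# Even a known address gets no code rights if authentication failed:
print(rights_for("wife@example.com", authenticated=False))
```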
Older Articles on Human Factors and Security (Score:1)
Still, he has contributed a great deal and has an Alertbox article mentioning some of these same usability considerations as they relate to security. The article dates back to November 2000. Like many other postings here, it is mainly focused on password policy and use.
The Alert Box Article [mondosearch.com]
Another paper for those interested (Score:2, Insightful)
User Interaction Design for Secure Systems [berkeley.edu]
by Ka-Ping Yee [mailto]
Computer Science Department
University of California, Berkeley
Abstract:
The security of any computer system that is configured and operated by human beings critically depends on the information conveyed by the user interface, the decisions of the computer users, and the interpretation of their actions. We establish some starting points for reasoning about security from a user-centred point of view, by modelling a system in terms of actors and actions and introducing the concept of the subjective actor-ability state. We identify ten key principles for user interaction design in secure systems and give case studies to illustrate and justify each principle, describing real-world problems and possible solutions. We anticipate that this work will help guide the design and evaluation of secure systems.
academic world (Score:2)
Conferences (at least the quality ones) and journals publish refereed papers. This means that the papers have to be reviewed by peers who try to gauge their soundness, interest and novelty.
Needless to say, papers about human interaction are difficult to evaluate. Many of them are just lists of "pious principles". Few of them are actually backed by real-life studies (those are difficult to conduct well). You would have to take representative sample sets of users, ask them to use various systems, watch their behavior, and ask their opinion.
Also, that kind of "soft" science that verges on social studies is traditionally ill-considered in certain academic circles, which suspect it of being lots of talk and little scientific content.
All this makes it unsurprising that few papers get into such security conferences.