Security

Secure Interaction Design (120 comments)

Pingster writes "Next week, ICICS 2002 will take place in Singapore. Out of 40 papers at the conference, there will be just one paper that looks at human factors. Though many people know that usability problems can render even the strongest security useless, the security community has only recently started paying attention to usability issues. More serious thinking about usability and security is desperately needed. The paper proposes ten interaction design principles. Maybe you'll find them obvious; maybe you'll disagree with them entirely. Great! Let's have a discussion."
  • IMHO... (Score:3, Informative)

    by unterderbrucke ( 628741 ) <unterderbrucke@yahoo.com> on Wednesday December 04, 2002 @07:25PM (#4814706)
    Poorly organized. Lynx-optimized website (with only two pages) [freewebtools.com], only two months to write papers [lwn.net], an overly broad topic, [csoonline.com] and being held in a pseudo-third world country [krdl.org.sg], away from the main countries where most research is being done, don't exactly add up to success. I'll be surprised if they register more than 500 attendees.
    • MOD PARENT UP (Score:2, Offtopic)

      by SoCalChris ( 573049 )
      Poorly organized. Lynx-optimized website (with only two pages) [freewebtools.com], only two months to write papers [lwn.net], an overly broad topic, [csoonline.com] and being held in a pseudo-third world country [krdl.org.sg], away from the main countries where most research is being done, don't exactly add up to success. I'll be surprised if they register more than 500 attendees.

      This isn't flamebait or a troll; he has good points, and he provided links to back up his claims. Instead of modding him down as flamebait, why not post a reply saying why you disagree with him?
    • Re:IMHO... (Score:3, Insightful)

      by Slurpee ( 4012 )

      Poorly organized. Lynx-optimized website (with only two pages), only two months to write papers, an overly broad topic, and being held in a pseudo-third world country, away from the main countries where most research is being done, don't exactly add up to success. I'll be surprised if they register more than 500 attendees.


      Singapore isn't exactly a small backwater. I have attended several conferences there, including Broadcast Asia [broadcast-asia.com], which draws something like 11,000 visitors from 42 countries. BA 2002 was totally huge, held in an area of several square kilometres. Anybody who is somebody goes to BA.

      Though Singapore isn't USA/Europe-centric (though it is an ex-British colony), for people in the Australasian area it often hosts *the* conferences you should be at. And in case you haven't noticed, Asia is the next big market.
    • silly boy. (Score:1, Troll)

      by twitter ( 104583 )
      What a flame. Is that how you got all those -1 flamebait mods? Everything is offensive; where to start?

      Poorly organized. Lynx-optimized website (with only two pages)

      You would prefer PowerPoint slides as an invitation? What's missing?

      only two months to write papers,

      You don't think people have papers ready? Whole books have been written on the subject.

      an overly broad topic,

      Yah, yah, security is like that.

      and being held in a pseudo-third world country away from the main countries where most research is being done

      Kiss my ass. What godforsaken little grey town are you from to brag about? Got any chip fab nearby?

      I'll be surprised if they register more than 500 attendees.

      Singapore might have more than that in its LUG. The only problem these folks might have is that one or two of them take Windoze seriously, but that will be corrected when the presentations hit the screens and the questions bubble up and the truth is found.

  • by signine ( 92535 ) <slashdot@NosPAm.signine.org> on Wednesday December 04, 2002 @07:27PM (#4814717) Homepage
    This isn't anything new, really; the security vs. usability argument has been a problem forever, and frankly, it's not something to be addressed. The more secure something is, the more you restrict its usage, and the more you restrict it the less useful it becomes. The most secure computer is one that is off, encased in concrete, and buried where no one will ever find it. This is equally true of physical security, as anyone who's ever had to wait at a metal detector will attest. Security isn't convenient, and it probably never will be.
    • by RollingThunder ( 88952 ) on Wednesday December 04, 2002 @07:31PM (#4814744)
      Yes, but there are degrees to everything.

      I can make you have to enter a 25-character password, changed daily. Extremely inconvenient - and it really doesn't add to security, since you'll just write it down all the time.

      Finding where you can get the "biggest bang for the buck", IE: the best increase in security for the least inconvenience, is a very important thing. If we stop making security needlessly a pain in the ass, then people will stop thinking that secure=impossible to use.
      • IE: the best increase in security for the least inconvenience, is a very important thing.

        Are you advocating that Internet Explorer has a better security/inconvenience ratio than Mozilla? :)

        (This might not be very funny, but not everyone can be a good humorist)
        • by Anonymous Coward
          > (This might not be very funny, but not everyone can be a good humorist)
          > Save a tree. Eat a beaver

          Obviously not: you failed twice.
      • I can make you have to enter a 25-character password, changed daily. Extremely inconvenient - and it really doesn't add to security, since you'll just write it down all the time.

        A real world (sort of) example:

        http://www.hackles.org/cgi-bin/archives.pl?request=147
    • by nautical9 ( 469723 ) on Wednesday December 04, 2002 @07:35PM (#4814773) Homepage
      I respectfully disagree. Although securing things has typically made using them harder, there are certainly measures you can take that are transparent to the user, SSL and SSH being the leading examples. Sure, they do little to secure the machines you're talking to, but they virtually remove the fear that someone listening in on the conversation can see what you're doing (and as wireless tech becomes more popular, this kind of ease of use will be vastly more important).
    • by JoeBuck ( 7947 ) on Wednesday December 04, 2002 @07:37PM (#4814786) Homepage

      But the whole point of the paper is the opposite of what you're saying. If the security interface is hard to use, people will misuse it, leaving gaping holes in their systems.

      • This is exactly what I tell clients. "Security" that stops people from getting their jobs done will be bypassed and will be worse than useless.

        Want a good real-life example of how security gets hurt by poor usability? PGP, in early versions, made signing your own key into a separate step. The result is that even a top-flight security consultant once put a key on the keyservers that wasn't self-signed. That's a good example of the author's advice to make the defaults safe. Do that, and both usability and security go up.
    • by King of the World ( 212739 ) on Wednesday December 04, 2002 @07:50PM (#4814875) Journal
      Read the article. It's not "vs.". If a system trying to be secure gets in the users' way too much, the users will rebel and find ways around it (writing down passwords on Post-it notes), and so you're not actually more secure.

      Saying that security isn't convenient glosses over the details; when you examine security in practice, there are a lot of things you can do to increase security and ease people's access.

      E.g., rather than 40-character passwords, use swipe cards (yes, the card could be stolen, but then a 40-character password would probably be written down somewhere, and that bit of paper could be stolen too -- which is entirely the point).

    • This isn't anything new, really; the security vs. usability argument has been a problem forever, and frankly, it's not something to be addressed.

      The fact that this has been a problem "forever" is exactly why it needs to be addressed.

      You should read the paper, or at least take a short look at it; it provides some very interesting ideas. It's true that you'll never have 100% security unless you turn off your computer, but this doesn't mean that security, and how it is presented to the user, cannot be improved.

    • by Jim McCoy ( 3961 ) on Wednesday December 04, 2002 @08:40PM (#4815188) Homepage
      This isn't anything new, really; the security vs. usability argument has been a problem forever, and frankly, it's not something to be addressed.


      What a crock. You obviously have never done much secure-system work. Security and usability are only in contention when people who only understand one side of the argument start dealing with people who only understand the other side of the problem. It is possible to have secure systems that do not place a significantly larger usage burden on the user if they are designed correctly, and Ping is one of the few people out there who I know has been thinking about this for more than fifteen minutes. This is not about security being convenient; it is about meeting security requirements without going to the extreme that you suggest and making the system useless. Sometimes this requires a bit of additional effort on the part of the user, but often it means that you actually use the UI to let the user know that an action they are about to perform has security implications that might not be obvious to a casual user.


      There is an old, probably apocryphal story about someone who ran a test on a bunch of users, presenting them with a series of modal dialog windows in the midst of a task; one of the windows asked the users if they wanted to reformat the disk. When users get bored or frustrated with poor UI design they will often switch into auto-pilot, and in this case they blindly hit the "yes" button, because that had been the proper response to all of the other modal dialogs that had been interrupting their work. When the users complained, the person running the test pointed out that the system had asked them if they wanted to reformat the disk, and they had said yes.


      Security and UI should never be considered independent items in system design, because if you can't communicate what is happening, and the consequences of actions, to users, then the only security policies possible are the brain-dead ones that you suggest.

      • You obviously missed my point. I never said that the two concepts were mutually exclusive, but rather inversely proportional. If you have a system which is impossible to attack from the Internet, it's not connected to it. If you have a password length of 24 characters and someone tries to crack it using a brute-force algorithm, it's going to take a substantially longer time. If you want your files to be secure you shouldn't share them over the network, but it is nonetheless convenient to do so in many situations.

        Yes, you can design software, systems, and even networks that are both secure and usable; hell, that's what I do for a living. It isn't, however, a "huge issue" that should be "deeply researched." It's something that EVERYONE KNOWS. A locked door is more inconvenient and less useful than an unlocked one, even if you have the key. What if you lose the key? What if you need to get in in a hurry?

        This is the same problem as the security vs. usability paradigm. It's a simple concept that people like you are wasting too much time, effort, and emotion on. I'm not exactly sure how pointing out that the most secure system is the least accessible one makes me brain-dead, because it's obvious. Instead of accusing people of not thinking first, maybe you should heed your own advice.
    • I don't think it's fair to say it becomes less useful; the more secure system just requires that a user adopt a new set of operating rules that correspond with that system. If your system requires a password of at least 8 chars, chances are that the average user will pick some easy-to-remember word with special significance to them, or something which they believe is pretty random but really isn't. If the password has to have a combination of upper-case, lower-case, numeric, and symbol chars (e.g. %), then it makes your user think of a way of remembering the password. Writing down a new password once isn't that bad in itself; it's when you start writing passwords down all over the place that this becomes a problem.

      Now the task becomes mandatory user education that not only tells them the rules of the new password, but also explains WHY, and advises them to write it down only once and keep it someplace on their person (like their wallet). The rules are more complicated, but after users have been educated properly, they will learn these rules and thus be able to use the system more securely.

      The convenience of security precautions isn't the issue here, IMHO; the convenience of educating new users properly, and giving them timely reminders, is what makes security such a Gordian knot. But it is doable, and anyone who works anywhere with a sexual harassment policy knows this: you are given extensive education when you start, and then get annual reminders of the policy. If people treated the security of their infrastructure as the human problem that it actually is, and not as the technological one that they want it to be, then I would be willing to bet you would see huge increases in the quality of computer security in the workplace.

      This will of course never happen, because it is inconvenient. As long as managers continue to think of security as a tech problem, the problem will never get solved. The time has come for managers to stop thinking of the construction of a secure network as a tech issue solvable by automation and expensive software, and to start thinking of it as a human issue that requires education and practical training. In this way you will be better able to build a network that is both secure and usable.
  • by isaacwith2as ( 543482 ) <isaacwith2as&hotmail,com> on Wednesday December 04, 2002 @07:28PM (#4814726)
    and other confusing concepts and they'll quickly go into Dummy mode and do whatever you tell them to. For this reason we should make it all more complex, so that those who understand will have an easier time controlling those who don't.
  • As it says, most people will work with something that is natural for them. But most computer users are used to current interfaces and ways of working, so will this mean just adapting the new security to existing ways of doing things?
  • 1 in 40 seems fair (Score:2, Insightful)

    by JJAnon ( 180699 )
    Out of 40 papers at the conference, there will be just one paper that looks at human factors

    There is a reason for this: the conference is about security. A large part of security is defining new ways of keeping systems secure, and usability is just one of the myriad factors. I think having even this one paper is overkill - I read it and don't think that it contributes very much.

    Before you mod me down as flamebait, read the paper and honestly ask yourself if it tells you anything new at all.
    • by Salubri ( 618957 )
      I disagree on one small point... If you make things too difficult, people start to write things down. The person who needs to write stuff down is not going to think about a secure place to hide this information; they're going to want it right by their computer. Anyone who can get to that computer can then also get past the security, making it useless. Even if it's a very non-informative talk that takes 5 minutes, as long as it reminds developers of this small point it's done its job.
    • by orange7 ( 237458 ) on Wednesday December 04, 2002 @08:20PM (#4815075)
      Doesn't Grandma deserve security too?

      I mean, you should in turn ask yourself why it is that, with all the impressive mathematical/technical progress made at such conferences, probably less than 0.1% of all computer users actively use such technology to protect their privacy. A company's sysadmin is probably using ssh at least, but the CEO's secretary isn't using *anything*. Even amongst a technophile audience such as slashdot readers, who has the patience to consistently use PGP?

      Last time I looked at the proceedings of a security conference, half the papers seemed to be on watermarking, and what a complete waste of time that appears to have been. With 20/20 hindsight they'd have been far better off concentrating on usability. Unfortunately the security field mostly attracts math geeks who just don't see such things as important.

      What's the point of having the most secure lock in the world if no one uses it, or most people can't figure out how to lock it?

      Judged from a purely consumer, average man/woman/and their collective dogs point of view, the security field has been an utter failure. (There are many other points of view from which it's been quite successful, of course.)

      A.
    • by El ( 94934 ) on Wednesday December 04, 2002 @08:56PM (#4815264)
      Historically, the vast majority of security compromises have been achieved through "human engineering", e.g. calling somebody up and asking them for their password; in very few cases have the technological measures actually failed. So it appears the human factor DOES require a lot more attention.
      • I totally agree... speaking from experience, you'd be amazed at what can be accomplished, e.g., with "dumpster diving". I'm not talking about small accounts either; more like department-wide accounts at companies and universities.

        Let's not forget that the whole idea of computing and/or networked computing was invented by humans, for humans. Does this not mean that it is subject to human failings?
  • by Slurpee ( 4012 ) on Wednesday December 04, 2002 @07:37PM (#4814783) Homepage Journal
    The lack of ease of use in security systems is often their greatest security flaw. Good security measures often make themselves hard to use, and thus undermine their own security. IE:
    - 10-character, non-dictionary, alphanumeric passwords. Safe, but you can't remember them, so you write them on a Post-it note.
    - Multiple levels of security. This means multiple usernames and passwords, which means the user keeps a list of them in their Palm Pilot/wallet.
    - Secure systems invite back doors (same as leaving a key under your doormat... stupid, but very useful if you lock your keys inside).

    Some companies base their security around no one knowing anything about it. Microsoft is trying to do great things with UI ease of use, but in doing so they destroy security.

    If you do *not* have an easy-to-use high-security system, people *won't* use it! And if they don't use it, it is totally useless. People will always pick ease of use over security. They will pick IE and OE because things "just work", they will write their passwords on Post-it notes on their screens because they can't remember them, and they will leave keys under doormats.
    • - 10-character, non-dictionary, alphanumeric passwords. Safe, but you can't remember them, so you write them on a Post-it note.

      Not really... my technique is to use easy-to-remember phrases, converting applicable letters to numbers 1337-style.
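
      For instance (a toy sketch with standard tr; the phrase and the substitution table are just examples, not a recommendation):

      # 1337-ify a memorable phrase by mapping a few letters to digits.
      echo "my voice is my passport" | tr 'aeiost' '43105+'
      # prints: my v01c3 15 my p455p0r+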
    • Let's face it. Most IT people think better security is locking you out of your account if you mistype your password, and forcing you to change your password frequently. Works fine for the kind of person who always pays with exact change.

      A.
    • Passwords (Score:1, Informative)

      by Anonymous Coward
      Jef Raskin [jefraskin.com], in his book "The Humane Interface", provides an answer to the username/password problem.

      Firstly, no username. People know their own name better than any other word. Trying to give them another one is an exercise in futility. Usernames are frequently very easily guessable, and if all the system's passwords are unique, unnecessary.

      Passwords should be system-assigned, firstly to ensure uniqueness, and secondly to make damn sure that they are drawn from an appropriately large set of possibilities. One set which is quite easy for people to remember but incredibly large is the combination of three randomly selected nouns. For example: BeachballTruckWaterpipe.

      The set of possibilities is vast, almost certainly larger than the set of all 8-character alphanumeric strings, for example. It's not hard for a person to memorise something like this, so they won't have to write it down.
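
      A minimal sketch of such a generator, assuming a word list at /usr/share/dict/words (the path varies by system; a real implementation would want a nouns-only list and a stronger random source than awk's clock-seeded rand()):

      #!/bin/sh
      # Concatenate three words picked at random from the system word list.
      awk 'BEGIN { srand() }
           { words[NR] = $0 }
           END { for (i = 0; i < 3; i++) printf "%s", words[int(rand() * NR) + 1]
                 print "" }' /usr/share/dict/words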
      • Re:Passwords (Score:2, Interesting)

        by Slurpee ( 4012 )
        Firstly, no username. People know their own name better than any other word. Trying to give them another one is an exercise in futility. Usernames are frequently very easily guessable, and if all the system's passwords are unique, unnecessary.

        Passwords should be system-assigned, firstly to ensure uniqueness, and secondly to make damn sure that they are drawn from an appropriately large set of possibilities.


        Sounds good... but if someone can't remember a username based upon their own name, how can they be expected to remember a system-assigned password?

    • You say:

      Some companies base their security around no one knowing anything about it. Microsoft is trying to do great things with UI ease of use, but in doing so they destroy security. You also use the term "IE" about ten times.

      Worse, the 10 Principles linked above are demonstrated with Windoze! Ah, what a waste. The trade-off between ease of use and security is fictional. Best practices, such as unprivileged user accounts, are just as easy to implement as an email account that automatically executes code sent by strangers. Sadly, many people think that M$ is the path of least resistance and that if only it can be secured, the world will be better off.

      The problem of security in M$ apps is the distribution method - closed source. This can not be fixed and so Windoze can not be made secure. Don't take my word for it. Allow me to quote this excellent summary from Sonnenreich and Yates's "Building Linux and OpenBSD Firewalls", John Wiley and Sons, 1999. It was true then, M$ has admitted most of it by now, and it will be true tomorrow because the model is the same:

      The Windows platform was never built with security in mind; security was added as an afterthought. There are fundamental weaknesses in the operating system that can't be cured with a bandage. For example, background processes can run in a way such that they cannot be detected. ... Part of the problem stems from the fact that the vast majority of Windows packages are closed source. This means that only the original developers can identify and fix bugs in the software. Most Windows-based software products that aren't compiled directly from GNU-licensed Unix source code are security disasters.

      To make matters worse, companies confuse issues by claiming security through strong encryption. While the data in transit might be secure, that has nothing to do with whether the program is secure on a fundamental level. For example, encryption keys and passwords are often stored on disk so users don't have to type them in every time. Regardless of how complex the scheme, the original information can always be recovered if the password is stored on disk and can be recovered by the program without user input.

      Many Windows applications are developed using common development libraries, most of which come from Microsoft. Hackers have repeatedly found insecurities such as buffer overflow conditions in these libraries. It becomes trivial to determine if an application has been built using such a library. ... These libraries have provided hackers with a reliable and consistent set of exploits.

      LapLink, NetMeeting, PC Anywhere and other remote session control systems for Windows all rely on proprietary, unpublished transport mechanisms to create an aura of security. While you may be unable to determine what these programs are doing with your data, rest assured that hackers know exactly how the data are being transported... Since nobody can see the source code, and since only the vendor can create and distribute fixes, these programs are a hacker's dream.

      Add to this the fact that M$ charges an arm and a leg for said development libraries while refusing to be responsible for them and you start to understand why there's a new IE hole every month and how those holes can be persistent and how binary "patches" can introduce just as many problems as they fix.

      This flawed distribution model makes Windows the path of highest resistance for securing PC desktops. Microsoft's entire business model of crushing competition and devouring the profit from any and all innovation brought to the masses depends on it, and they are unwilling to change. Their "Shared Source" initiative, MSDN and all, fails to share the libraries at fault. Microsoft steadfastly refused to share those libraries throughout their conviction and "remedy" for anti-competitive practices. Time and time again they have used the power of their position to abuse their users instead of following better practices, and I have little faith they will do any better in the future. Follow M$ at your peril.

      • Yes, God forbid they demonstrate security models on Windows, the operating system used on 90% of the computers on the planet. Lord knows we wouldn't want THEM to be secure ...
  • by burgburgburg ( 574866 ) <splisken06NO@SPAMemail.com> on Wednesday December 04, 2002 @07:40PM (#4814803)
    and I disagree with them entirely.

    I actually do disagree with the first: making the path of least resistance the most secure often leaves the non-obvious approaches open to exploitation.

    • As opposed to letting the path of least resistance be open to exploitation?
      In a perfect world all paths to a result would have the same measure of security, without an option for anything less secure.
    • by j7953 ( 457666 ) on Wednesday December 04, 2002 @08:47PM (#4815216)
      I actually do disagree with the first: making the path of least resistance the most secure often leaves the non-obvious approaches open to exploitation.

      Have you actually read the paper? If you have only read the ten one-sentence principles, you might have misinterpreted that one. The authors do not advocate offering an alternative, non-natural way of doing things that is insecure. In fact, that statement is not even about offering multiple ways to achieve the same task (e.g. "menu item or keyboard shortcut," or "dialog or wizard"). The idea is simply that using the system securely should be easier (i.e. less resistance) than using the system in an insecure way. In other words, whenever you're about to do something that is not secure, you'll face resistance, so taking the path of least resistance will be most secure.

      I think a huge part of the principle could be more simply described as "secure by default," which I hope everyone will agree with. Another important goal mentioned in the paper is "to keep the user's motivations and the security goals aligned with each other," i.e. you want to make sure that while working with your software, the user will never think about granting certain permissions simply because that would be more convenient.

  • by CySurflex ( 564206 ) on Wednesday December 04, 2002 @07:44PM (#4814827)
    I already communicated to my sysadmin that my top security usability concern is that the post-it note with my password on my monitor peels off after about two months. We need better adhesives on our post-it notes.
    • PGP Corporation will soon be releasing military-grade gaffer's tape to address this concern.
    • Passphrase. Pull a book, preferably an older one, off the shelf next to you and highlight an interesting sentence to memorize. Now use the first (or some other numbered) letter of each word in that sentence as your password. Phrases are easy to remember, and yours is published, sitting right next to you.

      I know your Post-it notes were a joke, but I just had to pass along what seems to be a good practice. There are so many bad practices out there already: pet names and kid names, so easy to guess, and silly web-based password generators, which leave themselves in the cache.
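
      A quick sketch of the first-letter trick (the sentence here is a placeholder, of course):

      #!/bin/sh
      # Reduce a memorable sentence to the first letter of each word.
      echo "the quick brown fox jumps over the lazy dog" |
        awk '{ for (i = 1; i <= NF; i++) printf "%s", substr($i, 1, 1)
               print "" }'
      # prints: tqbfjotld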

      • Here's what I do (as root) for passwords:

        #!/bin/sh
        # Base64-encode 7 random bytes, then slice a 9-character password
        # out of the flattened output (columns 21-29).
        word=`head -c 7 /dev/random | uuencode -m -`
        echo $word | grep '[[:alnum:]]' | cut -b 21-29

        There are probably better ways to do it, though; if anyone here is an sh wizard, then by all means please reply, and I thank you in advance for your ideas.
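
        One possible alternative (only a sketch, assuming /dev/urandom and a head that accepts -c, as on GNU and BSD systems):

        #!/bin/sh
        # Filter random bytes down to alphanumerics and keep the first nine.
        head -c 256 /dev/urandom | tr -dc 'A-Za-z0-9' | head -c 9
        echo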
  • The most natural way to do any task should also be the most secure way.

    I am very confused by that suggested principle. The most natural way to delete a file is to "rm file.txt". So that should be the most secure way to delete the file? What if I delete the file via my file manager? Should that be less secure?

    I'm sure there are more details buried in the PDF files on their site... but the short summary of that principle sure confuses me.

    • He is saying make the easy way the secure way. When I send an email, the most natural way to do it is to click the send button. If there is a menu option hidden three menus deep to encrypt and send, it will not be used. Therefore, the send button should do the more secure function.

  • personally (Score:1, Interesting)

    by Anonymous Coward
    I think user-centered design is weak. That's what they do in Information Science, Psychology, and a little in the MIS field. What has it brought us? Not much. Not to flame, but concentrate on the system first; that's where you can make the most changes the fastest. Users are fickle, and time spent on user design is time wasted that could be spent on securing the system.
    • And when you finally have a theoretically secure system, but one whose interaction design is intrusive enough that people abuse or don't use the security features you've so carefully designed, what have you gained?

      User-centered design is about making sure that all the time you spend perfecting your system design has value and ends up being useful for the intended audience. Sure, from a far enough overhead view user-centered design looks like stating the obvious; this is true for many fields. In reality there is ample evidence that it isn't, because man, do some interfaces suck for the intended audience.

      Only by explicitly stating what everyone thought was obvious can we move beyond it. This is in a sense the essence of science.
  • ...but I find it really annoying that I haven't found a program that you can simply run on a remote system you have access to, and that will tell you exactly how you need to configure your remote keys.

    Maybe such a program exists? Maybe a more specific one for OpenSSH exists that will even give you the exact commands to feed to ssh-keygen.

    • This comment's parent:
      ...but I find it really annoying that I haven't found a program that you can simply run on a remote system you have access to, and that will tell you exactly how you need to configure your remote keys.

      Maybe such a program exists? Maybe a more specific one for OpenSSH exists that will even give you the exact commands to feed to ssh-keygen.


      is not offtopic. It's an example of a security issue that stems from the lack of an application for OpenSSH (or others) to address usability.
      • >> maybe a more specific one for openssh exists that will even give you the exact commands to enter to ssh-keygen

        um, man ssh-keygen?

        • Which you have to copy and paste, or upload and cat.

          Why the hell isn't there an ssh-keygen --upload? You could say ssh-keygen -t rsa --upload foo@example.com and it would create your key, stick it in the right place on your computer to be used, ssh over there (asking for the password one last time), and stick the key in the right file.

          Seriously, folks, this is rather stupid. If you're worried about where to put it on the remote system, then perhaps sshd needs an upgrade where you can hand it the key and it knows what to do with it. (After all, it has to know where it's looking.)

          Because, right now, it's three steps... generate the key, scp the key, and ssh over there to stick it at the end of the file.
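
          For what it's worth, the three steps can at least be collapsed into one pipeline (a sketch assuming stock OpenSSH paths; foo@example.com is the placeholder from above):

          #!/bin/sh
          # Generate the key pair, then append the public half to the remote
          # authorized_keys file over a single (password-prompting) session.
          ssh-keygen -t rsa -f ~/.ssh/id_rsa
          cat ~/.ssh/id_rsa.pub | ssh foo@example.com \
            'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'

          Some OpenSSH distributions ship an ssh-copy-id script along these lines.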

  • by Salubri ( 618957 ) on Wednesday December 04, 2002 @07:58PM (#4814942) Journal
    What seems like an eternity ago, a friend and I were the admins of a beowulf cluster for a university physics department. Often the users themselves would destroy any system security before things like the interface were even an issue. I can't tell you the number of times I'd walk into the lab where the cluster was stored only to find that someone using the machine had logged into the mother node as root, left the machine sitting open to the world in an unlocked lab for 8-hour spans, taped the root password to the monitor, and then insisted that the highest priority we face be tightening up security, because they were having issues that the firewall wasn't detecting. ~growls~ At any rate... back to the debate between usability and security.

    In my opinion, it's rare that I've seen anything blend robust power with a simple user interface. Usually, in order to make things "more intuitive," a lot of the options have been stripped away from the user. The logic behind this is that if the user has fewer choices, there's less the user has to know or think about when configuring something. On the other side of the coin, I've seen programs that are completely customizable, but you spend three days RTFMing trying to figure out why it doesn't work, only to find out that the hexadecimal error message it's spitting out is because there is a hidden space where there shouldn't be, or some other small syntax error in a 30-page text configuration file.

    The best ways that I've seen usability and functionality blended (which is the same as usability vs. any function, such as security) have been when the simple choices were offered, but with an option right next to each choice to allow for greater customization.

    Anyways, I've probably ranted enough for now. Best get back to work.
  • by SPautz ( 94799 ) on Wednesday December 04, 2002 @08:01PM (#4814955) Journal
    There seems to be a lot of confusion in the comments about the concept of usability. Usability is not the same as usefulness. Usability is the extent to which the usefulness of your program is achievable by users. This applies to all users, from novice to expert. A program may have a myriad of useful features, but what's the point if nobody uses them?

    This is what the article is trying to get across: building security measures into tasks is pointless if the users don't utilize those measures. If the most natural way to do a task is insecure, then people will tend to use that insecure method. Making security quickly and naturally achievable by all users will result in more secure systems; the article is trying to set some guidelines for how to accomplish this.
  • 10 for 10's sake (Score:3, Insightful)

    by A non moose cow ( 610391 ) <slashdot@rilo.org> on Wednesday December 04, 2002 @08:12PM (#4815022) Journal
    These "10 principles" are fair as a whole, but it seems like they were more concerned with making the number of principles be 10 than they were with making the principles be unique, distinct, and useful.

    Look at how similar these are:
    -- Visibility. The interface should allow the user to easily review any active actors and authority relationships that would affect security-relevant decisions.
    -- Identifiability. The interface should enforce that distinct objects and distinct actions have unspoofably identifiable and distinguishable representations.


    It would be very difficult to follow one of these without following the other by default. Then, when you get to the principle of Expressiveness, you get a two-fer.

    Most of the principles overlap each other so badly that I'm not sure how they decided to draw the lines.

    I guess they were going for some "magic number" that would feel powerful... like the 10 commandments. I would be embarrassed to have my name associated with that list.
  • by knowbody ( 183026 ) on Wednesday December 04, 2002 @08:29PM (#4815124) Journal

    The principles seem primarily oriented toward interactive use of a security product and helping to explain how the product works in a GUI sort of way. I can easily see how they would be applied to explaining trust in a PGP situation.

    However, most users (depends on the audience of course) aren't interested in security being interactive. They want it to be transparent just like all the other nuts and bolts. The only principle that really captures that idea is "path of least resistance".

    I think a good example of the maximum interactivity of a security system might be the military's encrypted telephone lines. Press the "encrypt on" button, call, say "this is a secured line" and start talking. (I haven't actually used these systems, so if that is a misunderstanding, please say so).

    I think security could easily be made more "under the hood". Look at the whole DRM thing... pretty under the hood. Imagine a system like that, written to secure the end-user rather than the manufacturer.

    • by Jim McCoy ( 3961 ) on Wednesday December 04, 2002 @08:51PM (#4815239) Homepage
      While this work does apply almost entirely to GUI issues, that is because the GUI is the tool through which 99.99% of the world uses a computer. For related work that shows some better examples by the same author, I would suggest that you take a look at this paper [berkeley.edu] (sorry for citing it, Ping...), which provides some nice examples of how a GUI that explains the security implications of certain preference settings can be used for an mp3 player, etc. This paper is written from the capability-semantics perspective, so the standard Unix security model is already outclassed, but it will give you a better idea of how security and UI are related.
    • However, most users (depends on the audience of course) aren't interested in security being interactive. They want it to be transparent just like all the other nuts and bolts. The only principle that really captures that idea is "path of least resistance".

      No, you're wrong. The interaction is already taking place; the paper is about how to add security to those ALREADY EXISTING interactions!

      I think that the problem that goes unstated is that /usually/ adding security makes things more difficult for the user. Security people like to add whiz-bang features that say HEY you're secure! Wow! Isn't that great! Click here, do this, type in that, blah blah blah.

      Because, sure, you want to know you're secure, right? Yeah, of course you need to know when you are, and when you are not.

      So making it totally transparent is not some kind of simple answer. You always need the This Is Secure light or button or whatever like you point out.

      simon


    • I think a good example of the maximum interactivity of a security system might be the military's encrypted telephone lines. Press the "encrypt on" button, call, say "this is a secured line" and start talking. (I haven't actually used these systems, so if that is a misunderstanding, please say so).


      Yes and no. Using a STU-III (my sole experience - I have no idea if this system is current) is fairly simple. Enabling the crypto feature is as simple as turning a key-like device, so it's almost like hitting a button. But that entirely ignores the issue of key management (the key-like device is rather like a civilian smartcard).

      It might be worth noting that this makes a rather interesting example. Conversations (and data - the STU-III can handle FAX and MODEM traffic too) are only protected when the encryption mechanism is engaged. Otherwise, it's a standard phone, which makes sense, since most phone devices are not STU-IIIs. But it also relies on the end-user(s) for security.

      The perceived weakness of the STU-III is not its encryption. The issue is that STU-III units are identified and heavily monitored for intelligence during the unencrypted conversations before (or after) encrypted communications. Information is gathered before users decide to engage the STU-III's encryption mechanism.


      I think security could easily be made more "under the hood". Look at the whole DRM thing...pretty under the hood. Imagine a system like that that was written to secure the end-user not the manufacturer.


      And that's a laudable goal. However, keep in mind that invisibility is no holy grail either.

      CSS (the DVD scrambling system) was very "under the hood". Yet a mistake in one vendor's implementation has made the system all but worthless.

      Microsoft's code signing is another good "under the hood" example. But mistakes in issuing certificates, as well as recent problems with faulty ActiveX controls, show that this system also has issues.
  • by El ( 94934 ) on Wednesday December 04, 2002 @08:51PM (#4815242)
    They seem to have forgotten at least one principle: the user must NOT be an idiot.
  • better components... (Score:3, Interesting)

    by kevin lyda ( 4803 ) on Wednesday December 04, 2002 @09:17PM (#4815360) Homepage
    one thing that needs to be done is better components. i have a gpg key pair and an ssh key pair. i should really have just one. or at least have them on the same keyring. wouldn't it be nice if you could set up sshd to only allow key verified connections and only keys that had been signed by one of the admins?

    gpg really needs to be made into a library, with the commandline as one user of that library. it would be easier to integrate gpg into mail/irc/web clients if you could make well defined api calls. something akin to ssh-agent might be nice for gpg. it would be nice if you could easily encrypt/sign irc or email traffic with your gpg keys. mutt does a pretty good job, but there are usability holes. it would also be nice if you could do things like sign /. posts.
    • oh, forgot the main point. the plus side to better components is that it would allow others - those with more interest in the usability side instead of the crypto/math side - to experiment with ways of making security more usable. if gpg were available as a well documented library, other developers could use it more easily - just as they already use gtk, glib, libc, and even openssl.

      actually openssl is a good example, think of how many places that has been added (my redhat system has 37 packages requiring libssl.so.2 - 48 require libcrypto.so.2). a large number of things use gpg already on my box - mutt and rpm spring to mind. but it could be used for lots of other things... nautilus; link it into qt/gtk for use in file load/save dialogs. anything that loads executable files from the net (plugins for xchat, mozilla, the gimp).

      given a libgpg, i'm sure many projects could do cool shit(tm).
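
      as a sketch of the difference: today every integration has to fork gpg and parse its text output, roughly like this (the key id is a placeholder); a libgpg would replace the fork-and-parse with api calls.

      #!/bin/sh
      # what "integrating gpg" looks like without a library: fork the
      # binary and capture its armored output on stdout.
      echo "hello, world" | gpg --armor --sign --local-user alice@example.com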
  • by Anonymous Coward

    www.cgisecurity.com/lib [cgisecurity.com]
  • by Beryllium Sphere(tm) ( 193358 ) on Wednesday December 04, 2002 @11:09PM (#4815936) Journal
    The article depends heavily on distinguishing data ("objects") from code ("actors").

    The distinction between an "actor" and an "object" can only be clear in a primitive system.

    Get sophisticated, and the difference between code and data starts to blur.

    Is your database report template an object, or an actor? It's both.

    The blurring of the code/data distinction shows up in a lot of recent exploits. ILoveYou.VBS is an actor that looks like an object. That one was avoidable, but what about a GET or POST? They're data which causes code to run on the web server. From another point of view their content is a short web server program.

    Sure, you can dream about analyzing and controlling the capabilities of data-which-causes-actions. But if you ever let it achieve Turing equivalence you can't even guarantee whether it halts.
    • >>The distinction between an "actor" and an "object" can only be clear in a primitive system.

      Not primitive; well designed.

      After all, isn't that the point? If you don't know what is code, you can't tell what the code is doing and you can't tell if the code is secure.

      I'll grant that security is not always easy. Users are always trying to put code where only data should be. If you ask for a filename in a BASH script, they'll enter "$(rm -rf /)". If you ask them for their username on a web page, some crazy punk will enter "' and where sex = 'F' and state='petrified" to try to find all the petrified ladies...

      But that's the tricky bit: Making sure that all the data stays data...
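
      A small sketch of the shell half of that problem (the prompt and filenames are placeholders):

      #!/bin/bash
      # Unsafe: eval'ing a command string built from input lets data run as
      # code -- a reply of "$(rm -rf /)" would execute.
      read -p "File to show: " fname
      eval "cat $fname"
      # Safer: quote the variable and end option parsing, so the reply stays
      # a filename and nothing more.
      cat -- "$fname"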

  • It puzzles me that so many people have posted comments describing one or another common usability-related security problem that the paper discusses, but without even mentioning the solutions the paper offers.

    Twitter says:

    Best practices, such as unprivileged user accounts, are just as easy to implement as an email account that automatically executes code sent by strangers.

    On Microsoft Windows or Linux, any code you run executes with your privileges by default. Running code without, say, the privilege to send further email requires great effort with tools like Subterfugue [sourceforge.net].

    Twitter continues:

    The problem of security in M$ [sic] apps is the distribution method - closed source. This can not be fixed and so Windoze [sic] can not be made secure.

    Licensing Microsoft Windows freely would not solve its security problems; many of them exist in Linux, too, just as the quote you inserted explains in detail.

    Beryllium Sphere says:

    The article depends heavily on distinguishing data ("objects") from code ("actors").

    The paper doesn't really depend on that distinction, at least not in the way you think. One could call a process an actor, but an executable file an object, for example.

    The distinction between an "actor" and an "object" can only be clear in a primitive system. ...

    The blurring of the code/data distinction shows up in a lot of recent exploits. ILoveYou.VBS is an actor that looks like an object. That one was avoidable, but what about a GET or POST? They're data which causes code to run on the web server. From another point of view their content is a short web server program.

    Sure, you can dream about analyzing and controlling the capabilities of data-which-causes-actions. But if you ever let it achieve Turing equivalence you can't even guarantee whether it halts.

    Well, if you run it with a ulimit on cputime, you can guarantee that it will halt in at most 10 CPU seconds. :) And if you run it in a context where it has no filesystem access, you can guarantee it won't delete any files; and if you run it in a context where it has no network access, you can guarantee it won't send email. Etc.
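
    For instance, in any POSIX shell (a sketch; ./untrusted is a placeholder):

    #!/bin/sh
    # Run the untrusted program in a subshell capped at 10 CPU seconds; the
    # kernel kills it with SIGXCPU if it does not halt on its own.
    ( ulimit -t 10; exec ./untrusted )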

    A non moose cow wrote:

    Look at how similar these are:

    • Visibility. The interface should allow the user to easily review any active actors and authority relationships that would affect security-relevant decisions.
    • Identifiability. The interface should enforce that distinct objects and distinct actions have unspoofably identifiable and distinguishable representations.

    It would be very difficult to follow one of these without following the other by default.

    Visibility means you can see the things that matter; identifiability means the things you can see don't look the same. They seem quite orthogonal to me.

    Someone asked, "What, you mean rm file.c should be the most secure way to delete a file?" And Anonymous Coward responded:

    If you are Bob Worker Bee, trying to discard that spreadsheet with the CEO's salary in it, then yes, the default policy on your machine should be secure (three-pass, whatever) deletion. If you're Bob, you don't know that rm has an option for 'secure' delete, and/or that if you just rm, it's still recoverable, and so forth.

    Superb example.

    I urge everyone to read this paper, at least. Thinking about security in the terms it suggests will change the landscape of computer security, and it could lead to solutions to the most persistent problems of both computer security and usability.

  • I think that one of the best solutions for e-mail is to assign a 'user' to anyone sending a message; this 'user' will not have privileges to run code, send e-mail, or do other malicious things on the machine where the mail has been received.

    Different e-mails will have different users: for example, if I receive a letter from my wife, I would have set up her e-mail account on my computer to be allowed to execute code (of course only upon receiving the proper password from her e-mail, so nobody can impersonate her). On the other hand, any spam or other unknown e-mail would be assigned to user 'anonymous', who doesn't have any rights whatsoever other than sending data to me.

    Another solution for long passwords is for the PC to have a slot for a card on which all my user info (including password) is stored. The PC would then load the O/S and the proper account according to the information stored on the card. Of course the problem of stealing the card remains, but that is a generic problem concerning all types of cards (only genetic identification would solve the identity theft problem). The card would solve the usability problem of long passwords: the user would not have to do anything other than insert the card.
  • I'm not a big fan of Jakob Nielsen. (Especially ever since I met him at CHI2002. Perhaps "snubbed" would be a better term.)

    Still, he has contributed a great deal, and he has an Alertbox article mentioning some of these same usability considerations as they relate to security. The article dates back to November 2000. Like many other postings here, it is mainly focused on password policy and use.

    The Alertbox article [mondosearch.com]
  • User Interaction Design for Secure Systems [berkeley.edu]

    by Ka-Ping Yee [mailto]
    Computer Science Department
    University of California, Berkeley

    Abstract:
    The security of any computer system that is configured and operated by human beings critically depends on the information conveyed by the user interface, the decisions of the computer users, and the interpretation of their actions. We establish some starting points for reasoning about security from a user-centred point of view, by modelling a system in terms of actors and actions and introducing the concept of the subjective actor-ability state. We identify ten key principles for user interaction design in secure systems and give case studies to illustrate and justify each principle, describing real-world problems and possible solutions. We anticipate that this work will help guide the design and evaluation of secure systems.

  • About the fact that one single paper investigates human interaction: you have to understand how the computer science academic world works.

    Conferences (at least the quality ones) and journals publish refereed papers. This means that the papers have to be reviewed by peers who try to gauge their soundness, interest and novelty.

    Needless to say, papers about human interaction are difficult to evaluate. Many of them are just lists of "pious principles". Few of them are actually backed by real-life studies (which are difficult to conduct well). You would have to take representative samples of users, ask them to use various systems, watch their behavior, and ask their opinions.

    Also, that kind of "soft" science, verging on social studies, is traditionally ill-considered in certain academic circles, which suspect it of being lots of talk and little scientific content.

    All this makes it unsurprising that few such papers get into security conferences.
