
The Humane Interface

Posted by timothy
from the killing-you-softly dept.
Reader Torulf contributed the review below of Jef Raskin's The Humane Interface. Though the book does not spend much time on Open Source software, it emphasizes ideas that every programmer probably ought to bear in mind -- at least if he wants his programs to have users. (And yes, Raskin takes explicit exception to some UNIX truisms.)

The Humane Interface: New Directions for Interface Design
author Jef Raskin
pages 233
publisher Addison-Wesley
rating 9
reviewer Torulf Jernström
ISBN 0-201-37937-6
summary A thought-provoking book on how to design user interfaces.

Introduction

The Humane Interface: New Directions for Interface Design, by Jef Raskin, the creator of the Macintosh project, provides an interesting read chock full of controversial ideas. As the book's full title suggests, Raskin focuses mainly on how things ideally should be made, rather than offering advice and recipes that can be immediately applied to problems in today's systems.

Don't Think!

The approach taken by Raskin stems from his definition of a humane interface: "An interface is humane if it is responsive to human needs and considerate of human frailties." In practice, this means that the interface he suggests is based on ergonomics and cognetics (psychology).

Basically, the idea is that we can do only one thing well at a time consciously. Most of us can walk, chew bubblegum, hold our bladder and speak (semi-intelligently) with a friend, all at the same time. This is because the only thing we are consciously doing (the only thing we are concentrating on) is the semi-intelligent babble. On the other hand most of us run into trouble when trying to study calculus at the same time we're hitting on a sexy lady (make that a handsome man for those 5% of ladies here at Slashdot, or some sort of hermaphrodite freak for those 11% with doubts about their sex).

The point of this is that the one thing we're consciously working on should, with as few interruptions as possible, be the content we are producing with the computer, not the computer itself. That is, all interaction with the system should, after the initial learning phase, become habitual or automated, just as we can walk or drive a car without ever consciously thinking about it. This way we can maximize productivity and concentrate on the content of our work.

There's Only One Way to Do it

For commands to become habitual as quickly as possible, some interface guidelines are given. First of all, all modes (differing responses to the same input based on context) should be eliminated: the system should always react in the same way to a given command. Modes generate user errors whenever the user isn't paying attention to which mode the system is currently in (and the user should not have to pay attention to the system's current mode, only to the content he or she is producing). An example of a mode error happened to me while writing this review, just a few lines up: I unintentionally left overwrite mode on when I thought I was in insert mode, and thus overwrote a word by mistake. No big deal, of course, but I had to take my mind off formulating the sentence to figure out what had happened -- just for a second, but long enough to derail my thoughts, and that derailing should never happen.

Another way to speed the transition to habitual use is monotony. You should never have to think about which way to do something; there should always be one, and only one, obvious way. Offering multiple ways of doing the same thing makes the user think about the interface instead of the actual work to be done. At the very least, it slows down the process of making the interface habitual.

Unorthodox Suggestions

There are, of course, a lot of other suggestions in the book, some expected, some very surprising and unorthodox. The more conventional wisdom includes noting that the interface should be visible; that is one of the downsides of the otherwise efficient command-line interface: you cannot see the commands at your disposal just by looking at it. GOMS, a method for evaluating keystroke efficiency, and Fitts' law (the time it takes to hit a GUI button is a function of the distance from the cursor to the button and of the button's size) are also among the less surprising ideas in the book.
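Fitts' law can be made concrete with a small calculation. Here is a minimal sketch in Python using the common Shannon formulation; the constants a and b below are purely illustrative (in real studies they are fitted empirically per device and user):

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Predicted time (seconds) to hit a target of the given width at the
    given distance, per the Shannon formulation of Fitts' law:
    T = a + b * log2(D/W + 1). The constants a and b are illustrative."""
    return a + b * math.log2(distance / width + 1)

# A big, close button is faster to hit than a small, distant one.
near_big = fitts_time(distance=100, width=80)
far_small = fitts_time(distance=600, width=16)
```

This is why, for example, targets on screen edges (effectively infinite width in one direction) are so quick to acquire.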

The more unorthodox suggestions include Raskin's proposal for a universal undo/redo function, not just in the different applications but in the system itself. The same gesture would always undo, no matter what application or setting you last touched.
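Such a system-wide undo could be prototyped as a single shared command history, regardless of which part of the system produced each action. A toy sketch (the class and method names here are my own invention, not from the book):

```python
class UndoHistory:
    """One global undo/redo stack shared by every part of the system,
    so the same gesture always undoes the last action, whatever it was."""

    def __init__(self):
        self._done = []    # entries: (description, undo_fn, redo_fn)
        self._undone = []

    def record(self, description, undo_fn, redo_fn):
        self._done.append((description, undo_fn, redo_fn))
        self._undone.clear()   # a fresh action invalidates the redo chain

    def undo(self):
        if not self._done:
            return None
        entry = self._done.pop()
        entry[1]()             # run the stored undo function
        self._undone.append(entry)
        return entry[0]

    def redo(self):
        if not self._undone:
            return None
        entry = self._undone.pop()
        entry[2]()             # run the stored redo function
        self._done.append(entry)
        return entry[0]
```

Any "application" would record its actions into the same history, so one gesture always reverses the most recent change, wherever it happened.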

The really controversial idea, though, is to abandon applications altogether. There would be only the system, with its commands, and the users' content. Even file-hierarchies should vanish along with the applications, according to Raskin.

Conclusions

The Humane Interface focuses on the principles of user interfaces and the theory behind them. The ideas presented generally make sense once you consider the background and arguments Raskin offers for them, even if some seem very drastic when first encountered (I can hear Slashdotters firing up their flamethrowers after reading the end of the last paragraph). As mentioned before, the book does not provide many ready-to-use recipes, but it does provide good insight into the background of user interfaces, which the reader can apply to the project at hand.

Some related ideas were discussed about a year ago on Slashdot. The Anti-Mac paper discussed then came to pretty much the opposite conclusions from the ones Raskin presents (Raskin makes a case against separate beginner/expert interface levels). After reading both sides of the story, I'm inclined to believe more in Raskin's reasoning.

The only Open Source or Free Software the book mentions is Emacs, in a discussion of string searches. (The incremental model in Emacs is preferable to systems where the search does not begin until you click the search button.) I do, however, believe that these alternative interface models could be an opportunity for the open source community, and that Raskin's ideas are perhaps more likely to be implemented and tested in such an environment than by a Microsoft that would have to greatly simplify the thousands of commands MS Office consists of. I therefore warmly recommend this book to anyone doing software development, and I would love to see some of the ideas used in KDE or GNOME.
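The incremental model mentioned above is easy to illustrate: the result set narrows with every keystroke instead of waiting for an explicit "search" action. A toy sketch (not Emacs's actual implementation, just the idea):

```python
def incremental_search(lines, keystrokes):
    """After each keystroke, yield the query so far and the lines that
    match it -- the Emacs-style incremental model, as opposed to waiting
    for a search button press."""
    query = ""
    for key in keystrokes:
        query += key
        yield query, [line for line in lines if query in line]

commands = ["open", "opera", "print", "operate"]
steps = list(incremental_search(commands, "oper"))
# The candidate set narrows as the user types: "o" matches three
# commands, "oper" only two -- feedback arrives with every keystroke.
```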


You can purchase this book at Fatbrain.


Comments Filter:
  • by Anonymous Coward
    In case they started typing before the screen was ready, we captured the keystrokes which all popped into the cursor location at once when the screen went live.

    This happens with the Find command on the Mac. When you call it up, it takes a few seconds to load, but while it's loading you can keep typing and your keystrokes are captured and seamlessly placed into the Find window's text box. This is one of the most convenient things I know of - not having to wait to state your query. Of course, if it didn't have to draw the damn Sherlock GUI this wouldn't be necessary... but it's been around since the days when calling up any application was slow (back in System 7.5). A very nice feature in this context.
  • by Anonymous Coward
    A few quotes from a page on his site. [jefraskin.com]
    Perhaps the most important point is that most of what we do with computers are basically text-based applications.
    GUI interfaces are becoming VERY, VERY "cognitively intensive."
    If people weren't good at finding tiny things in long lists, the Wall Street Journal would have gone out of business years ago. Would you rather have the stocks listed fifteen to a page, each page decorated like a modern GUI screen?

    Here's the kind of idea that would break Gnome, or KDE away from the GUI pack (again, from JR's site):

    So we stored an image of the screen on disk so that when you turned on the Cat, that image promptly came up, which took less than a second. I knew from published experimental results that a person typically takes at least eight seconds to re-engage their thinking when coming into a new environment (e.g. moving from talking to a co-worker to using their computer). People stare at the screen for a few seconds, oblivious to time passing, regaining context. By the time they started working, the Cat screen was real (the only visual change was that the cursor started blinking).

    In case they started typing before the screen was ready, we captured the keystrokes which all popped into the cursor location at once when the screen went live. In practice (and we did a lot of user observation to find this out) this almost never happened. When it did it did not unduly upset users. In any case, it was a LOT better than having to wait a minute or two as with PCs and Macs.

    It would be possible, on today's computers, to quickly load a small routine to capture and display keystrokes so that you could sort-of get started or at least capture an idea while the rest of the system drifted in. Then you could paste what you've done where it belongs. But nobody seems to care. More's the pity.
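    The keystroke-buffering trick described above can be sketched in a few lines. A toy model of the idea (not the Cat's actual firmware):

```python
class BootBuffer:
    """Capture keystrokes typed before the system is ready, then flush
    them into the document, in order, once the screen goes live."""

    def __init__(self):
        self.ready = False
        self._pending = []   # keystrokes typed before the screen is live
        self.document = []

    def keystroke(self, ch):
        if self.ready:
            self.document.append(ch)
        else:
            self._pending.append(ch)

    def go_live(self):
        """The screen becomes real: buffered keys pop in all at once."""
        self.ready = True
        self.document.extend(self._pending)
        self._pending.clear()
```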

  • by Anonymous Coward on Thursday May 17, 2001 @08:42AM (#216466)
    I think something that a lot of people miss is the fact that the average computer user is very different from the average Slashdot reader or Linux user.

    A lot of users (my mom, for example) use their computer for three things: reading email, surfing the web and word processing (Word). Now tell me why any of these three tasks require a user to think about "a file". Tell me why any of these three tasks require the user to think about "C:\My Documents\WorkStuff" (or "/home/foo/docs/workStuff", if you prefer).

    These concepts are all hold-overs from an era where the people designing software were the only users of the software. If I'm a programmer, of course I'm going to design my shell so I can write shell scripts. Of course I'm going to give myself the ability to create complicated hierarchies -- that's how I think!

    Now, we have a lot of users of software who are not programmers and geeks. If we care about them using technology, we need to think about the easiest thing for them. This doesn't mean we have to get rid of command lines, we just have to come up with something else for users who do not want (or need) command lines. This is not a zero-sum game. The growth and popularity of technology is a win for everyone.

    To make this happen, a portion of the software community has to realize that they are designing software for people who are not programmers and who are not (gasp!) interested in technology for technology's sake. Some of us (but not all) need to get rid of the attitude exemplified by the following quote:

    "Linux *is* user friendly. It's not idiot-friendly or fool-friendly!"

    The majority of users are not "idiots" or "fools". Some are doctors, lawyers or artists who have chosen to concentrate in an area outside of computers. Saying they are idiots because they don't understand symbolic links is like an eye surgeon saying that a programmer is an idiot because he can't remove a cataract.

  • Here on Windows2000, an exception is made, and a reasonable one, for scrollbars. Menus and buttons do not track outside the control, but scrollbars do, and definitely should. There are two reasons for this:

    1) The control moves, but it's restricted to only one orientation.
    2) The control is very narrow (in the orientation perpendicular to its movement).

    Moving a vertical scrollbar for any significant distance requires moving the pointer in a nearly perfect vertical line. Try it and watch how the pointer wanders from left to right. To be usable, the scrollbar should track in order to accommodate the pointer's inevitable horizontal movement outside of its area. Imagine trying to move a 1-pixel-wide scrollbar if it didn't track! (Or just try XTerm with Emulate3Buttons and a pencil-eraser-type mouse for a reasonable simulation!)

    Things are a little different with a menu. When you move the pointer up and down to select a menu item, the "control" (in this case, the menu highlighting) does not need to track because the menu itself is wide enough to accommodate any horizontal wander. Note, however, that the menu itself still tracks: when the mouse leaves its area, the highlighting disappears but the menu does not. Why? It seems this behavior exists to enable nested menus. Chasing down a deeply nested menu item requires moving the pointer in a series of long horizontal lines punctuated by steps. The menu should track so that even if you drift off of a menu, you can go back and continue from the last "step".

    I believe that KDE and/or GNOME do NOT track nested menus, and this can be VERY frustrating.

    Finally, when you click on a button, the button doesn't move, and neither does the pointer -- or at least it doesn't need to. Therefore no tracking. On Windows the button correctly pops back up when the pointer leaves.

    BTW, this is all off the top of my head, just from messing around on Windows...
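    The tracking rule described above can be expressed as a tiny hit-test. A sketch with invented names and geometry, purely for illustration:

```python
def scrollbar_tracks(drag_active, bar_left, bar_right, pointer_x):
    """A vertical scrollbar keeps responding to vertical motion from
    anywhere on screen once a drag has started; only before the drag
    must the pointer be inside the bar's narrow horizontal extent."""
    if drag_active:
        return True   # tolerate horizontal wander mid-drag
    return bar_left <= pointer_x <= bar_right
```

A menu would use the opposite rule for its highlighting (no tracking outside its wide area), which is exactly the asymmetry the poster observes.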
  • How long does driving take to learn? ...to get behind the wheel of a potentially deadly vehicle, slap 'er in gear, and tear out across the interstate?

    It *does* take a while-- a few months, perhaps, to become proficient. Cars are simple because they have 4 or 5 controls, not counting simple things like lights and air conditioning. But it takes a few months to learn how to drive.

    Why is this? Could it be there is more to driving than just learning the function of 4 or 5 controls? Could it be that the interaction of other drivers fouls up the simple use of 4 or 5 controls?

    If a simple something with a single purpose, such as a car, takes weeks or months to learn how to handle properly, why shouldn't we expect the *same* learning curve for a multipurpose device like the computer?

    I don't expect my SO to know how to rebuild a carburetor. But at the same time, I *do* expect her to use the turn signals at the appropriate time, and know how to obey speed limits. I also don't expect her to park her car in someone's bedroom.

    Similarly, I expect her to know how to find a simple file she created last year, but managed to misplace. I expect her to know how to open photos, and email said photos to friends.

    One day, when computers are sufficiently intelligent, we can simply talk to them as if they were Sigfrid von Shrink. Until that day, it is a mark of intellectual laziness to refuse to take the time to learn a general tool that is so important to our lives.
  • I haven't read the book yet, but I did attend a presentation by Raskin last week, and he gave an example of why your suggestion -- visual cues indicating the current mode -- doesn't work.

    As stated in the review, people can only consciously concentrate on one thing at a time. They will be concentrating on the task they wish to perform in vi, rather than on the current text colour and by extension, the mode the program is in.

    His specific example was of a study he did with experienced users of a particular CAD program. This program has several selection tools, all indicated by different, distinctive mouse pointers. Those are modes.

    Users invariably made the same error repeatedly, even with experience: they did something with a specialized selection tool, then moved to select an object with the regular selection tool without switching to it first, even though they were looking right at the visual mode indicator the whole time.

    The users were focussing on the task of selecting an object, not on what selection tool (mode) was currently enabled. Adding your visual cue to vi won't help any; people will still get it wrong because they won't be paying attention to the visual cue.

    Besides, wouldn't that mess up syntax highlighting?
    --
    Change is inevitable.

  • Grab cygnus toolset for win32 before you lose your mind. Hamilton c-shell is good too. Forget dos prompt, it's no replacement for a shell.

    The GUI monkeys have misunderstood something:
    Hey, language is inefficient... let's go back to scratching pictures in the dirt or hieroglyphics on the wall in order to communicate.

    The command line is a language. They are stuck on the "pictures are better than words" meme. Yeah, true, they are -- until you learn to read, that is.

    Visual stuff, GUIs, languages etc, are more *intuitive* in the same way that picture books are more intuitive to babies. It doesn't mean they are superior. This is why the best interfaces are a combination of GUI and language. It's just like the way you give children picture books while they are learning to read.

  • Ouch:
    at least if they wants hisprograms to have users
    I understand Slashdot is not a professional news organization, but this is pretty embarrassing.

    And as has been mentioned, the title in the summary box substitutes "Human" for "Humane".

    Maybe you should borrow an idea from XP and practice pair publishing ...


  • The logic seems to be

    1. He's pulling it down - he wants it to go down...

    2. He's pulling it further down - he wants it to go further down...

    3. He's still pulling it even further down - AHA, he wants it to go up!

    This goes for Windows, Mac, KDE and probably a whole lotta other interfaces.


    I think that's Windows-only. I don't have a Mac at the moment, but my KDE (or any other X application) doesn't do this.

    That feature annoys me a lot when working on Windows, just like the scrollbars popping back if you leave the area to any other side.

    Robert
  • More to the point, it sounds exactly like the Newton interface, which he mentions explicitly in the book.
  • Here is an open source implementation of a Zooming UI in Java: http://www.cs.umd.edu/hcil/jazz/

    ...richie

  • You know, I'm willing to bet that hermaphrodites are well-aware that people think of them as strange or weird without being called "freaks" in a large public forum completely non-related to sexuality or biology issues.

    Furthermore, I'm pretty sure that most gay people would not list a "hermaphroditic freak" as their distraction-of-choice.

    The traditional neutral phrase is "member of the appropriate sex", or MOTAS (straight from the Jargon File). Better still, leave the weird imagery out of future book reviews.

  • Squeak, the open source implementation of Smalltalk, comes with windows with scrollbars on the right. Also, there is a strange way to page up and down. It has a number of other unusual UI ideas as well. And they are difficult if you come from other environments!
  • I've heard that the Canon Cat sold well; it was just the prize in a war between various divisions in the company, and was killed off due to politics.

    OTOH, it would not have been particularly expandable into any kind of general-purpose computer. But hey - lots of things that are good ideas don't sell; that doesn't invalidate them.
  • He started the Mac project and introduced Apple to the concept of GUIs. He went to Canon and brought out a word processor that's generally agreed upon to have been very nice indeed.

    So, what precisely is he not doing about things? In the end, with both him and Tog and others, someone's got to listen to them and follow their advice - neither have the ability or inclination to code a whole fully-featured UI themselves. At least they remain influential. I know I'm paying attention in my own projects, insofar as I can.
  • I wish that I had the time to agree with you, but I have to get back to my email. Evidently I'm required to manually flip pixels on and off since that's the kind of thing that users need to know about in order to use computers.

    In truth, I think that learning about computers is useful only insofar as it's useful for some other pursuit. That pursuit _could_ be something else about computers, but generally it's not going to be. I really wish that we'd design cheap, rugged, simple computers for children and then proceed to spend some time from kindergarten through high school teaching them how to program and the kind of thinking that involves, basic electronics, etc.

    Because then we might actually _raise the bar_ of what constitutes general knowledge, and people wouldn't mind writing tiny programs because they would let them do even more of whatever work they were really interested in. Computers have the ability to become prostheses of the human mind in the way that literacy already is. (which was much bemoaned long ago for impairing the development of memory)

    A lot of that involves wrapping up boring repetitive problems into well-known solutions so that a minimum of programming knowledge is necessary, but generally it involves cultivating the attitude that these boxes are tools that any one can and ought to be able to reconfigure to suit their own needs, either alone or in collaboration.
  • Yeah, but why do people insist on having a command line and a GUI as separate concepts with a wall between them?

    Why can't I type in a CLI command and get GUI output? Or make a GUI selection of spatially proximate icons and then add to it with regular expressions from the keyboard?

    Both are exceptionally useful, but neither one is as good alone as both could be together.
  • He's not out of touch with reality, reality just hasn't caught up.

    His Canon Cat word processor booted in 7 seconds. That is, it _appeared_ to come up instantly, thanks to his little trick, and others (e.g. only going into a sleep mode and not totally powering down). Psychological studies had indicated that it takes people about 8 seconds to switch brain gears and begin actually consciously interacting. By booting in 7 and buffering the inputs, it was ready to go before the users were.

    Technologies such as hibernation - particularly rather granular ones, as well as just monolithically restoring a stored RAM image from disk - and platforms stable enough that you needn't worry about leaving systems up constantly could be used to make this a reality on the desktop.

    We're not there yet, but we could be and he actually was, just in a limited environment.
  • He knows. There's an entire screed on the benefits of habitual behavior. But the habits wind up malformed or don't come together at all unless they're widely applied.

    Try reversing the functions of your car's pedals every day - you'll find that it's good to have enough consistency to form these kinds of habits.
  • Anything that saves users from having to waste time is needed, and will be taken advantage of frequently, although perhaps not consciously.

    A comparable example lies in the realm of monitors. CRTs used to take a couple of minutes to warm up and become usable. Old TVs were like this. A *lot* of work went into preheating CRTs and developing CRTs that could run colder or warm up faster. Now you can push the button and the picture comes up in seconds.

    If you really believe what you say, prove it - turn on your screen, go get a cup of coffee and wait three minutes before getting to work. Otherwise admit that this is a very important thing. Not the only important thing, but it is up there.
  • The two most powerful designed interfaces I use are my computer keyboard and my (digital) piano keyboard. To me they are intuitive, but that's because I use them every day. In both cases I am rarely doing one thing at a time. In both cases no one has devised a mechanism which improves significantly on them---in performing the same tasks, that is.

    However, although they nod towards the "human"---the key sizes relate to the size of human fingers---neither interface could in any way be described as "humane", in the sense referred to in the book, especially from the point of view of the new user.

    This is the nature of life. Life is difficult. Life is hard work. (If you'd heard my piano playing you'd know how painfully true this is. But this is only a matter of time; practice makes perfect.)

    The point I am trying to make is that the most powerful and popular interfaces are the ones which offer maximum speed and control---not the ones which have the shallowest learning curve, not the ones which focus on one task at a time, not the ones which confuse the user least.

    Remember learning to walk, or learning to invert your vision? Quite a few million years of evolution went into "designing" the interfaces between your brain and your legs/eyes. It still hurts learning to use them. But look at the pay-off. I mean literally...

  • This definitely used to happen in Windows 3.1, and I think it still happened in Windows 95 and NT 4. I think it must be a recent (and long overdue) fix.
  • One can just modify the keyboard mapping to disable Caps Lock. I haven't figured out how to do that in Windows 2000 yet, but I have turned on the "Accessibility Option" of "ToggleKeys" which makes the computer beep when I hit (Caps|Num|Scroll) Lock. It's not ideal but it lets me know my mistake straight away.
  • This trend is going full circle with the proliferation of applications to which various 'skins' can be applied, producing an entirely new look. This trend has appeared, I believe, as a direct result of the advent of dynamic-layout methodologies similar to HTML's. Let's all remember it wasn't always so.

    There are gadget libraries on the Amiga - MUI and BGUI - that have been doing dynamic layout since 1992, when HTML was new (and maybe didn't yet have forms). It always annoyed me that I couldn't easily do this when building a Windows GUI (which, thankfully, I haven't had to do for a while now).

    MUI allows a lot of user configuration of both look and feel - globally and specifically for individual applications. It's not exactly skinning, but because so many Amiga applications use MUI this is a much more powerful feature. You can see an example of customised MUI windows [sasg.com] on its web site.

  • I don't think you can reverse an F1 car.
  • You can change the look and feel of most types of widget independently.
  • Have you looked at XMLterm [xmlterm.com]? It's a strange hybrid between a web browser and an xterm. You can use it with 'pagelets' such as xls, which is like ls(1) but produces HTML where each filename is a clickable link - so you have a simple directory browser in your command line window. Also xcat can display many file formats directly in the xmlterm window (anything Mozilla can display, essentially).

    I don't know whether this sort of thing will take off, but it's certainly worth a look as a possible way of combining CLI and GUI.

  • I'm not sure how you'd organise your documents though.

    Possibly through a set of filters/queries? With no filter, *everything* on your system would be presented. You could create filters to specify files by date (creation/modification), "subject" (whatever you decide that should be), type, or a variety of user-specified attributes. Filters could be combined with AND and OR operators. NTFS has some of these capabilities but, as far as I can tell, no application support; I understand BeOS (or its file browser) has something like it.
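    Such filters could be modeled as composable predicates over file attributes. A toy sketch, treating files as plain dicts of attributes (all helper names here are invented for illustration):

```python
def attr_is(name, value):
    """Filter matching files whose attribute `name` equals `value`."""
    return lambda f: f.get(name) == value

def all_of(*filters):   # AND: every sub-filter must match
    return lambda f: all(flt(f) for flt in filters)

def any_of(*filters):   # OR: at least one sub-filter must match
    return lambda f: any(flt(f) for flt in filters)

def query(files, flt=None):
    """With no filter, everything on the system is presented."""
    return [f for f in files if flt is None or flt(f)]

files = [
    {"name": "budget", "type": "spreadsheet", "subject": "work"},
    {"name": "notes", "type": "text", "subject": "work"},
    {"name": "recipe", "type": "text", "subject": "home"},
]
# "All work documents that are text": an AND of two attribute filters.
work_text = query(files, all_of(attr_is("type", "text"),
                                attr_is("subject", "work")))
```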
  • It's "humane interface".
  • For commands to become habitual as quickly as possible some interface-guidelines are given.


    No, for commands to become habitual you need to practice them. People have a difficult time learning the intricacies of vi because they don't use it for 100% of their text editing. Once I started thinking about every command I entered in vi before doing it (such as hitting 10j instead of the down arrow 10 times, or the various ex commands) they quickly became habitual. No interface is intuitive automatically (except, of course, the nipple :-) ). True, some are easier, but those that are the most powerful are usually those that require the most effort to learn.



    This doesn't just apply to vi, of course, but anything sufficiently complex on a computer. Stick with one way and learn it.

  • As a new owner of a PalmOS device, I couldn't disagree more.

    The palm is very simple. It doesn't cover a lot of ground. You can select buttons, you can enter text, you can choose menus, and the like. This leaves no excuse for the laundry list of issues I've had with my device so far. To list a few:

    • Why do I have Datebook and Datebook+? What is the difference? Why do I have both? If they're redundant and I can use one or the other, why can't I delete Datebook, which seems to do less? Do they share data? They seem to. Is all data accessible from both, or only some?
    • The Welcome application seems useless after the first runthrough. I don't need to be taught Graffiti again. Why can't I remove it like I can some other applications? I've actually created a category called 'trash heap' for application icons I never plan to use again but can't make go away. This kludge on my part is largely defeated by the default 'All' category mentioned below.
    • Why are some of the Graffiti strokes so nonsensical? A colon is a down-up stroke. A stroke to the right enters a space, while a stroke to the left erases text? To enter a colon I have to start at the top, stroke down and back up again? That bears no relation to a colon at all.
    • If the menu button makes a menu bar appear at the top of the screen, and nothing else is up there normally, why doesn't tapping at the top of the screen activate the menu? The net result is that a lot of the applications' functionality is effectively "hidden", so that it is difficult to learn.
    • Why are there (i) help icons only on the screens I don't need any help with, but none for the elements I find confusing?
    • The buttons available surrounding the graffiti area seem extremely arbitrary. One lets me select applications, one brings up menus, one brings up the calculator, and one is a search tool. There is no way to meaningfully distinguish them other than memorization.
    • If I can select categories for my applications, why does PalmOS insist on starting me off with the "All" category, which destroys any utility of these categories by presenting all applications? Why can't I delete the "All" category?
    • Why is a button devoted solely to bringing up the calculator? I don't have any desire to even HAVE a calculator application.
    • Speaking of modal issues, if I accidentally tap instead of stroke, how do I cancel the punctuation entry? So far my only solution is to enter the bogus punctuation and then delete it.
    • Is it possible to duplicate a record, so that I can create a fillout document and then fill it out several times? This question actually stumped 5 employees of Palm Computing Inc.
    • When in the text editor in a created document, I can tap the stylus anywhere in the text to move the insertion point. If I tap the stylus somewhere outside the text, the insertion point does not move. The technical side of me knows that there is no 'document' or whitespace reaching that point, but there is no visual distinction at all, and I can't exactly hold down the spacebar to get the cursor over there, so what's with the restriction? I'm not editing source code here; it could be a little less strict about document structure.
    • This is a bit farther out, but I think reasonable: why can't I combine functionality? Why can't a memo contain several to-do items, or refer to them? If I'm taking notes on a project, why can't I mark the sections I'm working on as done, in a manner that keeps some association between them?

    You may not agree with all these issues, but I believe you will find most of them stand. The Palm device, for something so trivial, suffers from far more than its share of usability issues.

  • Show me a successful application based on Raskin's principles and I'll buy this fawning fanboy crap. Good ideas are a dime a dozen. Proven ideas aren't. On that note, given the proliferation of "GUI Bloopers" type books and sites out there, like the Interface Hall of Shame [iarchitect.com], it sure would be nice if people managed to invest a little quality in existing GUI design principles. The Start menu in Windows, for example, violates Microsoft's own design guidelines on menus (not to cascade more than one level).
  • It only becomes intuitive when you've done it a thousand times. (See that nipple quote in someone else's .sig.)

    So go ahead, find the 'one and only one, obvious way' to achieve everything you need to do to generate your content. I'll still be here trying to :wq in a web form, and performing searches with / in pine.

    Oh and vi (or at least vim) has incremental searches too:

    :set incsearch



  • Naturally, a review can't build up the supporting arguments for the points made in the book. Therefore it's no surprise that several posters have refuted some of Raskin's proposals using arguments that Raskin addresses in his book. Here's a summary of counterpoints to several of the most common responses I found in this discussion. Also tossed in are occasional citations from The Invisible Computer [amazon.com] by Donald Norman, an excellent companion to Raskin's book.

    A common point. But buried behind it is another fact. We are all different. What is humane and efficient for one person is a nightmare for another.

    and

    I think something that a lot of people miss is the fact that the average computer user is very different from the average Slashdot reader or Linux user.

    Human beings are far more similar cognitively (psychologically) than they are different. Before we worry about designing interfaces optimized for those differences, we must first develop interfaces best suited to the 99.99% of ways we think and behave alike. One example cited in the book is that human beings can only concentrate on one thing at a time. Raskin writes that the current state of user interfaces is nowhere close to being ready for the minor tweaks our differences would require.

    Someone else's idea of "the most natural way to do x" oftentimes isn't mine. I guess that's why I always set custom keys in games, use emacs, and think Perl is fun to hack around with.

    The above comment was arguing against having a Single Way to Do It. It's partially refuted by the previous counterargument, and further refuted by Raskin's coverage of "habituation": the degree to which an interface becomes easy to use is directly proportional to the degree to which users can develop habits with it. Having more than one way to do it prevents or slows habituation by forcing the user to make a conscious choice about which way to use. It also complicates the UI by increasing the number of commands in the system.

    However, I don't think that the thing [another poster] is saying and the thing Raskin is saying are mutually exclusive.

    In an ideal world, there should be only one way to do it, BUT the USER should be able to determine what this one way is.

    Actually, they are mutually exclusive. Having a "single way to do it" means having a single, system-wide command to perform an operation. See above counterargument for why a single command is good. Furthermore, if having the user "choose" involves setting a preference that changes the way the system behaves, then you've created a mode. Modes are bad, m'kay?

    Intuitive

    Several posts talk about "intuitive" interfaces. Raskin points out that there is no such thing. People usually mean "familiar" when they talk about intuitiveness, i.e. a UI that is like another UI they have already learned or become habituated to.

    One problem ... in the book he talks a lot about products like this Canon word processor that didn't seem like commercial successes. They may have been "humane" or even efficient, but no one bought them. What good is that?

    In The Invisible Computer, Norman writes that there are three equally necessary legs to support a product: Technology, User Experience, and Marketing. The Canon Cat had two of the three licked, but failed on marketing. That doesn't diminish the value of its technical and user-experience achievements.

    I might buy the book just to see what he means by "no file hierarchies". I've read about this book before, and that was what stuck out the last time as well.

    Yes, buy the book. Raskin's proposal for an interface without file hierarchies, file managers, or "files" per se is my favorite part. He addresses categorizing, organization, and finding documents.

    The thing is, I can't imagine an interface that is 100% intuitive to use, without requiring the user to think about details of the interface, and simultaneously allows the sort of massive power you'd get from, say, putting together a five-command Unix pipeline that picks out all the lines that appear exactly twice in a text file. So if you aren't allowed separate beginner and expert modes, what can you possibly do?
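    As an aside, the "five-command pipeline" that poster alludes to can be built from standard tools; this sketch is my guess at what they had in mind (and manages it in three commands):

```shell
# Demo input: 'b' is the only line that appears exactly twice.
printf 'a\nb\na\nc\nb\na\n' > /tmp/twice_demo.txt

# sort groups duplicate lines together, uniq -c prefixes each distinct
# line with its count, and sed keeps only lines whose count is exactly
# 2 (stripping the count prefix as it goes).
sort /tmp/twice_demo.txt | uniq -c | sed -n 's/^ *2 //p'
# prints: b
```

    The point stands either way: habituated experts compose this without thinking, while a newcomer has to learn three separate commands and the pipe concept.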

    This one's hard to address completely in a single paragraph. In short: efficient interfaces that allow habituation are both friendly and capable of expert use. Raskin dispels the notion that "user friendly" means "dumbed down". A humane interface is both usable and powerful. A much better argument is in the book.

    1 app with 1000 features versus 1000 apps with 1 feature

    I'm sure there has to be a happy medium in this somewhere. [Another poster] cites Word as a program that tries to do everything, to the point that you get lost in the features. I agree. But on the other extreme, I have my latest Linux-Mandrake CD set with thousands of applications, most of which have only one use.

    Actually, Raskin proposes something more like zero applications: one system, any number of user-created documents, and hundreds of commands that work in all documents. As an example he describes a "calculate" command. Say a user wants to type in a formula and calculate the result. In contemporary systems, the user must run the spreadsheet application, type the formula into one of the approved formula-entry text areas, and execute the formula. Raskin proposes that the user should be able to select any text, anywhere, and perform a "calculate" command on it (perhaps by pressing "calculate" on their keyboard). The selection is replaced with the result (which could be "can't calculate", "Not a Number", "Divide by Zero", etc.), while the original formula is preserved in the document. So calculating a formula is reduced to: 1) type in the formula, 2) select it, 3) calculate the selection.
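    A toy sketch of what such a "calculate" command could do to a selected string, leaning on the standard bc calculator (the function name and error text here are my inventions, not Raskin's):

```shell
# Hypothetical "calculate" command: take the selected text, try to
# evaluate it as arithmetic with bc, and print the replacement text --
# either the result or an error marker. bc writes its complaints to
# stderr and nothing to stdout, so an empty result signals failure.
calculate() {
    result=$(printf '%s\n' "$1" | bc 2>/dev/null)
    if [ -n "$result" ]; then
        printf '%s\n' "$result"
    else
        printf "can't calculate\n"
    fi
}

calculate '2 + 3 * 4'   # prints: 14
calculate '1 / 0'       # prints: can't calculate
```

    In Raskin's scheme the editor would do the select-and-replace part; the point is that the command works on any text anywhere, not only inside a spreadsheet's blessed input fields.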

    Apple's OpenDoc was a step in this direction. You started with a document, and then invoked the editors to create the different pieces of your document. I think it was heading away from the application-centric approach, but like a lot of Apple projects of the mid-nineties, it got toasted before it really got off the ground.

    Raskin directly addresses OpenDoc. OpenDoc and OLE (and its ancestors) are still the wrong approach. The "different editors" are still applications, just disguised. It still has the same mode problems as separate applications, depending on which part of the document I'm editing: "am I in spreadsheet mode right now, or am I in word-processing mode, or am I in paint-program mode?"

    Can you imagine trying to play Quake using an MS Word (or emacs, or vi, or pretty much anything not remotely related to 3D navigation) interface? Or for that matter, trying to type out that document the boss wants in 20 minutes if you have to use a mouse/keyboard combination to pick letters from your latest Quake/text-editor mix?

    Citing The Invisible Computer again, Norman would say that these should be separate devices: one device for the creation and navigation of content and information, another for playing games. Norman proposes abandoning the general-purpose computer in favor of multiple, less expensive devices each focused on a single activity.

  • > all visionaries are subject to much abuse.

    And therefore, all those who are subject to abuse must be visionaries? Microsoft must therefore truly be the most innovative company out there.

    True, they laughed at the Wright brothers and they laughed at Marconi. They also laughed at Bozo the Clown...

    I take solace in knowing that if I'm modded down, it will only be because I, too, am a visionary.
  • I think something that a lot of people miss is the fact that the average computer user is very different from the average Slashdot reader or Linux user.

    Absolutely true. Which is why I cannot agree with Raskin's assertion that there should only be one way to do something. (Or at least the reviewer claims that he makes such an assertion. I haven't read the book myself.)

  • > So the Palm forces you to adapt to the machine while the Newton tried to adapt itself to you.

    That was the exact point of the quote. Mr. Levy did not like the idea of Graffiti. :)
  • by mcc (14761)
    What on earth are you talking about?

    The command line model does not dictate *anything* about the way a program parses its elements. The things that you type after the program's name are passed to the program as an array of strings, and may be parsed by the program in *literally* any way it wants. Moreover, the program may choose to ignore its command line elements and take over the screen if it so chooses, using the curses library or something to create an entire kind of text-based GUI.

    Moreover, the syntax of - flags is nowhere near standard. I could give you at least 12 ways a program could choose to parse its - flags, which is impressive when you consider how simple the damn thing is, and frustrating when, upon using a certain program, you realize it follows only one of those 12 ways and you don't know which.

    The Unix CLI is at least as flexible as X; it just isn't as poorly designed or ugly ^_^ But there are NO constants.

    Amen to your last paragraph, however, very much.

    /me bows
  • by mcc (14761) <amcclure@purdue.edu> on Thursday May 17, 2001 @08:32AM (#216502) Homepage
    > The really controversial idea, though, is to abandon applications altogether. There would be only the system, with its commands, and the users' content

    This sounds like OpenDoc. Does Raskin actually mention that OpenDoc is what he's talking about? Is OpenDoc what he's talking about?

    How does Raskin propose that this state he advocates in his book be reached? If he wants to follow the OpenDoc model of applications being split up into as many tiny interlocking pieces as possible, with a framework for pieces from disparate apps being allowed to interlock with one another as long as they operate on the same kinds of data, then how does he propose that the parts be coordinated in such a way that they appear, as he wants them to, modeless and consistent?

    Basically what I'm asking is this: the things he proposes (a single modeless system that does EVERYTHING and in which every bit of interface is consistent with every other) are things which are extremely difficult to achieve unless every bit of said system is written by the same company. Does he suggest any way that disparate groups could work on the creation of such a system without the different voices involved ruining the purity of the whole thing -- like, the people writing apps start doing their own thing, and eventually the different parts of the system become inconsistent again?

    I also would be curious to see what Raskin would think of the Mac OS X "services" model, which attempts to revive the OpenDoc idea in a less extreme manner -- applications can expose parts of themselves as "services" that perform certain actions on certain types of data. If the user is using a program which handles a type of data that an installed service can work with, the user is allowed to let the service wrest control of the data from the application and operate on it. Is this good because it's a step in the right direction, or bad because, unlike OpenDoc, the focus remains on the application and not the document?
  • by mcc (14761) <amcclure@purdue.edu> on Thursday May 17, 2001 @08:52AM (#216503) Homepage
    What you say is very true. However, I don't think that the thing you are saying and the thing Raskin is saying are mutually exclusive.

    In an ideal world, there should be only one way to do it, BUT the USER should be able to determine what this one way is.

    In my humble opinion, the direction that GUI design needs to go, and inevitably will go -- I have no idea when, but it HAS to go this way eventually -- is the direction of infinite modularity: separating style from content, separating form from function. Up until this point, interface design has been a constant war between the Macintosh "all applications should act roughly consistently with one another" mindset, where you take a single set of guidelines and stick with them, and the X Windows/UNIX "interface guidelines are fascism" mindset. The UNIX mindset has an extremely good point -- but it makes the mistake of trading Apple's single point of interface fascism for a billion tiny points of interface fascism, one for the author of each application. The author of each application is still the one in control, and controls only the application they created. The user is in control of nothing. And from the user's standpoint, being powerless over the behavior of a bunch of applications that all act the same (as on the Mac) is better than being powerless over the behavior of a bunch of applications that all act differently (as on UNIX).

    Ideally, the person who dictates interface behavior would be neither Apple nor the application designer, but the user. Ideally Apple and the application designer would both offer *suggestions* as to how the application would behave, and the user would determine which, if either, of these suggestions the application would follow.

    Ideally, eventually we will move to where programmers don't say "i want a text field, two buttons, and a menu with these specific sizes and positions", they will say "i want a text field that you can do these two specific things to and which you have these options for, and the text field and the potential buttons and the potential menu have these specific relationships to each other", and the system will build the interface based on the rules the user prefers.

    Hmm. I'm kind of rambling now, and I have to go. But how I was going to end this is by saying that the direction things should take from here, and the direction things ARE taking (at least with the things that GNOME and Apple are putting forth), is that common things like text fields and scrollbars should be done by a single systemwide service, but *abstractly* -- such that the user can augment the behavior of those text fields or whatever at will; and that we should move toward the model used by CSS, the Glade files of the GNOME API, and the nib files of Apple's Cocoa API -- where you define functions, then define an interface, and then link the two together. I, the user, can open up the nib files inside these Mac OS X applications I use, and because the relationship between interface and function is abstract rather than hardwired into code, I can rewire that relationship -- I can delete, rearrange, and redesign the function of interface elements of existing programs in Interface Builder despite not having the source code to the program. This is the way things should be.

    OK, I'm done. Thanks for listening :)

    "I have a firm belief that computers should conform themselves to my behavior, and not the other way around." --Steven Levy, on the original Newton release of the Graffiti pen input system now used in the palmpilot

  • "Linux *is* user friendly. It's not idiot-friendly or fool-friendly!"
    The majority of users are not "idiots" or "fools". Some are doctors, lawyers, or artists who have chosen to concentrate on an area outside of computers. Saying they are idiots because they don't understand symbolic links is like an eye surgeon saying that a programmer is an idiot because he can't remove a cataract.

    I've had to deal with this kind of situation before and your example is flawed. The people that this statement refers to are people who refuse to learn, either through direct teaching or through experience. Many times these are the same people who refuse to follow detailed instructions and are then confused and indignant when things don't work. Also the people who I am thinking of will use a software package for years and never attempt to learn more about the package or become more efficient using it.

    My second point is that comparing the effort to understand symlinks with the ability to perform eye surgery is really lopsided. I can, hopefully, explain the concept of symlinks in 5 minutes to anyone who isn't a vegetable and is willing to learn; it takes years of study and hard work to become even basically competent in the medical field.

    Computers are not all Magic Fairies and Pixie Dust!!

  • Ding, Ding goes the bell!!

    I wholeheartedly and emphatically agree. I was possibly a bit too inflammatory in my initial post, but this is exactly the kind of thing I would like to see. A computer is a tool, and the full power of that tool should be available to the end user. It should be reasonably unobtrusive but allow for continuous learning and improvement; I learn new things every day that allow me to be a more efficient and accurate computer user/admin.

  • Note: I wrote much less inflammatory posts above and below. Check them out as well if this one just makes you mad.

    Also Note: I have little respect for people who refuse to learn anything about the world around them.

    Guess I'm an idiot for not taking time out of my busy schedule to explore all the things my hammer and screwdriver can do.

    No, that's not the point. The point is that you are an idiot, IMHO, if you continually use your screwdriver to pound nails even after someone else takes time out of their busy schedule to show you how to use a hammer.

    A computer is a TOOL. Tools are supposed to ease work or accomplish something. The point should not be to use the tool but to do something productive with it.

    Now this is something that I agree with; I just wish that this gem of a statement wasn't surrounded by such garbage.

    So, pray tell, why should my lawyer buddy understand what a symlink is and why it's better than a shortcut in Windows, when all he wants to do is write a cease-and-desist letter?

    Well, this is a really bad example; even I haven't had cause to use symlinks in my home directory. I usually use symlinks as an administrative tool. A better example for our hypothetical lawyer would be one who refuses to learn how to use templates or mail merge in his word processor, even though these features would help immensely with his common task load. I've seen reasonably intelligent people who use spaces and tabs to center text instead of using the center feature, or who manually make large numbers of changes instead of using search-and-replace.

    The weird thing is that since it involves a computer, it is socially acceptable to be incompetent. If these people brought this same attitude to other aspects of their job, they would be fired on the spot.

    "What, you've been here for three months and you still can't file an order! Please be out of your desk by 3PM."

    Contrast:

    What, you've been here for three months and you still reply to email by retyping the complete text of the previous message into a Word document, attaching it to a new message, and sending it to everyone in the address book because you can't remember who you need to send it to? Oh, and you're a secretary who is a hunt-and-peck typist who only gets 10 WPM. Poor baby, computers are so hard.

    Note: The above is a gross exaggeration, but I've seen that kind of attitude before and it is quite frustrating. When I was working in a job completely unrelated to systems administration, I used to have people come up to me all the time and ask about basic features in their office productivity software. My job didn't even require me to use the software much; I just enjoyed learning, while their jobs required them to make several slides every day, sometimes with related documentation. It gets frustrating when you are asked the same questions every day: "How do I line up multiple objects onscreen?", "What is that search-and-replace thingy you were talking about?", "How do I log into this computer again?", "What's a network?", "I seem to have two drive letters pointing to the same shared area on the fileserver. I didn't want that, so I deleted all the files on one of the drives, but they seem to have disappeared from the other one as well. There must be something wrong with my computer, fix it!", "Backups, what are backups?"

    Grrrrrr. Sorry for venting some old, repressed bile.

  • I've said this before, but I'm not sure it has stuck yet. I think that Raskin has a good point about consistency across a system's interface. However, I've come to believe that UI is not the bugaboo of human-computer interaction. The real problem is system configuration and administration.

    My parents, for example, are competent using any number of applications (all with varying purposes and slightly different UIs). But ask them to change any system setting, and they will either give you a blank look or freak out. They don't have the faintest idea of how to start. They're even wary of navigating control panels until they find the right tab/checkbox.

    Fair enough, right? The big realization came when I admitted that I'm afraid of system administration too, especially when it comes to Unix systems. Even with all those neat-o little configuration tools that come with Linux now, it can be a nightmare to set up X or networking if things aren't just how you found them.

    Compared to these sorts of trials, learning to type the right commands or navigate a hierarchy of menus is easy. Most humans are born with the ability to pick up language, so typing commands isn't too much of a stretch. Pointing and clicking isn't hard either. What we're not equipped to do is manage a lot of detail, or absorb a lot of underlying principles quickly. Until someone addresses those concerns, the UI may be great but human-computer interaction will not move far forward.

  • We do it for cars, why is it wrong for computers?

    Because the number of people killed by incompetent computer usage is vanishingly small.

    -jon

  • A common point. But buried behind it is another fact: we are all different. What is humane and efficient for one person is a nightmare for another. The Mac GUI is a godsend for some and the anti-christ for others. When I am doing desktop work, windows (X or MS) are a comfortable environment in which to work. But when I am doing server administration, text is the way to go. So even when we talk about a single individual, what is good and intuitive for one task is a nightmare for another.
  • I read this book when it first came out. I think it is great! Well written and thought-provoking. It makes me want to try a Canon Cat, a relatively successful product that Jef shaped. I've thought about programming a modern user environment with these ideas, but I haven't much time to devote to it. Perhaps someone out there does!

    But my biggest question after reading and agreeing with much of the book is: how the hell does it apply to business applications? I can see how Jef's ideas work well for content creation like writing, emailing, and drawing. I don't yet see a good way to apply LEAP and non-modal interfaces to typical repetitive business applications like order entry, general ledger, and so on... especially in a networked, enterprise setting. You can't possibly have dedicated keys for every application function, and the "highlight command and press execute" approach gets you back to a variation on command-line interfaces or menus. Modality or navigation inefficiencies seem to creep back in!

    Any thoughts on this among other readers of _Humane Interface_? Jef, are you lurking?
    --
    Mike
  • Having strictly one way to do each task in an application limits accessibility for the disabled. This limitation might be solved, however, by violating one of his other guidelines - having more than one mode - one could have a Keyboard-Only mode or a Mouse-Only mode, for example.

    Or maybe better, allow users to have their own, potentially system-wide interface configuration, with an interface API that every program receives keyboard, mouse, or other input device events from.
  • by HamNRye (20218) on Thursday May 17, 2001 @03:21PM (#216512) Homepage
    While the Flamethrowers may be out of line, I do have some issues with some of his beliefs.

    Most importantly, Raskin has a theory on UI and an idealized view of what can be accomplished. Neither of these can be viewed as realistic. An old axiom that comes to mind is: "The best way to make software user-friendly is to limit options."

    Sure, let's do away with Beginner and Advanced interfaces to computers... How? Just do away with the advanced interface. Let's do away with Perl and its TMTOWTDI core belief.

    Ideally, the computer should learn as it goes along. This is somewhat possible even with the "!grep" syntax of the shells, and even with aliasing. When I type "!grep", the shell repeats my last grep invocation. What if grep became more intelligent? Then I could say "grep messages.log" and it would remember the string and options I last used to search that file.
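    As a toy sketch of that idea (the name rgrep and the state file are invented here; a real version would remember a pattern per file rather than one globally, and real "learning" would go much further):

```shell
# Toy "grep that remembers": called with a pattern and a file, it
# searches and records the pattern; called with only a file, it
# reuses the last recorded pattern.
RGREP_STATE="${TMPDIR:-/tmp}/rgrep.last"
rgrep() {
    if [ "$#" -ge 2 ]; then
        printf '%s\n' "$1" > "$RGREP_STATE"
        grep -- "$1" "$2"
    else
        grep -- "$(cat "$RGREP_STATE")" "$1"
    fi
}

printf 'error: disk\nok\nerror: net\n' > /tmp/rgrep_demo.log
rgrep 'error' /tmp/rgrep_demo.log   # searches and remembers 'error'
rgrep /tmp/rgrep_demo.log           # repeats the last search
```

    Even this trivial wrapper shows the trade-off the thread is circling: remembered state is convenient, but it is also a hidden mode the user has to keep in their head.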

    The next step in UI development is a UI that will learn from its user and adapt, not a UI that tries to simplify the entire computing experience so that it is manageable by anyone with an IQ over 75. The latter was the initial design of the Mac OS, and one of its worst features IMHO.

    I'm reminded of VCR Plus, the "make it easy to record programs" breakthrough of the '90s. VCR programming was too hard for most people, it was reasoned, so why not make it easier? Well, sure enough, the problems with VCR Plus were worse than trying to learn to set the recorder. And the "user-friendliness" (a.k.a. not telling you what's going on) made you reluctant to use it anyway, because you didn't know whether it would work until after it was supposed to have worked.

    Modeless????? Well, the original modes came out of the fact that no one could decide how these things should work: some liked to insert, some to overwrite. My mother has never switched modes, but I do 3-4 times an hour. Again, the answer is to do away with insert or overstrike, or devise something even more onerous that will not look like a mode but still accomplish what I can do with one button right now. Perhaps Raskin would also like to do away with Caps Lock in the process. (Yes, that is also a mode.)

    While there are some good points, the noise well outweighs the signal.... And it is many times Raskin himself who is unwilling or unable to give up or reconsider what he thinks he "knows" as truths, often to his own detriment -- much of it stemming from the beliefs he fostered as an Apple developer.

    ~Hammy
  • I can see the problems with the huge, multi-level file hierarchies that are present in Windows, Unix and every other system under the sun. People just can't keep track of thousands of files organized in hundreds of directories in a big tree structure. So far, that's the only way to organize the googlebytes of data a typical computer has, and it works poorly.

    Windows, the Mac, and the X GUIs GNOME and KDE address this problem by hiding the filesystem whenever they can. In Windows, you click on an icon or navigate the Start menu to start a program instead of finding the executable foo.exe somewhere in c:\Program Files\foo\. Unfortunately, the filesystem still rears its ugly head frequently, and forces people to wander through it.

    Maybe a database model would work better than the traditional hierarchical file system for managing all our data. Instead of a tree of files in subdirectories, have a large database that can be queried with SQL-style commands by the geeks among us, and GUIs capable of Doing The Right Thing for every piece of data by using type information in the tables to put things in the right context. Programs would end up in the program table, word-processor documents in the document table, and the system would know instantly what it's dealing with when it looks in a table; if the database is set up correctly, most searches can be made very quickly and are more likely to return useful results.

    OK, it's a crazy idea that has the potential of being a hundred times more complicated than hierarchical file systems. Anyone else have any ideas?
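    For what it's worth, the idea is easy to prototype with the sqlite3 command-line shell (assuming it is installed; the table layout below is invented for illustration, and a real system would need far more metadata):

```shell
# Toy "documents live in a database" sketch using SQLite.
DB="${TMPDIR:-/tmp}/docstore.db"
rm -f "$DB"
sqlite3 "$DB" <<'SQL'
CREATE TABLE documents (
    id    INTEGER PRIMARY KEY,
    title TEXT,
    kind  TEXT,   -- 'memo', 'spreadsheet', 'program', ...
    body  TEXT
);
INSERT INTO documents (title, kind, body) VALUES
    ('Q2 budget',     'spreadsheet', '=SUM(A1:A9)'),
    ('Shopping list', 'memo',        'milk, eggs, bread'),
    ('Trip notes',    'memo',        'pack the charger');
SQL
# One typed query replaces hunting through a folder tree:
sqlite3 "$DB" "SELECT title FROM documents WHERE kind='memo' AND body LIKE '%milk%';"
# prints: Shopping list
```

    The geek gets the SQL; the GUI's job would be to generate queries like this from a search box and the type information in the tables.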

  • I'm pretty sure NeXT had the scroll bars on the left, for this very reason.

    As did Smalltalk, where the scroll bars could also "flop out" only when the window was active (so that they didn't take up any space when it was inactive).

    This is still an option in Squeak. You can see some examples of it in the Squeak screenshot gallery at: http://minnow.cc.gatech.edu/squeak/683
  • Windows, the Mac, and the X GUIs GNOME and KDE address this problem by hiding the filesystem whenever it can. In Windows, you click on an icon or navigate the Start menu to start a program instead of finding the executable foo.exe somewhere in c:\Program Files\foo\. Unfortunately, the filesystem still rears its ugly head frequently, and forces people to wander through it.

    Mac OS (Classic, not X) does not do that. It does the exact opposite: put the filesystem in plain, obvious, tangible view. The user actually manipulates files and folders, and never touches an abstraction like the Start menu, $PATH or whatever. That's one of the nice things with Mac OS (again, not X), in my experience: you really feel in control of the filesystem.

    Mac OS X OTOH hides much of the filesystem and brings Start menu-like abstractions. A nice touch is the package system, where an app (or lib, or whatever) behaves like a single file, yet actually is a whole hierarchy of files and dirs for the OS. Makes packaging complex apps really nice, and allows drag-and-drop installs just like the Old Days.

  • Good pt. about learning applications: repeating the same tasks over time should not take a linear cost in human interaction... rather, the cost should start to drop over time. One thing that I think is important in features like this (application learning) is that the implementation needs to be shared, and not proprietary to each tool. I.e., a learning library could be part of a window manager, toolkit, or even OS... but hopefully not unique to each tool.
  • So when I paint a new picture in real life, where do I put it? Do I put it on the shelf in the den, or do I file it in the basement with my college art? Real life offers many models where things need places to exist... what computers may be able to do for us is improve/automate the process. Hiding the fact that things go places may not actually help.
  • What kind of geek are you? AFRAID of system administration? What, can't read XF86Config?
    Where do you live? We're coming by to lock you in a room till you get over this...

    Bet you don't even use vi...
    nmarshall

    The law is that which is boldly asserted and plausibly maintained.
  • Considering that Raskin could be considered the father of the Apple Macintosh, I think it's a little unfair to say he hasn't done anything to make user interfaces better.
  • So, you NEVER make mistakes in vi? You never forget that you are not in insert mode and start typing? You never accidentally use CAPS LOCK when inserting some caps text and then, when you are back in command mode, start executing the shifted commands instead of the regular commands? If you never make a mistake, is it because you have to stop and think about it?
    And then there is the fact that if you type 10 (as in your 10j example) and then change your mind and delete the current line (dd), you will have to stop and think about whether you are in a mode before you hit esc. Otherwise you will delete 10 lines instead of one.

    I use vi all the time, and I don't usually make mistakes, but when I do, it's always a modal mistake. (Even worse is a co-worker who uses caps lock just to type a capital letter. Then he uses vi. A lethal combination. It's painful to watch.)

    Commands are gestures. If you are concentrating on your text and not the interface (after the gestures have become habituated) you won't have to think about the interface at all. You will execute the command before you consciously decide what buttons to push.

    You are right, vi can become habituated. Once I am outside of vi, in other places I try to use all the same commands. I use ksh set to vi-editing mode so I can edit the command line. Sometimes when I am working on a particularly gnarly command I hit "ESC:wq" to "save" it, which of course doesn't work, because I'm in ksh, not vi. No big deal, just another mode error. But it is a distraction.

    Read the book. It's good stuff. Or at least read the summary on Jef Raskin's web site.
  • by drivers (45076) on Thursday May 17, 2001 @08:08AM (#216527)
    I heartily recommend this book. Jef Raskin is a highly misunderstood HCI expert. I say that because about 6 months ago he got a lot of flames for criticizing Apple for continuing to make the mistakes he preaches against, mistakes inherent in the WIMP (windows, icons, menus, pointer) interface. Raskin himself replied and tried to explain some things. He said that to understand him you had to read his book.
    I bought the book soon after that, and as I read it, I was blown away. Sometimes when you read a book, it just gets into your head and it's all you can think about. That's how it was for me.
    Unfortunately, although it describes many of the principles by which a Humane interface should be designed, it does not present a design for a specific interface. Perhaps it's because there is no one single right way to make an interface, but there are many wrong ways. Software producers continue to make the same mistakes about what they think is user-friendly (yes, including GNOME and KDE by following the WIMP example), but Raskin shows that many of the usual assumptions are wrong (pretty much everything we currently understand about user interfaces, e.g. "icons are user friendly").
    After reading it, I felt that if I followed the principles of the book, I too could design a radically different yet vastly improved system for beginners and experts alike. I emailed Raskin with my thoughts. The response to the possibility that I could program such a thing was (paraphrased) "You will need a spec, which I am still working on."
    I suppose I am still interested in making an interface with the principles outlined in the book, but I think it would require as much work as a GNOME or KDE project (including applications), perhaps even an entire new operating system, depending on how far you wanted to take it.

    Jef Raskin's homepage is here [jefraskin.com]. Be sure to check out the summary of The Humane Interface [jefraskin.com] at least, if you aren't going to read the book.
  • so when i paint a new picture in real life, where do i put it? do i put it on the shelf in the den, or do i file it in the basement with my college art? real life models in many places where things need places to exist ... what computers may be able to do for us is improve/automate the process. hiding the fact that things go places may not actually help.

    This is only moderately true even in the real world, and not at all true for computers.

    If you paint one picture, you can put it one place. If you make ten prints of one lithograph, you can put them ten places, and none is intrinsically more important than the other. And what if you're a poet? You think of a poem, so it's filed in one place: your head. Then you teach it to a friend. And then you recite it at a public reading. Where is your poem "filed" now?

    Computers are even worse. When I create an image with Photoshop, where is it filed? Well, in truth, it exists only as a bunch of variances in magnetic fields on a number of spots scattered across several metal plates inside one or more metal boxes inside my computer. Unless, of course, I'm using a network, in which case it's inside computers that I may not know the location of and may never see. That's the only "fact" about where it is. Hiding the fact that things go places is more than helpful; it's vital. If I had to specify the exact physical locations of the components of my Photoshop file, it would be ridiculous.

    That's not to say that making things work like the physical world is wrong. The metaphor of a desk, complete with a desktop, filing cabinets, folders, and so on made the Mac useful to a lot of people. But there's no need to stick with a metaphor past the point where it is useful.

    The web is a great example of this. Thanks to Google, I rarely bookmark anything; I just search for it. And if I do remember something about the URL, I usually remember only the domain name, finding the actual document I'm after via navigation bars and search forms. Would it be possible to force this into a real-world metaphor, where each "thing" is in exactly one "place"? Sure. Would it be useful? Absolutely not.
  • by dubl-u (51156) <[2523987012] [at] [pota.to]> on Thursday May 17, 2001 @09:44AM (#216531)
    Categories and subcategories map well to the real world. If I have only a few files (documents, whatever) they can be in one place. But if I have many, then I'll want them to be organized somehow, and hierarchically makes a lot of sense for that.

    That's nearly true. Categories and subcategories map well to a particular view of the world. But to make effective use of the hierarchy, you are forced to use that view of the world.

    Take an example. Say I create an Excel spreadsheet for a 2002 budget projection for a project for one of my clients. What folder and subfolder do I put it in? The top level of the hierarchy could be my role (personal documents vs business documents); it could be the entity the document belongs to (the client's name); it could be the topic of the document (a budget); the period it covers (2002); the type of document (a spreadsheet); the application used to create it (Excel); the date it was created (2001-05-17); or the name of the project. Or something else entirely.

    So which is my top-level folder and which are subfolders? It depends on which I consider most "important" or "intuitive", which varies from person to person and from day to day. Heck, if you grew up with Windows you may believe the best place for an Excel document is in the C:\Program Files\Excel directory. I know secretaries who keep all their files right in with the program files because they never learned to make or change directories.

    I haven't read Raskin's book yet, but when I dream of better ways to do this, I imagine history lists, search boxes, and hierarchies with multiple roots and automatic filing of documents based on things like name, date, creator, type, and keywords and categories chosen by the author. So when I'm looking for that budget projection, I can browse based on any of those criteria I mentioned above.
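    The multi-facet filing idea above can be sketched in a few lines. This is purely an illustrative toy, not any real system: the `Document`/`Store` names, the facets, and the filenames are all invented for the example. The point is that each document carries several metadata facets, and retrieval can filter on any combination of them instead of one fixed folder path.

```python
# Hypothetical sketch: documents carry metadata facets instead of living in
# exactly one folder; lookup filters on any combination of facets.
from dataclasses import dataclass, field

@dataclass
class Document:
    name: str
    meta: dict = field(default_factory=dict)  # facet name -> value

class Store:
    def __init__(self):
        self.docs = []

    def add(self, doc):
        self.docs.append(doc)

    def find(self, **criteria):
        # Keep documents whose metadata satisfies every requested facet.
        return [d for d in self.docs
                if all(d.meta.get(k) == v for k, v in criteria.items())]

store = Store()
store.add(Document("budget.xls", {"client": "Acme", "year": 2002,
                                  "topic": "budget", "type": "spreadsheet"}))
store.add(Document("letter.doc", {"client": "Acme", "year": 2001,
                                  "topic": "contract", "type": "document"}))

# Browse by client and year, or by topic, or by type -- no single
# "top-level folder" decision ever has to be made.
hits = store.find(client="Acme", year=2002)
```

    The same two documents are reachable through role, client, year, or type, which is exactly what a single rooted hierarchy cannot give you.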
  • I posted something about this in a thread yesterday that wasn't very informative, so here is more information on this product, which I saw in Scientific American.
    Input devices will have to miniaturize as well and become more direct, intuitive and able to be used while your hands (and part of your attention) are engaged elsewhere.


    The Cyberlink System represents this next step in the evolution of the human-computer input interface. The Cyberlink System is a BrainBody actuated control technology that combines eye-movement, facial muscle, and brain wave bio-potentials detected at the user's forehead to generate computer inputs that can be used for a variety of tasks and recreations.

    skip a paragraph

    The forehead is a convenient, noninvasive measuring site rich in a variety of bio-potentials. Signals detected by three plastic sensors in a headband are sent to a Cyberlink interface box which contains a bio-amplifier and signal processor. The interface box connects to the PC computer's serial port. The forehead signals are amplified, digitized and translated by a patented decoding algorithm into multiple command signals, creating an efficient, intuitive and easily learned hands-free control interface.

    Three different types or channels of control signals are derived from the forehead signals by the Cyberlink Interface. The lowest frequency channel is called the ElectroOculoGraphic or EOG signal. This is a frequency region of the forehead bio-potential that is responsive primarily to eye movements. The EOG signal is typically used to detect left and right eye motion. This signal can be mapped to left and right cursor motion or on/off switch control.


    http://www.brainfingers.com/technical.htm [brainfingers.com]

    Maybe the author of the book can work on a spinoff of this product.
  • The obvious alternative to a fixed category system is arbitrary search criteria.

    "Computer, give me the document I was working on last week about Genghis Khan."
    "Computer? Didn't I once write something that compared Bill Gates to a hibernating wombat?"

    ... and so on. The computer can retrieve your document based on any criteria you like, not just the one (or, counting symlinks, the several) you happened to file it under.

  • These concepts are all hold-overs from an era where the people designing software were the only users of the software. If I'm a programmer, of course I'm going to design my shell so I can write shell scripts. Of course I'm going to give myself the ability to create complicated hierarchies -- that's how I think!


    I discussed this issue at some length in a class I took a year or two ago on user interface design. We were taught the value of interfaces built around the idea of discovery of features rather than remembering commands. However, I pointed out an important fact of life in software. The value of software is in large part a result of what we can automate. Any task that requires that I click, even once, each time I perform it because some portion of it can't be scripted sets an upper limit on the productivity I can achieve.

    I don't mean in any way to criticize the usefulness of a GUI. GUIs are excellent for first time or infrequent tasks. They are also good for tasks where there are a number of parameters which will be different each time the task is performed. The common factor in each of these cases is that a significant amount of user thought is dedicated to the task in each case. Making the interface obvious and eliminating the need for the user to think about the interface is useful.

    Now take the other end of the spectrum. A good example is a nightly backup. My thoughts about the task once it is set up should be minimal. I want it foolproof and quick to execute. I want to put in the tape and run a single, simple command (a script without parameters or a single button click). Better still, I want to change the tape each morning and have it run automatically overnight.

    The bottom line on this comes down to the whole reason that users (the "my grandmother" model) don't write applications as a rule. To completely automate something requires an analysis of all of the different execution paths and either automating them or bailing out with decent messages so that you can reconstruct what happened.

    What I am talking about is the difference between a user interface and an API. To the typical user, APIs are not important. Most people do not automate any portion of the production of the annual holiday letter to friends and family, and if they do, they want to do it using a mail merge feature with a user interface. Users want applications and are concerned with the interface. Programmers want to create applications, often custom one-offs to save ourselves time, and we want a good API.

    Much of the tension in discussions of user interfaces arises when someone doesn't understand or forgets this distinction, or decides that one is unimportant. How many Slashdot users have grumbled about losing a Unix (substitute your favorite flavor) workstation in favor of a Windows box because somebody made a blanket statement that users' productivity is better under Windows? Hey, it's no secret. For users, people who never code, never even script, ubiquitous GUIs improve productivity. Is my mother going to remember commands for a CLI? Only under duress. But anything that comes without an API hampers programmers in exactly what we do. I'm not interested in pointing and clicking to produce a bug report for my boss each week. I want to write a script that queries the database, generates and formats the report and e-mails it to all the interested parties. If I can get it to query our source control to find out the state of the tracks we are using for the fixes to those bugs, so much the better. I want to embed my knowledge of how to solve that problem into a script and not think about it any more.

    The real question is whose productivity you are measuring and on what kind of tasks. GUIs bring up the low end and the average user's productivity. Lest we ever forget, that is a good thing. And they improve productivity on large numbers of infrequently and irregularly executed tasks. That too is a good thing. But they are not the most efficient means of doing everything.

    If clicking widgets were the most efficient interface around, why would anyone have keyboards anymore? As I finish this comment, the thought has occurred to me that my mental state is different when I am typing the text than when I am dealing with the other issues involved in submitting it. As I type the words, I am thinking only of my words. As a touch-typist, the interface of the keyboard has disappeared from my conscious awareness through years of practice. I have never achieved that state of direct interaction through a pure GUI.
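    The weekly-report scenario described above can be sketched concretely. Everything here is hypothetical: the `bugs` table, its columns, and the report format are invented for the example, and the mailing step is left as a stub rather than pretending to be a real mailer. The point is only that the whole task collapses into one command once it is scripted.

```python
# Toy sketch of scripted reporting: query a database, format the result,
# and hand it off for mailing -- no pointing and clicking required.
import sqlite3

def build_report(conn):
    rows = conn.execute(
        "SELECT id, status FROM bugs ORDER BY id").fetchall()
    lines = ["Weekly bug report", "-" * 17]
    lines += [f"#{bug_id}: {status}" for bug_id, status in rows]
    return "\n".join(lines)

# An in-memory database stands in for the real bug tracker.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bugs (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO bugs VALUES (?, ?)",
                 [(101, "open"), (102, "fixed")])

report = build_report(conn)
# mail(report, to=interested_parties)  # stub: plug in any real mailer here
```

    Run from cron, this is the "change the tape each morning" level of effort the comment is after.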
  • With the exception of the universal undo, it sounds a lot like the interface for the Palm OS. No filesystem, the same widgets do the same thing, keep it extremely simple. In short, it's simply obvious what to do as soon as you turn the thing on the first time.
  • the most inhumane interface ever designed, complete with MDI 'window within a window' and toolbars that so aggressively attempt to dock with other stuff that it is impossible to get any work done. I wouldn't trust an ex-microsoft employee (save Tog, who criticizes them frequently) as far as I could throw them when it comes to any interface design issue. And yes, this does go for M$ programmers who now work at Ximian.
  • M$ Windows is one of the worst things that could have ever happened to GUI design. Microsoft has allowed so much DOS to pollute the Windows UI that what you have is a complete breakdown of the whole "user is in control" file/folder system the mac had. I think many geeks who say "we need to get rid of files and folders. They're too hard for normal people to understand" fail to take this into account.

    On the mac, the user could drag a folder with an application to their desktop. We're not talking shortcut, we're talking actual program. If they needed to reinstall the OS, they could drag a folder with an application inside it to a zip disk, re-format the main HD (and give it a name more intelligible than a drive letter), reinstall OS, and then drag the folder with the program from the zip disk to wherever the hell they wanted to. Congratulations, program is now installed. To delete program, drag folder with app to trash and empty. There were no special names that were appended to the regular name of the file (i.e. no filename extensions). The file was simply whatever the user wanted to name it--they could tell what type of file it is by the LATFI method (Look At The Fine Icon). To install a driver (aka extension), drag extension file to extensions folder. To deinstall, drag to trash and empty. And above all, all files, including system files, had "Plain Damn English Names"(tm). Files and folders were easy to understand simply because there wasn't anything complex that one couldn't understand. The macintosh was the most perfect concrete abstraction anyone has ever designed.

    Unfortunately, the redmond morons were extremely unwilling, for technical as well as religious reasons, to make sure that good ole CLI DOS didn't contaminate windows. They didn't design the GUI from the ground up as if the command line never existed (like Apple did). They simply made DOS a little bit easier to understand. And they took a simple, concrete abstraction like the Macintosh and made it abstractly concrete. On windows, you have a desktop, technically, but often when you drag stuff to it, you're asked if you want to create a "shortcut"--it tries to discourage you from putting the real thing there. So the desktop as a container breaks consistency with the folder as a container-type object. "You can put anything in a folder, but the desktop? No that's different." Then there's the matter of "My Computer". Your partitions do not sit directly on the desktop. You have to go through this strange layer called "My Computer" that isn't quite a folder and isn't quite a file. Once inside the "My Computer" limbo, you have the drives/partitions themselves, which cannot be given any name you want. You can sort of give an alias, but you must always see c: and a:. Then you've got "Special" folders with "Special" abilities that break consistency with the way that normal user-created folders act. There's "My Documents", which has the weird property of being the first place the file dialog warps to. Then there's "Control panels" and "Dial-up networking" and "Printers" folders, which exist outside any known location on your drives. These folders really don't like having stuff moved in and out of them like the regular folders do, and in that way, they too break consistency. To add something to a folder, the user simply drags it to the folder and drops it. But the user can't (with most Windows software) drag a folder with an application inside from a CD-ROM to wherever the hell they want to put it. They have to go to "Add new software", which many times will put the program into the specially designated "Program Files" folder--yet another folder with strange and unusual properties. And finally, there's system files. Ordinary files and folders the user names can be up to 255 letters of Plain Damn English. But files such as the ones in the Windows folder and those distributed with most 3rd party software--those are just plain 8-letter gobbledygook. With all this needless complexity that's in windows, with all these rules to exceptions to exceptions to rules, is it really any wonder that M$ users face such a tough time navigating through the file/folder system?

    Now maybe the concept of the file/folder is a little bit outdated. Jef Raskin certainly seems to think so. I kind of agree with him. But the fact of the matter is that any flaws the mac desktop metaphor had were made 100 times worse by Microsoft's incompetence at designing user interfaces.
  • by Fjord (99230) on Thursday May 17, 2001 @07:26AM (#216548) Homepage Journal
    Is it "human interface" or "humane interface"? They mean very different things.
  • And it's the problem because it's usually added on, not designed in.

    The original Mac had a reasonable, but limited, approach to system administration:

    • Applications can be found by the system wherever they are.
    • Any persistent state an application has goes in a document in the "Preferences" folder. Deleting an item in that folder returns the preferences to the default.
    • Intercommunication between applications is limited to the Clipboard.

    This was limited, but explainable. No "Registry" or "System environment variable" crap.

    We need a theoretical basis for system administration. And the Linux crowd will never get it right.

  • If anyone on slashdot hasn't read this, they probably should:

    In the beginning was the command line:
    http://www-classic.be.com/users/cryptonomicon/beginning_print.html [be.com]

    It's an interesting look at computers and their users. It goes along with what you say, in general, and elaborates on your statements even more.


  • So we all naturally just plain suck.

    Man that sucks
  • by timbu2 (128121) on Thursday May 17, 2001 @07:26AM (#216563) Homepage Journal

    I liked the book a lot because it focused on making the human-machine interface more efficient. I pretty much hate GUIs that force you to jump through umpteen dialogs to configure something that should take 5 or 6 keystrokes, and Raskin seems to understand this.

    I even had my wife, a non-programmer, read the section on modality, which has greatly enhanced her ability to turn on the TV and actually play the TiVo or DVD successfully without calling me.

    After reading the book I am even more rabid about my adoration of ViM [vim.org]

    One problem ... in the book he talks a lot about products like this Canon word processor that didn't seem like commercial successes. They may have been "humane" or even efficient, but no one bought them. What good is that?

  • All modes generate user errors

    I don't know how extreme he is on this point, but my sense is he takes it too far. Modes in some form are integral to our interactions with machines, people, and the world.

    If we forbid modes, a TV remote control must have an on button and an off button. Nobody wants this, because it's much harder to remember which button is which, than it is to observe that the TV is off and deduce that the button will thus turn it on.

    The system should always react in the same way to a command.

    Very few things in the real world always react the same way to the same command. You don't behave the same way in the library as you would at a friend's home. Your car responds much better to the gas pedal when it's in drive than in neutral. You read people's moods before deciding how to communicate with them.

    People can observe and remember a certain amount of context, and tune their behaviors accordingly. It's frequently convenient to enter a mode, because you can forget about everything not appropriate to that mode. This is just as applicable to computers as to the real world--when we perform different activities, we enter different mindsets and adopt different expectations. Computer interfaces should take advantage of this human capability (with appropriate restraint, of course).

  • by MasterOfMuppets (144808) on Thursday May 17, 2001 @07:26AM (#216567) Homepage
    ..of course "Unix" backwards is "Xinu", which is awfully close to "Xenu" [xenu.net], super being. Ask your local Scientologist if you're not sure.
    The Master Of Muppets,
  • IMHO MacOS X is the best of both worlds -- bulletproof and powerful underpinnings covered with a cutesy interface. That way it can be as simple or as powerful as you need it.

    (And anyone who ridicules MacOS classic as hopelessly dumbed down should look into AppleScript -- scriptable applications are pretty common, and you can control them like marionettes. The only reason that Unix doesn't have that kind of fine-grained control is that Unix shell commands are (or were) pretty fine-grained themselves, but that's why we have sockets and SysV IPC...)

    /Brian
  • by thewrongtrousers (160539) on Thursday May 17, 2001 @07:34AM (#216569)
    Slashdot has discussed Raskin before with disappointing results.
    Instead of trying to understand some of the concepts that may, at first, sound strange - his concepts are ridiculed and flamed out of hand.
    Flamethrowers grab a little bit of text, twist it - without trying to really comprehend it - and proclaim Raskin an idiot. A shame, because Raskin has some *very* good ideas that he generally supports quite well.
    The only positive from what's going to be a large amount of flames is that Raskin should take some solace that all visionaries are subject to much abuse. Most people are unwilling to give up or reconsider what they think they "know" as truths, often to their own detriment.
  • "On the other hand most of us run into trouble when trying to study calculus at the same time we're hitting on a sexy lady (make that a handsome man for those 5% of ladies here at Slashdot, or some sort of hermaphrodite freak for those 11% with doubts about their sex)."

    Interestingly, this strange, awkward, stale moment pulled me out of the article long enough to cause me to forget what point the author was trying to make.

    Let's rewrite a later sentence:

    Of course, this was no big deal, but I had to take my mind off reading the sentence to figure out what had happened, just for a second, but long enough to derail my thoughts, and that derailing should never happen.
    Maybe these rules for interface design apply equally well to good writing for a broad audience, or to attempts at "humor" in general. ;)



  • by ortholattice (175065) on Thursday May 17, 2001 @08:05AM (#216574)
    ...Fitts' law (the time it takes for you to hit a GUI button is a function of the distance from the cursor and the size of the button)...

    This is the first I heard that this law had a name, but one thing I've wondered about is why most GUI editors have scroll bars on the right and not the left, where most work is done. E.g. for cut-and-paste you have to go all the way to the left to select and all the way to the right to move.

  • You should never have to think about what way to do something, there should always be one, and only one, obvious way. Offering multiple ways of doing the same thing makes the user think about the interface instead of the actual work that should be done. At the very least this slows down the process of making the interface habitual.

    I wholeheartedly disagree. I like to have multiple ways to do things. I try them all and figure out the one that suits me best, then make it a habit.

    Someone else's idea of "the most natural way to do x" often isn't mine. I guess that's why I always set custom keys in games, use emacs, and think Perl is fun to hack around with.

  • I've used a hammer for years, a screwdriver, too.

    Guess I'm an idiot for not taking time out of my busy schedule to explore all the things my hammer and screwdriver can do.

    And I only drive my car at the speed limit and on the road. I'm an idiot because I refuse to learn how well it does off road or at 150mph.

    A computer is a TOOL. Tools are meant to ease work or accomplish something. The point should not be to use the tool but to do something productive with it.

    I don't care how my hammer works. I don't spend hours studying the forces to optimize the blows to drive the nail in a tenth of a second faster.

    So, pray tell, why should my lawyer buddy understand what a symlink is and why it's better than a shortcut in Windows when all he wants to do is write a cease-and-desist letter? Sure, he's refusing to learn, because he deems it useless knowledge. He doesn't care about the particulars of why his car pings. He just wants his mechanic to fix it.

    Please, don't be arrogant about computers and overstate their importance in the world. They're just a tool. Don't you be one, too.
  • The general idea is good (especially global undo/redo! Imagine being able to undo the last time you stupidly walked right in front of a rocket!).. I see some problems with it though..
    firstly, not all applications can use the same interface. Period. Can you imagine trying to play Quake using an MS Word (or emacs.. or vi.. or pretty much anything not remotely related to 3d navigation) interface? Or for that matter, trying to type out that document the boss wants in 20 minutes if you have to use a mouse/keyboard combination to pick letters from your latest Quake/text editor mix?.. it just can't be done.. it's like trying to create a car using the "interface" of a pair of rollerskates.
    secondly.. modes.. I agree that modes can be a pain in the @$$ (especially when Word decides to change modes for you when you don't want it to).. but really.. how are you going to get around this?.. if you want to allow bold/italic/underscored/different font/etc text, you need some way of specifying what you want. You obviously can't have an enormous keyboard with a set of keys for bold, another set for italic, etc (not to mention that that would likely be more confusing and distracting than accidentally bolding some text).. that leaves basically two options.. don't allow bold (unlikely), or have the text editor do it all for you.. and we've all seen that THAT doesn't work very well these days.. (maybe when they get direct-brain-control working and we can just think it the way we want:)
    thirdly is legal.. MS has already attempted to meld things together, and found themselves slapped with a nice big antitrust lawsuit (not that I don't agree with that, but the point is there).. so we know this can't be done within one company (at least, not one that has any reasonable sized portion of the market share, and smaller companies don't have the funds for the R&D needed, I'd guess).. open source/free software might be able to have a go at it here, but I doubt anyone would bother on any sort of large scale :P..
    oh.. and about that bit about removing the file structures and stuff from the user view.. uhmm.. no.. a user might not care if the extension is ".rtf" or ".doc", but they DO care where their files are put.. having a billion files in the same place is NOT cool.. that's why we invented directories and libraries and whatnot in the first place! organization!
  • by sv0f (197289) on Thursday May 17, 2001 @09:03AM (#216586)
    More precisely (although still not perfectly), the time it takes to move a distance D to a target of size S is proportional to the logarithm of D/S. Academic HCI types make a big deal about Fitts' law and other empirical regularities of thinking.

    Another is Hick's law, which (roughly) states that the time to choose between a set of N alternatives (e.g., in a menu) is proportional to the information-theoretic entropy of the decision. If the N choices are equally probable, the entropy is the logarithm of N.

    Academic HCI and the application of Fitts' and Hick's laws to this domain begin with Card, Moran, and Newell's 1983 book "The Psychology of Human-Computer Interaction". I recommend chapter 2 for those particularly interested in the psychological basis of their recommendations. This is the book that introduced the GOMS modeling technique that Raskin covers as well.

    Personally, I've never found much insight in this line of thinking about HCI. Knowing this stuff does not make you a good designer of computer interfaces. Artistic flair and empathy for users plays a crucial role that is not addressed in this tradition, and perhaps not addressable at all within mechanistic approaches to cognition. All of this is IMHO, of course.

    Card and Moran were at Xerox PARC in the late 1970s and early 1980s, I believe, which is where Raskin was before Apple hired him to develop the machine that would be taken over by Jobs and become the Macintosh.
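    Both laws are easy to play with numerically. In the sketch below the constants a and b are made up (in practice they are fitted to measured user data), so this shows only the shape of the formulas, not a calibrated model; the Fitts' term uses the common Shannon formulation log2(D/S + 1).

```python
# Toy illustration of Fitts' and Hick's laws with invented constants.
from math import log2

def fitts_time(distance, size, a=0.1, b=0.2):
    # Shannon formulation: movement time grows with log2(D/S + 1).
    return a + b * log2(distance / size + 1)

def hick_time(n_choices, a=0.1, b=0.15):
    # Equally probable choices: decision time grows with log2(N).
    return a + b * log2(n_choices)

# Doubling a target's size buys back as much time as halving its distance,
# since only the ratio D/S matters:
t_small = fitts_time(distance=400, size=20)
t_big   = fitts_time(distance=400, size=40)
```

    This ratio property is why screen-edge targets (effectively infinite size along one axis) are so fast to hit.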
  • "The more conventional wisdom includes noting that the interface should be visible. "

    I wonder how this works when you are attempting to make GUIs accessible. It's very hard to provide this sort of utility for visually impaired people, as speech synthesis comes only one bit at a time. You cannot listen in parallel.

    "That is one of the downsides of the otherwise efficient command line interface"

    Unless you hit tab of course.

    Phil

  • Because the only friggin' intuitive interface is the nipple.

    BTW, who said that originally?

  • Raskin has some curmudgeon in him. Just read this page on what Raskin thinks we should do with the word "intuitive [asktog.com]".

    Unlike Tog, who seems to think that anything the Mac interface does is The Right Way To Do It, Raskin appears to take some influence from actual experimentation.

    On the other hand, going hard-over against modal interfaces seems a little odd. As far as I know, only one study has ever been done on this sort of thing: "A Comparative Study of Moded and Modeless Text Editing by Experienced Editor Users", by Poller, M.F., Garter, S.K., appearing in Proceedings of CHI '83, pp 166-170.

    One interesting conclusion from that study:

    Moded errors do not seem to be a problem for experienced vi users. The vi group made few moded errors, and those few were rapidly corrected. Furthermore, modeless editing may not totally avoid moded-type errors, since the emacs group made errors that were similar in nature to the vi moded errors.
  • I haven't read the book, of course, but interface designers almost always have to take advantage of modality. Even in the "simple" case of driving an automobile, the accelerator has a "modal" effect on the behavior of the vehicle, depending on which gear is selected. In computer interface design, the most fundamentally modal input device is the keyboard. In a windowing system, it has dramatically different effects depending on which window is selected.

    The important item to keep in mind is the number of modalities and the complexity of their interactions. Keeping modalities obvious and intuitive will keep most users out of trouble. (e.g. a tiny little "Ovwrt" isn't always an obvious indicator that you've left Insert mode). Then all you have to do is figure out what's "intuitive" for your users :-)

    It's good to see a fresh perspective on this issue. I'm tired of reading stodgy texts from 30 years ago about how it would be a great advance to use a 'mouse'.

    Over the past 20 years, though, interface design has been pretty much dictated by OS vendors such as Apple and Microsoft. This trend is coming full circle with the proliferation of applications to which various 'skins' can be applied, producing an entirely new look. This trend has appeared, I believe, as a direct result of the advent of dynamic layout methodologies similar to HTML. Let's all remember it wasn't always so.

    Apple was the first to implement draconian interface specifications for 3rd party applications (since 1984). Microsoft has generally left it more to the compiler vendors, although all design seemed to sprout from Microsoft anyway...

    It's thrilling to see new thought in this very important area of the field.

    --CTH

    --
  • ...is "The Inmates are Running the Asylum" by Alan Cooper, ISBN 0672316498. You can probably find it cheap, too... got my copy at Borders for about $4.00, hardcover even.
  • ...who said that "Unix is backwards" [slashdot.org] (or at least is alleged to have said that). I think it should come as no surprise that he takes issue with a lot of Unix truisms...

  • I cannot help thinking that the author is using the same logic that brought us the language Esperanto. (One way to do it, complete consistency, very few rules, etc.) I think the problem is that humans themselves are not really that way.

    I suppose it is a criticism of the English language, just as it is of, say, Perl, that in different contexts the same word means different things. Many might argue that this is not a defect and that people intuitively use English despite all its modes because they are so well practiced in it.

    This is not to say that computers should not be designed to be easier, merely that there is perhaps some utility in having different programs function differently. And even having one program respond differently to the same input at different times. For instance vi and emacs (and other programs for editing code) respond differently to a carriage return depending on the situation, by tabbing to a different point in the following line. Though I could disable this feature, I do not since it is so useful.
  • You know, I hate using paper and pencil. It's so modal. If I want to erase something, I have to either turn my pencil upside down, or drop the pencil and grab an eraser. Oh, and painting is even worse! I constantly have to change modes by changing instruments. Why can't my palette knife do everything?

    The lesson here is don't give users everything they think they want. In my experience, users don't even really know what they want. For applications, etc., focus on use cases: how people want to use the system. Design the interface to make those cases easy to do. Basically, goal oriented interfaces (which is what that VB guy who wrote Inmates Running the Asylum is a big fan of). Unfortunately, Microsoft took his ideas and created "wizards" which are the most useless goal-oriented interfaces I've ever had to use.

    Even for hardware, like VCRs, focusing on how users want to use the system would make for more efficient interfaces. Users really just want to say, "Record 'Manimal' at 8 o'clock on NBC" and have their VCR know what the heck they are talking about. They don't want to go into their TV Guide and type in numbers to a VCR+. Another option is the way I think TiVo does it, where you have an on-line list of programs, you select the program you want, and you indicate you want to record it. It's better than working through an on-screen interface to try to tell the VCR what time to start and stop recording.

    Another example of a goal-oriented interface is the Wacom Intuos digitizing tablets. You can get three extra pens, each with its own unique digital ID, and an airbrush pen (which looks and feels like an airbrush). Each pen can then be assigned a unique function in Photoshop (or another app). This way, the mode change becomes as obvious as picking up a different pen or picking up the airbrush. The tablet knows you've chosen the airbrush tool, so it automatically changes to airbrushing. You take the first pen, and it knows you want to draw. You change your mind, turn the pen around to the "eraser" on the end, and voila, you're erasing. This is an intuitive mode change that I think is far more useful than trying to make a modeless interface.

    The point here is that interfaces must be engineered toward what people want to do with them, and that they should behave in a consistent way. For example, if you have modes, then make sure there is a standard way in each mode to get out of it (like the ESCAPE key). Users can learn arbitrary interfaces, as long as they are consistent and geared toward helping them do what they want to do.
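
    The consistent-escape rule can be sketched as a tiny state machine. Everything here (the mode names, the key handling) is hypothetical; the one point being illustrated is that every mode honors the same way out:

```python
# A minimal sketch of a modal interface where every mode honors the
# same escape convention, checked before any mode-specific handling.

class ModalUI:
    def __init__(self):
        self.mode = "command"          # start in a default mode

    def handle_key(self, key):
        if key == "ESCAPE":            # one universal exit, in every mode
            self.mode = "command"
            return "back to command mode"
        if self.mode == "command" and key == "i":
            self.mode = "insert"
            return "entered insert mode"
        if self.mode == "insert":
            return f"typed {key!r}"
        return "ignored"

ui = ModalUI()
ui.handle_key("i")            # enter insert mode
typed = ui.handle_key("x")    # mode-specific behavior
escaped = ui.handle_key("ESCAPE")
```

    Because the escape check comes first, no mode can accidentally shadow the exit key, which is exactly the consistency guarantee the comment asks for.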

  • Perhaps it's because there is no one single right way to make an interface, but there are many wrong ways. Software producers continue to make the same mistakes about what they think is user-friendly (yes, including GNOME and KDE by following the WIMP example), but Raskin shows that many of the usual assumptions are wrong (pretty much everything we currently understand about user interfaces, e.g. "icons are user friendly").

    I don't think right and wrong are adequate categories here. In my opinion, the main problems in interface design are these: the rules are not strict and absolute, and there are so many rules and trade-offs. As a result, most interfaces are a compromise between all those rules; they have to be. The interface designer's job is a multi-dimensional optimization problem.

    Take colors, for example. Designing a color scheme is a really hard task. One has to watch for side effects, like brightness and contrast. One has to consider the possibility of color-blind users. Some combinations of colors strongly suggest a metaphor to the user and should not be used if this is not intended; this applies notably to red-yellow-green, which is likely to be associated with traffic lights -- in the Western world, that is. So cultural differences have to be taken into consideration, too.
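
    Some of these checks can at least be made mechanical. As one example, here is a sketch of a contrast check using the relative-luminance and contrast-ratio formulas from the W3C's accessibility guidelines; the helper names and sample colors are my own:

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB color (0-255 channels), using the
    channel linearization and weights from the W3C/WCAG definition."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, ranging from 1:1 up to 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

black_on_white = contrast_ratio((0, 0, 0), (255, 255, 255))       # maximal
grey_on_white = contrast_ratio((200, 200, 200), (255, 255, 255))  # weak
```

    A formula like this catches the brightness/contrast side effects automatically, but it says nothing about metaphors or cultural associations, which still need a human eye.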

    Software developers tend to simplify and generalize the rules. That's what they are used to from programming. And they seek recipes and patterns. But there are no recipes and not many patterns for designing the user interfaces of sophisticated applications. Interface design needs skill, experience, the ability to view your own creation through another's eyes, and a will of iron to take every single problem of one's trial users seriously. And, of course, managers who do not sacrifice usability to deadlines and project plans.

  • If we forbid modes, a TV remote control must have an on button and an off button.

    On page 180 of Raskin's book you will find a case study showing this is not necessarily true. Raskin's example is about saving workplace state to a floppy disk, and he had the same considerations about modes. After describing a one-button solution, he concludes that the modality in this situation depends on the user's model.

    Applied to the remote control: if one thinks of an on function and an off function behind the same button, there are modes. But if we view it as, for example, a general "change activity state" function (in Raskin's book it's a "do the right thing with the disk" function), there are no modes.

    An interesting proposal I remember from David Gelernter (I can't remember the book) was to store data in "tuple spaces". Every time you create data, the computer tags it with everything it knows about that data: the contents, the time you created it, what other data you were using, and so on.

    You request data through abstract queries like "what was that cartoon I was looking at one evening about Windows 95?", or "get me everything about the Wilson project".

    Unfortunately it would require strong AI to achieve.
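
    Short of strong AI, the tag-and-match half of the idea is simple to sketch. Below is a toy associative store in the spirit of Linda-style tuple spaces, using the classic out/rd operation names; the tags and file names are invented, and the natural-language queries above are exactly the part this does not attempt:

```python
# A toy associative store: every item is saved with whatever metadata
# the system knows about it, and is retrieved by matching attributes
# rather than by navigating a path.

class TupleSpace:
    def __init__(self):
        self.items = []

    def out(self, content, **tags):
        """Store content along with its descriptive tags."""
        self.items.append({"content": content, **tags})

    def rd(self, **query):
        """Return every item whose tags match all query attributes."""
        return [t["content"] for t in self.items
                if all(t.get(k) == v for k, v in query.items())]

space = TupleSpace()
space.out("win95_cartoon.gif", kind="cartoon", topic="Windows 95",
          when="evening")
space.out("budget.xls", project="Wilson")
space.out("notes.txt", project="Wilson", when="morning")

wilson = space.rd(project="Wilson")               # "everything about Wilson"
cartoons = space.rd(kind="cartoon", when="evening")
```

    Turning "what was that cartoon I was looking at one evening about Windows 95?" into `rd(kind="cartoon", when="evening", topic="Windows 95")` is the hard, AI-complete step the comment rightly flags.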

  • Yep, and in the box it says: "Title: The Human Interface".
  • I might buy the book just to see what he means by "no file hierarchies". I've read about this book before, and that was what stuck out the last time as well.

    How would one organize files? Ok, say you don't have "files" but just documents. Is the device only good for editing one at a time? Or is he suggesting a replacement by a more random-access structure, so that you need to give it keywords to find the file you're looking for?

    Categories and subcategories map well to the real world. If I have only a few files (documents, whatever) they can be in one place. But if I have many, then I'll want them to be organized somehow, and hierarchically makes a lot of sense for that.

    My fear is that he's advocating a "sufficiently advanced technology" interface that somehow magically finds the file that you want based on your limited, possibly ambiguous query. If anyone has read the book and knows more specifically what he's advocating, please reply to this.
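
    For what it's worth, the keyword-lookup structure speculated about here needs no magic at all; a plain inverted index will do, and an ambiguous query simply returns every candidate rather than guessing. A sketch (all names invented):

```python
from collections import defaultdict

class KeywordIndex:
    """Documents retrieved by keyword instead of by hierarchical path."""

    def __init__(self):
        self.index = defaultdict(set)   # keyword -> set of documents

    def add(self, doc, keywords):
        for kw in keywords:
            self.index[kw.lower()].add(doc)

    def find(self, *keywords):
        """Documents matching ALL given keywords (set intersection)."""
        sets = [self.index.get(kw.lower(), set()) for kw in keywords]
        return sorted(set.intersection(*sets)) if sets else []

idx = KeywordIndex()
idx.add("taxes-2001", ["taxes", "finance", "2001"])
idx.add("taxes-2000", ["taxes", "finance", "2000"])
idx.add("resume", ["job", "2001"])

ambiguous = idx.find("taxes")          # two candidates; the user chooses
precise = idx.find("taxes", "2001")    # adding a keyword narrows it down
```

    Whether this beats a hierarchy is the open question: a directory is really just one keyword per document, fixed in advance.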

    Tim

  • Some random thoughts in the spirit of this discussion: We often judge user interfaces by their feature sets. More features mean more power, and are better, right? I think not. I actually want my software to have a minimal feature set which is maximally robust.

    The idea behind building feature-rich software runs something like this: figure out what a program will generally be used for, and come up with a set of tasks -- use cases -- which are representative of what the majority of users will want to do with the software. Design the software around the use cases, using the ease and efficiency of each of the use cases as a yardstick for the program's success. If users demand more features, the market for the software expands, or the company simply has more resources to build up the software, then add more use cases and repeat.

    This works reasonably well, but it has a serious flaw: as the amount of functionality and thus the number of use cases go to infinity, the number of features goes to infinity. Think of M$ Word -- at a certain point, the wizard that will make multi-column address labels is useless to me, because there are so many damned wizards I don't know where to find it.

    What software really should try to do is maximize robustness while minimizing the feature set that achieves that functionality. If you want your software to achieve a wider base of functionality, you have to make your interface's features more generalized, and more expressive. As the amount of functionality goes to infinity, the feature set stays bounded, and just approaches some sort of Turing-complete programming language.

    So when designing software, don't ask "What features should we add?" Ask "What functionality can we add, and what features can we generalize in the hope of removing others?"

    I realize that this is all very idealistic, but it seems like a guiding principle that could keep software bloat in check.
    One of the reasons people love UN*X-style OSes runs along with what both you and I are talking about: at the command line, you have a set of (mostly) simple and general tools at your disposal. Through basic building blocks -- pipe, redirect, grep, sed, awk, and so forth -- there's almost always a way to tie things together in the way that you need to. This is one of the advantages of a small set of generalized features: it's really good at doing things its designers didn't anticipate.

    There is certainly a glut of command-line tools, though. The problem with the 1000 UN*X apps is that
    • they don't all have one feature, because they all have to have input and output handling options so they can play with each other, and
    • there's a lot of feature overlap, because roles overlap and it's often necessary to make a whole new command in order to change just one feature.

    But I suppose this happy mess is really in the nature of the Darwinian free-for-all of UN*X.

    I'd really like to see a shell built on a single powerful set of list and pure function operations that would neatly generalize pipes, redirects, backquotes and lots of command-specific features into a single, more general scheme. It's silly that I have to know about the output format of ps to kill all processes with some specific thing in the command line:

    kill `ps -ef | grep nuj | cut -c9-15`

    I should really be able to say something like:

    kill (ps ? (#.cmd==nuj)).pid

    ...where ps just outputs some list of named hashes, and I don't have to play ASCII games or know anything about the format of its output.
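
    This "structured ps" wish can be approximated today by parsing ps output into records once and then filtering with ordinary predicates. A sketch follows; the helper names are invented, the ps invocation uses the standard POSIX `-o pid=,comm=` form, and the actual kill step is deliberately left out:

```python
import subprocess

def parse_ps(text):
    """Turn `ps -e -o pid=,comm=` output into [{'pid': ..., 'cmd': ...}]."""
    records = []
    for line in text.splitlines():
        parts = line.split(None, 1)
        if len(parts) == 2:
            records.append({"pid": int(parts[0]), "cmd": parts[1]})
    return records

def pids_matching(records, name):
    """The comment's `(ps ? (#.cmd==name)).pid`, as a list comprehension."""
    return [r["pid"] for r in records if r["cmd"] == name]

def run_ps():
    """Fetch the live process table as structured records."""
    out = subprocess.run(["ps", "-e", "-o", "pid=,comm="],
                         capture_output=True, text=True, check=True).stdout
    return parse_ps(out)

# Demonstrated on canned output so no live process table is needed:
sample = "  101 init\n  202 nuj\n  303 nuj\n  404 sshd\n"
targets = pids_matching(parse_ps(sample), "nuj")
```

    The character-position games (`cut -c9-15`) disappear entirely: once the output is records rather than text, the filter is just a predicate on a field.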

    -END OF PONTIFICATION-
  • by janpod66 (323734) on Thursday May 17, 2001 @10:38AM (#216621)
    Rather than revolutionary, I think this book describes pretty traditional ideas in UI design and proposes somewhat superficial applications of results from psychology and cognitive science. For example, while attention and automation matter, the implication that interfaces shouldn't be modal is unjustified: lots of things in the real world are modal, and people deal just fine with them. In fact, while modality may have some cost, it also has benefits, and whether it belongs in a good UI depends on the trade-offs. Similarly, Fitts's law and other experimental results are often used in UI design far beyond their actual significance.

    The really controversial idea, though, is to abandon applications altogether. There would be only the system, with its commands, and the users' content. Even file-hierarchies should vanish along with the applications, according to Raskin.

    I don't see anything controversial about that. There were several systems that mostly behaved that way before the Macintosh. The idea was to eventually move towards a system with persistent object store and direct manipulation of objects, eliminating the need for applications and allowing easy reuse among applications. Generally, the way that was implemented at the time (early 1980's) was via "workspaces" in which all data and objects lived, together with implementations in safe languages that allowed code to co-exist in a single address space.

    What killed this approach was, in fact, the Macintosh, later copied by Microsoft. Using Pascal and later C++ as its language made it very difficult for "applications" to safely coexist or to share data in memory. The Macintosh and Windows merely put a pretty face on a DOS-like system of files and applications.

    I'd still recommend reading Raskin's book. It does have historical significance, and it gives you a good idea of what mainstream thinking in UI design is. Raskin himself, of course, has contributed significantly to the state of the art and the body of knowledge he describes. There are some ideas in there that are probably not so widely known and that may be helpful.

    But don't turn off your brain while reading it, and don't take it as gospel truth. UI design is not a science, and many of the connections to experimental results are tenuous at best. And a lot more goes into making a UI successful than how quickly a user can hit a button. If anybody really knew how to design a UI that is significantly better in practice than what we have, it would be obvious from their development of highly distinctive, novel systems and applications.

  • I have not actually read the book, but I will attempt to get around to it as soon as I can.

    Computer interfaces really haven't changed in at least the last ten years. They have gotten prettier, and maybe faster, but there has been no fundamental change in all that time. In fact, as far as I can see, they have changed only twice in computing history (let me know if I am wrong; Lord knows I am not omniscient): from simple printouts/registers to character-based interfaces, then from character-based interfaces to our present GUIs.

    I can't see this lack of change as a good thing. In an industry where rapid change is standard, it surprises me that interfaces have remained stagnant. So if this book can foster some original thinking, and perhaps some newer, more efficient designs for interfaces, we should probably take it seriously, no matter how controversial the content.

    As always just my 2c
  • ..until you read the book.
    I personally purchased this little ditty at Borders not four nights ago and flew through it. What Raskin does, and why in the end you cannot help but agree with him, is walk up to each concept he presents and explain clearly: yes, UI designers often think this way, but I think they are forgetting x and y, and should be thinking of z. Very similar in voice to Aquinas's Summa Theologica.
    What he points out, quite pointedly, is that engineers design the interfaces, not ordinary people. We engineers are a different breed, and when creating our interactions we tend more toward Neal Stephenson's essay "In the Beginning...", where every intimate interaction with the machine, as well as each step in the process, should be visible to us.
    However, as Raskin states, this rarely translates into an interface that makes its current state obvious to its user; instead, engineers often assume the user is a vast idiot.
    Unlike Tog, who claims to be the definitive source on interfaces, Raskin admits that his ideas are difficult to actually place in an interface, and seems instead to prefer that the book be used as a guideline for meeting halfway.
    Raskin also explains, in very simple terms, how the human mind is thought to work, and asks: if these fundamentals are true, then why does working with a computer feel like x when, on reflection, it should feel like z?
    For instance, perhaps the most harped-upon item in the work is the idea of modes. Raskin believes that the human brain takes around ten seconds to switch gears between two tasks, and that modes actually slow a user down. He seems to believe there should be only one mode, 'edit', giving the user complete control of all aspects of the document at any time. Most UI people believe modes save users from themselves, allowing certain changes to a document only in mode x, which you must click 'OK' to enter. Raskin, however, resolves this argument by simply calling for a universal Undo/Redo button (as found on the Canon Cat), the inability to delete anything permanently, and automatic saving of documents.
    One of the more intricate ideas, and one everyone seems to wonder about, is Raskin's proposal to remove file hierarchies. Here's how Raskin believes it should work: within the machine there are only documents. The system itself is invisible to the user. You don't launch applications or move around system extensions; you just work. With that removed, you then look upon the hard drive as an infinitely long scroll, where documents simply occupy blocks of space and can grow within their space indefinitely. While Raskin never explains exactly how this will work as hard drives become radically more crammed, I don't think it was ever his intention to explain exactly how to do anything.
    What many reviewers miss is Raskin's assertion that his ideas, the sum of this book, are just that: ideas and guidelines. Basically the book is a statement of "in an ideal world, this would happen." However, he does provide several methods for testing current interfaces, as well as examining ways to improve them now.
    I recommend it as shelf material for anyone who works day-to-day in UI. At its best, the book is a profound guide to the ideal that computers, or any device for that matter, should be less complicated, more thought-out structures.
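
    As a footnote to the universal Undo/Redo idea mentioned above: the pairing of undo history with "nothing is ever deleted permanently" is easy to sketch. This is purely illustrative, not Raskin's or the Canon Cat's actual mechanism:

```python
# A minimal document model in which "delete" only archives (nothing is
# ever lost) and every edit is recoverable through undo/redo stacks.

class Document:
    def __init__(self):
        self.text = ""
        self.archive = []      # "deleted" content is retained here
        self.undo_stack = []
        self.redo_stack = []

    def _snapshot(self):
        """Record the current state before any change is made."""
        self.undo_stack.append((self.text, list(self.archive)))
        self.redo_stack.clear()

    def insert(self, s):
        self._snapshot()
        self.text += s

    def delete_all(self):
        self._snapshot()
        self.archive.append(self.text)   # archived, never destroyed
        self.text = ""

    def undo(self):
        if self.undo_stack:
            self.redo_stack.append((self.text, list(self.archive)))
            self.text, self.archive = self.undo_stack.pop()

    def redo(self):
        if self.redo_stack:
            self.undo_stack.append((self.text, list(self.archive)))
            self.text, self.archive = self.redo_stack.pop()

doc = Document()
doc.insert("hello")
doc.delete_all()    # text gone from view, but held in the archive
doc.undo()          # text restored, archive entry rolled back too
```

    With automatic saving layered on top, the user never faces a "do you want to save?" mode at all, which is exactly the trade Raskin proposes.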
