The Human Interface: New Directions for Interface Design
author | Jef Raskin |
pages | 233 |
publisher | Addison-Wesley |
rating | 9 |
reviewer | Torulf Jernström |
ISBN | 0-201-37937-6 |
summary | A thought-provoking book on how to design user interfaces. |
Introduction
The Humane Interface: New Directions for Interface Design, by Jef Raskin, the creator of the Macintosh project, provides an interesting read chock full of controversial ideas. As the book's full title suggests, Raskin focuses mainly on how things ideally should be made, rather than offering advice and recipes that can be immediately applied to problems in today's systems.
Don't Think!
The approach taken by Raskin stems from his definition of a humane interface: "An interface is humane if it is responsive to human needs and considerate of human frailties." In practice, this means that the interface he suggests is based on ergonomics and cognetics (psychology).
Basically, the idea is that we can consciously do only one thing well at a time. Most of us can walk, chew bubblegum, hold our bladder and speak (semi-intelligently) with a friend, all at the same time. This is because the only thing we are consciously doing (the only thing we are concentrating on) is the semi-intelligent babble. On the other hand, most of us run into trouble when trying to study calculus at the same time we're hitting on a sexy lady (make that a handsome man for those 5% of ladies here at Slashdot, or some sort of hermaphrodite freak for those 11% with doubts about their sex).
The point is that the one thing we're consciously working on should, with as few interruptions as possible, be the content we are producing with the computer, not the computer itself. That is, all interaction with the system should, after the initial learning phase, become habitual or automated, just like we can walk or drive a car without ever consciously thinking about it. This way we could maximize productivity and concentrate on the content of our work.
There's Only One Way to Do it
For commands to become habitual as quickly as possible some interface-guidelines are given. First of all, all modes (differing types of responses based on context) should be eliminated. The system should always react in the same way to a command. All modes generate user errors when the user isn't paying attention to what mode the system is currently in (and the user should not have to pay attention to the systems current mode, the user should only have to pay attention to the content he or she is currently producing). An example of mode error happened to me while I was writing this review, just a few lines up: I unintentionally left overwrite on when I thought I was in insert-mode and thus overwrote a word by mistake. Of course, this was no big deal, but I had to take my mind off formulating the sentence to figure out what had happened, just for a second, but long enough to derail my thoughts, and that derailing should never happen.
Another way to speed the transition to habitual use is monotony. You should never have to think about which way to do something; there should always be one, and only one, obvious way. Offering multiple ways of doing the same thing makes the user think about the interface instead of the actual work to be done. At the very least, it slows down the process of making the interface habitual.
Unorthodox Suggestions
There are of course a lot of other suggestions in the book, some expected, some very surprising and unorthodox. The more conventional wisdom includes noting that the interface should be visible. That is one of the downsides of the otherwise efficient command line interface: you cannot see the commands at your disposal just by looking at the interface. A method called GOMS, for evaluating keystroke efficiency, and Fitts' law (the time it takes to hit a GUI button is a function of the distance from the cursor to the button and of the button's size) are also among the less surprising ideas in the book.
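For reference, Fitts' law is usually written T = a + b log2(D/W + 1), where D is the distance to the target and W is its width. A quick sketch in Python of what that predicts; the constants a and b below are placeholders for illustration, since real values have to be fitted to measurements of an actual device and user:

    import math

    def fitts_time(distance, width, a=0.1, b=0.15):
        """Predicted time (in seconds) to hit a target of the given
        width at the given distance, per the Shannon form of Fitts'
        law: T = a + b * log2(D/W + 1). The defaults for a and b are
        made up; fit them from real pointing data."""
        return a + b * math.log2(distance / width + 1)

    # A big, close button beats a small, far one:
    print(fitts_time(distance=100, width=50))   # ~0.34 s
    print(fitts_time(distance=800, width=10))   # ~1.05 s

This is also part of why targets on screen edges are easy to hit: the cursor stops at the edge, which effectively makes the target much deeper.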
The more unorthodox suggestions include Raskin's proposal for a universal undo/redo function, not just in the different applications but in the system itself. The same gesture would always undo, no matter what application or setting you last touched.
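The book argues the principle rather than supplying code, but you can picture a system-wide undo as a single command history that every part of the system shares. A minimal sketch, with all names invented:

    class UndoStack:
        """One undo/redo history for the whole system: every command,
        from any 'application', is recorded in the same place, so one
        gesture always undoes the last thing that happened anywhere."""
        def __init__(self):
            self._done, self._undone = [], []

        def do(self, action, inverse):
            action()
            self._done.append((action, inverse))
            self._undone.clear()

        def undo(self):
            if self._done:
                action, inverse = self._done.pop()
                inverse()
                self._undone.append((action, inverse))

        def redo(self):
            if self._undone:
                action, inverse = self._undone.pop()
                action()
                self._done.append((action, inverse))

    # Any component registers its work with the same stack:
    system_undo = UndoStack()
    doc = []
    system_undo.do(lambda: doc.append("hello"), lambda: doc.pop())
    system_undo.undo()   # undoes the last edit, whichever component made it

The hard part, of course, is getting every piece of software to describe its actions as invertible commands in the first place.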
The really controversial idea, though, is to abandon applications altogether. There would be only the system, with its commands, and the users' content. Even file hierarchies should vanish along with the applications, according to Raskin.
Conclusions
The Humane Interface focuses on the principles of user interfaces and the theory behind them. The ideas presented in the book generally make sense once you consider the background and arguments Raskin presents for them, even if some seem very drastic when first encountered (I can hear Slashdotters firing up their flamethrowers after reading the end of the last paragraph). As mentioned before, the book does not provide many ready-to-use recipes. It does provide good insight into the background of user interfaces, which the reader can apply to the project at hand.
Some related ideas were discussed about a year ago on Slashdot. The Anti-Mac paper discussed then came to pretty much the opposite conclusions from the ones that Raskin presents (Raskin makes a case against separate beginner/expert interface levels). After reading both sides of the story, I'm inclined to believe more in Raskin's reasoning.
The only Open Source or Free Software the book mentions is Emacs, in a discussion about string searches. (The incremental model in Emacs is preferable to systems where the search does not begin until you click the search button.) I do, however, believe that the alternative interface models could be an opportunity for the open source community, and that Raskin's ideas are more likely to be implemented and tested in such an environment than by Microsoft greatly simplifying the thousands of commands that make up MS Office. I therefore warmly recommend this book to anyone doing software development, and I would love to see some of the ideas used in KDE or GNOME.
You can purchase this book at Fatbrain.
Re:This is the ... READ HIS STUFF! (Score:1)
This happens with the Find command on the Mac. When you call it up, it takes a few seconds to load, but while it is loading you can keep typing and your keystrokes are captured and seamlessly placed into the Find window's text box. This is one of the most convenient things I know of: not having to wait to state your query. Of course, if it didn't have to draw the damn Sherlock GUI this wouldn't be necessary... but it's been around since the days when calling up any application was slow (back in System 7.5). A very nice feature in this context.....
Re:This is the ... READ HIS STUFF! (Score:2)
Here's the kind of idea that would break GNOME or KDE away from the GUI pack (again, from JR's site):
Different Strokes for Different Folks (Score:5)
A lot of users (my mom, for example) use their computer for three things: reading email, surfing the web and word processing (Word). Now tell me why any of these three tasks require a user to think about "a file". Tell me why any of these three tasks require the user to think about "C:\My Documents\WorkStuff" (or "/home/foo/docs/workStuff", if you prefer).
These concepts are all hold-overs from an era where the people designing software were the only users of the software. If I'm a programmer, of course I'm going to design my shell so I can write shell scripts. Of course I'm going to give myself the ability to create complicated hierarchies -- that's how I think!
Now, we have a lot of users of software who are not programmers and geeks. If we care about them using technology, we need to think about the easiest thing for them. This doesn't mean we have to get rid of command lines; we just have to come up with something else for users who do not want (or need) command lines. This is not a zero-sum game. The growth and popularity of technology is a win for everyone.
To make this happen, a portion of the software community has to realize that they are designing software for people who are not programmers and who are not (gasp!) interested in technology for technology's sake. Some of us (but not all) need to get rid of the attitude exemplified by the following quote:
"Linux *is* user friendly. It's not idiot-friendly or fool-friendly!"
The majority of users are not "idiots" or "fools". Some are doctors, lawyers or artists who have chosen to concentrate in an area outside of computers. Saying they are idiots because they don't understand symbolic links is like an eye surgeon saying that a programmer is an idiot because he can't remove a cataract.
Re:Fitts' law (Score:2)
1) The control moves, but it's restricted to only one orientation.
2) The control is very narrow (in the orientation perpendicular to its movement).
Moving a vertical scrollbar for any significant distance requires moving the pointer in a nearly perfect vertical line. Try it and watch how the pointer wanders from left to right. To be usable, the scrollbar should therefore track the pointer, in order to accommodate its inevitable horizontal movement outside the scrollbar's area. Imagine trying to move a 1-pixel-wide scrollbar if it didn't track! (Or just try XTerm with Emulate3Buttons and a pencil-eraser-type mouse for a reasonable simulation!)
Things are a little different with a menu. When you move the pointer up and down to select a menu item, the "control" (in this case, the menu highlighting) does not need to track, because the menu itself is wide enough to accommodate any horizontal wander. Note, however, that the menu itself still tracks: when the mouse leaves its area, the highlighting disappears but the menu does not. Why? This behavior seems to exist to enable nested menus. Chasing down a deeply nested menu item requires moving the pointer in a series of long horizontal lines punctuated by steps. The menu should track so that even if you drift off of a menu, you can go back and continue from the last "step".
I believe that KDE and/or GNOME do NOT track nested menus, and this can be VERY frustrating.
Finally, when you click on a button, the button doesn't move, and neither does the pointer -- or at least it doesn't need to. Therefore no tracking is needed. On Windows the button correctly pops back up when the pointer leaves.
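For what it's worth, the "keep the submenu alive while the pointer is heading toward it" behavior can be sketched as a point-in-wedge test: stay open as long as the pointer remains inside the triangle between its last position and the submenu's near corners. This is only an illustration of one way it could work, not how Windows (or anything else) actually does it:

    def _cross(o, a, b):
        # z-component of the cross product (a - o) x (b - o)
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

    def heading_for_submenu(last, now, sub_top, sub_bottom):
        """True if the pointer moved from `last` into the wedge spanned
        by the submenu's near corners, i.e. it is plausibly on its way
        to the submenu, so don't close it yet. (Simplified: it ignores
        the degenerate case of moving directly away from the menu.)"""
        s1 = _cross(last, sub_top, now)
        s2 = _cross(last, sub_bottom, now)
        return (s1 <= 0 <= s2) or (s2 <= 0 <= s1)

    # Pointer at (10, 40); submenu's near edge spans (100, 0)-(100, 80):
    print(heading_for_submenu((10, 40), (30, 42), (100, 0), (100, 80)))  # True
    print(heading_for_submenu((10, 40), (30, 90), (100, 0), (100, 80)))  # False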
BTW, this is all off the top of my head, just from messing around on Windows...
Baby, you can drive my car (Score:2)
It *does* take a while-- a few months, perhaps, to become proficient. Cars are simple because they have 4 or 5 controls, not counting simple things like lights and air conditioning. But it takes a few months to learn how to drive.
Why is this? Could it be there is more to driving than just learning the function of 4 or 5 controls? Could it be that the interaction of other drivers fouls up the simple use of 4 or 5 controls?
If a simple something with a single purpose, such as a car, takes weeks or months to learn how to handle properly, why shouldn't we expect the *same* learning curve for a multipurpose device like the computer?
I don't expect my SO to know how to rebuild a carburetor. But at the same time, I *do* expect her to use the turn signals at the appropriate time, and to know how to obey speed limits. I also don't expect her to park her car in someone's bedroom.
Similarly, I expect her to know how to find a simple file she created last year, but managed to misplace. I expect her to know how to open photos, and email said photos to friends.
One day, when computers are sufficiently intelligent, we can simply talk to them as if they were Sigfrid von Shrink. Until that day, it is a mark of intellectual laziness to refuse to take the time to learn a general tool that is so important to our lives.
Re:Just read this myself (Score:1)
As stated in the review, people can only consciously concentrate on one thing at a time. They will be concentrating on the task they wish to perform in vi, rather than on the current text colour and by extension, the mode the program is in.
His specific example was of a study he did with experienced users of a particular CAD program. This program has several selection tools, all indicated by different, distinctive mouse pointers. Those are modes.
Users invariably made the same error repeatedly, even with experience: they did something with a specialized selection tool, then moved to select an object with the regular selection tool without switching to it first, even though they were looking right at the visual mode indicator the whole time.
The users were focussing on the task of selecting an object, not on what selection tool (mode) was currently enabled. Adding your visual cue to vi won't help any; people will still get it wrong because they won't be paying attention to the visual cue.
Besides, wouldn't that mess up syntax highlighting?
--
Change is inevitable.
Re:Let us have our computers back!!! (Score:2)
The gui monkeys have misunderstood something:
Hey, language is inefficient... let's go back to scratching pictures in the dirt or hieroglyphics on the wall in order to communicate.
The command line is a language. They are stuck on the "pictures are better than words" meme. Yeah, true, pictures are better -- until you learn to read, that is.
Visual stuff -- GUIs and the like -- is more *intuitive* in the same way that picture books are more intuitive to babies. That doesn't mean it is superior. This is why the best interfaces are a combination of GUI and language, just like the way you give children picture books while they are learning to read.
It's called "editing" (Score:1)
And as has been mentioned, the title in the summary box substitutes "Human" for "Humane".
Maybe you should borrow an idea from XP and practice pair publishing ...
Re:Fitts' law (Score:1)
I think that's Windows-only. I don't have a Mac at the moment, but my KDE (or any other X application) doesn't do this.
That feature annoys me a lot when working on Windows, just like the scrollbars popping back if you leave their area on any side.
Robert
OT: "Hermphroditic freak" was uncalled-for (Score:2)
You know, I'm willing to bet that hermaphrodites are well aware that people think of them as strange or weird without being called "freaks" in a large public forum completely unrelated to sexuality or biology issues.
Furthermore, I'm pretty sure that most gay people would not list a "hermaphroditic freak" as their distraction-of-choice.
The traditional neutral phrase is "member of the appropriate sex", or MOTAS (straight from the Jargon File). Better still, leave the weird imagery out of future book reviews.
Re:Just read this myself (Score:1)
OTOH, it would not have been particularly expandable into any kind of general-purpose computer. But hey - lots of good ideas don't sell; that doesn't invalidate them.
Re:Blah blah well DO something then (Score:1)
So, what precisely is he not doing about things? In the end, with both him and Tog and others, someone's got to listen to them and follow their advice - neither has the ability or inclination to code a whole fully-featured UI himself. At least they remain influential. I know I'm paying attention in my own projects, insofar as I can.
Re:Different Strokes for Different Folks (Score:1)
In truth, I think that learning about computers is useful only insofar as it's useful for some other pursuit. That pursuit _could_ be something else about computers, but generally it's not going to be. I really wish that we'd design cheap, rugged, simple computers for children and then proceed to spend some time from kindergarten through high school teaching them how to program and the kind of thinking that involves, basic electronics, etc.
Because then we might actually _raise the bar_ of what constitutes general knowledge, and people wouldn't mind writing tiny programs, because those programs would let them do even more of whatever work they were really interested in. Computers have the ability to become prostheses of the human mind in the way that literacy already is (a development that was itself much bemoaned long ago for impairing the development of memory).
A lot of that involves wrapping up boring repetitive problems into well-known solutions so that a minimum of programming knowledge is necessary, but generally it involves cultivating the attitude that these boxes are tools that any one can and ought to be able to reconfigure to suit their own needs, either alone or in collaboration.
Re:Different Strokes for Different Folks (Score:1)
Why can't I type in a CLI command and get GUI output? Or make a GUI selection of spatially proximate icons and then add to it with regular expressions from the keyboard?
Both are exceptionally useful, but neither one is as good alone as both could be together.
Re:This is the ... READ HIS STUFF! (Score:1)
His Canon Cat word processor booted in 7 seconds. That is, it _appeared_ to come up instantly, thanks to his little trick, and others (e.g. only going into a sleep mode and not totally powering down). Psychological studies had indicated that it takes people about 8 seconds to switch brain gears and begin actually consciously interacting. By booting in 7 and buffering the inputs, it was ready to go before the users were.
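The buffering trick itself needs nothing exotic; here's a sketch of the idea (hypothetical Python, obviously nothing like the Cat's actual firmware):

    import queue

    class TypeAhead:
        """Absorb keystrokes while the interface is still coming up,
        then replay them once it's ready, so a user who starts typing
        'early' loses nothing."""
        def __init__(self):
            self._pending = queue.Queue()
            self._target = None            # set once the UI is ready

        def key_pressed(self, ch):
            if self._target is None:
                self._pending.put(ch)      # UI not up yet: buffer it
            else:
                self._target(ch)

        def ui_ready(self, handler):
            self._target = handler
            while not self._pending.empty():
                handler(self._pending.get())   # replay buffered input

    buf = TypeAhead()
    for ch in "find me":          # typed during the 7-second boot
        buf.key_pressed(ch)
    text = []
    buf.ui_ready(text.append)     # the window appears; input replays
    print("".join(text))          # -> find me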
Technologies such as hibernation - both rather granular variants and simply restoring a stored RAM image of the system from disk - and platforms stable enough that you needn't worry about leaving systems up constantly could be used to make this a reality on the desktop.
We're not there yet, but we could be and he actually was, just in a limited environment.
Re:Modes and vi (Score:1)
Try reversing the functions of your car's pedals every day - you'll find that it's good to have enough consistency to form these kinds of habits.
Re:This is the ... READ HIS STUFF! (Score:1)
A comparable example lies in the realm of monitors. CRTs used to take a couple of minutes to warm up and become usable. Old TVs were like this. A *lot* of work went into preheating the CRTs and developing CRTs that could run colder or warm faster. Now you can push the button and the picture comes up in seconds.
If you really believe what you say, prove it - turn on your screen, go get a cup of coffee and wait three minutes before getting to work. Otherwise admit that this is a very important thing. Not the only important thing, but it is up there.
"Mother Nature" (Score:1)
The two most powerful designed interfaces I use are my computer keyboard and my (digital) piano keyboard. To me they are intuitive, but that's because I use them every day. In both cases I am rarely doing one thing at a time. In both cases no one has devised a mechanism which improves significantly on them---in performing the same tasks, that is.
However, although they nod towards the "human"---the key sizes relate to the size of human fingers---neither interface could in any way be described as "humane", in the sense referred to in the book, especially from the point of view of the new user.
This is the nature of life. Life is difficult. Life is hard work. (If you'd heard my piano playing you'd know how painfully true this is. But this is only a matter of time; practice makes perfect.)
The point I am trying to make is that the most powerful and popular interfaces are the ones which offer maximum speed and control---not the ones which have the shallowest learning curve, not the ones which focus on one task at a time, not the ones which confuse the user least.
Remember learning to walk, or learning to invert your vision? Quite a few million years of evolution went into "designing" the interfaces between your brain and your legs/eyes. It still hurts learning to use them. But look at the pay-off. I mean literally...
Re:It's good to see a modern, fresh perspective... (Score:1)
There are gadget libraries on the Amiga - MUI and BGUI - that have been doing dynamic layout since 1992 when HTML was new (and maybe didn't yet have forms). It always annoyed me that I couldn't easily do this when building a Windows GUI (which, thankfully I haven't had to do for a while now).
MUI allows a lot of user configuration of both look and feel - globally and specifically for individual applications. It's not exactly skinning, but because so many Amiga applications use MUI this is a much more powerful feature. You can see an example of customised MUI windows [sasg.com] on its web site.
Re:Different Strokes for Different Folks (Score:2)
Have you looked at XMLterm [xmlterm.com]? It's a strange hybrid between a web browser and an xterm. You can use it with 'pagelets' such as xls, which is like ls(1) but produces HTML where each filename is a clickable link - so you have a simple directory browser in your command line window. Also xcat can display many file formats directly in the xmlterm window (anything Mozilla can display, essentially).
I don't know whether this sort of thing will take off, but it's certainly worth a look as a possible way of combining CLI and GUI.
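I haven't looked at how the real xls pagelet is written, but the gist is easy to sketch: emit the directory listing as HTML links instead of plain text. Illustration only; the actual pagelet's markup and URL scheme may differ:

    import html
    import os

    def xls_like(path="."):
        """An ls(1) that emits HTML, in the spirit of XMLterm's xls
        pagelet: each entry becomes a clickable link for the
        terminal/browser hybrid to render."""
        lines = ["<ul>"]
        for name in sorted(os.listdir(path)):
            target = os.path.abspath(os.path.join(path, name))
            lines.append('  <li><a href="file://%s">%s</a></li>'
                         % (html.escape(target), html.escape(name)))
        lines.append("</ul>")
        return "\n".join(lines)

    print(xls_like("."))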
Re:No file hierarchies? (Score:1)
Possibly through a set of filters/queries? With no filter, *everything* on your system would be presented. You could create filters that select files by date (creation/modification), "subject" (whatever you decide that should be), type, or a variety of user-specified attributes, and filters could be combined with AND and OR operators. NTFS has some of these capabilities, though with no real support for using them as far as I can tell, and I understand BeOS (or its file browser) has something like it.
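A sketch of what combinable filters might look like, with made-up attribute names; each filter is just a predicate over a file's metadata, so AND and OR fall out naturally:

    from datetime import date

    by_type    = lambda t: lambda f: f["type"] == t
    since      = lambda d: lambda f: f["modified"] >= d
    by_subject = lambda s: lambda f: s in f["subjects"]

    def AND(*preds): return lambda f: all(p(f) for p in preds)
    def OR(*preds):  return lambda f: any(p(f) for p in preds)

    files = [
        {"type": "spreadsheet", "modified": date(2001, 5, 1),
         "subjects": {"budget"}},
        {"type": "letter", "modified": date(2000, 1, 3),
         "subjects": {"mom"}},
    ]

    # "spreadsheets about budgets, or anything touched since May 2001"
    view = OR(AND(by_type("spreadsheet"), by_subject("budget")),
              since(date(2001, 5, 1)))
    print([f for f in files if view(f)])

With no filter applied, the view is simply everything.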
Modes and vi (Score:2)
For commands to become habitual as quickly as possible, some interface guidelines are given.
No, for commands to become habitual you need to practice them. People have a difficult time learning the intricacies of vi because they don't use it for 100% of their text editing. Once I started thinking about every command I entered in vi before doing it (such as hitting 10j instead of the down arrow 10 times, or the various ex commands) they quickly became habitual. No interface is intuitive automatically (except, of course, the nipple :-) ). True, some are easier, but those that are the most powerful are usually those that require the most effort to learn.
This doesn't just apply to vi, of course, but anything sufficiently complex on a computer. Stick with one way and learn it.
Re:Palm Simplicity (Score:2)
As a new owner of a PalmOS device, I couldn't disagree more.
The palm is very simple. It doesn't cover a lot of ground. You can select buttons, you can enter text, you can choose menus, and the like. This leaves no excuse for the laundry list of issues I've had with my device so far. To list a few:
You may not agree with all these issues, but I believe you will find most of them stand. The Palm device, for something so trivial, suffers from far more than its share of usability issues.
The trouble with intuitiveness is.. (Score:1)
So go ahead, find the 'one and only one, obvious way' to achieve everything you need to do to generate your content. I'll still be here trying to
Oh and vi (or at least vim) has incremental searches too:
:set incsearch
Addressing several comments (Score:2)
Naturally, a review can't build up the full supporting argument for the points made in the book. Therefore it's no surprise that several posters have refuted some of Raskin's proposals using arguments that Raskin addressed in his book. Here's a summary of counterpoints to several of the most common responses I found in this discussion. Also tossed in are occasional citations from The Invisible Computer [amazon.com] by Donald Norman. It is an excellent companion to Raskin's book.
Human beings are far more similar cognitively (psychologically) than they are different. Before we worry about designing interfaces that are optimized to those differences, we must first develop interfaces that are best suited to the 99.99% of ways we think and behave alike. One example cited in the book is that human beings can only concentrate on one thing at a time. Raskin writes that the current state of user interfaces is nowhere close to being ready for minor tweaks to our differences.
The above comment was arguing against having a Single Way to Do It. It's partially refuted by the previous counterargument. It's further refuted by Raskin's coverage of "habituation". The degree to which an interface becomes easy to use is directly proportional to the degree to which users can develop habits using the system. Having more than one way to do it prevents or slows down habituation by forcing the user to make a conscious choice about which way to do it. The UI is also complicated by increasing the number of commands in the system.
Actually, they are mutually exclusive. Having a "single way to do it" means having a single, system-wide command to perform an operation. See above counterargument for why a single command is good. Furthermore, if having the user "choose" involves setting a preference that changes the way the system behaves, then you've created a mode. Modes are bad, m'kay?
Several posts talk about "intuitive" interfaces. Raskin points out that there is no such thing. People usually mean "familiar" when they talk about intuitiveness, i.e. a UI that is like another UI they have already learned or become habituated to.
In The Invisible Computer, Norman writes that there are 3 equally necessary legs to support a product: Technical, User Experience, and Marketing. The Canon Cat had two of the three licked, but failed on marketing. That doesn't diminish the value of its technical and user-experience achievements.
Yes, buy the book. Raskin's proposal for an interface without file hierarchies, file managers, or "files" per se is my favorite part. He addresses categorizing, organization, and finding documents.
This one's hard to address completely in a single paragraph. In short: efficient interfaces that allow habituation are both friendly and capable of expert use. Raskin dispels the notion that "user friendly" means "dumbed down". A humane interface is usable and powerful. A much better argument is in the book.
Actually, Raskin proposes something more like 0 applications, 1 system, N user-created documents, and hundreds of commands that work in all documents. As an example he writes about the "calculate" command. Say a user wants to type in a formula and calculate the result. In contemporary systems, the user must run the spreadsheet application, type the formula into one of the approved formula-entry text areas, and execute the formula. Raskin proposes that the user should be able to select any text, anywhere, and perform a "calculate" command on it (perhaps by pressing "calculate" on their keyboard). The selection is replaced with the result (which could be "can't calculate", "Not a Number", "Divide by Zero", etc.). The original formula is preserved in the document. So calculating a formula is reduced to: 1) type in formula, 2) select formula, 3) calculate selection.
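The book describes the behavior, not an implementation, but the core of such a command is small. A hedged sketch that treats the selection as a plain string and allows only basic arithmetic (a real system would need a proper parser, and would keep the formula in the document as described above):

    import ast
    import operator as op

    _OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
            ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

    def _eval(node):
        # walk the parsed expression, permitting only numbers and
        # the arithmetic operators whitelisted in _OPS
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp):
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("not arithmetic")

    def calculate(selection):
        """Raskin-style 'calculate': return the text that should
        replace the selected text."""
        try:
            return str(_eval(ast.parse(selection, mode="eval").body))
        except Exception:
            return selection + " [can't calculate]"

    print(calculate("3 * (2 + 4)"))   # -> 18
    print(calculate("what?"))         # -> what? [can't calculate]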
Raskin directly addresses OpenDoc. OpenDoc and OLE (and its ancestors) are still the wrong approach. The "different editors" are still applications, just disguised. It still has the same mode problems as separate applications, depending on which part of the document I'm editing, i.e. "am I in spreadsheet mode right now, or am I in word-processing mode, or am I in paint-program mode?".
Citing The Invisible Computer again, Norman would say that these should be separate devices. That is, one device for the creation and navigation of content and information, and another device for playing games. Norman proposes abandoning the general-purpose computer and instead having multiple, less expensive devices focused on a single activity.
Re:good ideas from raskin (Score:2)
And therefore, all those that are subject to abuse must be visionaries? MicroSoft must therefore truly be the most innovative company out there.
True, they laughed at the Wright brothers and they laughed at Marconi. They also laughed at Bozo the Clown...
I take solace in knowing that if I'm modded down, it will only be because I, too, am a visionary.
Re:Different Strokes for Different Folks (Score:1)
Absolutely true. Which is why I cannot agree with Raskin's assertion that there should only be one way to do something. (Or at least the reviewer claims that he makes such an assertion. I haven't read the book myself.)
Re:TIMTOWDI (Score:1)
That was the exact point of the quote. Mr. Levy did not like the idea of Graffiti.
??? (Score:1)
The command line model does not dictate *anything* about the way a program parses its elements. The things that you type after the program's name are passed to the program as an array of strings, and may be parsed by the program in *literally* any way it wants. Moreover, the program may choose to ignore its command line elements and take over the screen, if it so chooses, using the curses library or something to create an entire kind of text-based GUI.
Moreover, the syntax of - flags is not anywhere NEAR standard. I could give you at least 12 ways a program could choose to parse the - flags passed to it, which is impressive when you consider how simple the damn thing is, and frustrating when, upon using a certain program, you realize that it accepts only one of those 12 ways and you don't know which.
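A toy demonstration of the point: the same argv can legitimately be read in (at least) two ways, and nothing in the CLI model settles which one a given program uses:

    def parse_bundled(argv):
        """Reads -ab as the two single-letter flags -a and -b
        (the getopt-style convention)."""
        flags = set()
        for arg in argv:
            if arg.startswith("-"):
                flags.update(arg[1:])
        return flags

    def parse_long(argv):
        """Reads -ab as one flag named 'ab' (cf. find(1)-style
        single-dash long options)."""
        return {arg[1:] for arg in argv if arg.startswith("-")}

    argv = ["-ab"]
    print(parse_bundled(argv))   # {'a', 'b'}
    print(parse_long(argv))      # {'ab'}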
The unix CLI is at least as flexible as X; it just isn't as poorly designed or ugly ^_^ but there are NO constants.
Amen to your last paragraph, however, very much.
/me bows
Opendoc (Score:3)
This sounds like opendoc. Does Raskin actually mention that opendoc is what he's talking about? Is opendoc what he's talking about?
How does Raskin propose that this state he advocates in his book be reached? If he wants to follow the opendoc model of applications being split up into as many tiny interlocking pieces as possible, with a framework for pieces from disparate apps being allowed to interlock with one another as long as they operate on the same kinds of data, then how does he propose that the parts be coordinated together in such a way that they appear, as he wants them to, modeless and consistent?
Basically what i'm asking is this: The things he proposes (a single modeless system that does EVERYTHING and in which every bit of interface is consistent with every other) are things which are extremely difficult to achieve unless every bit of said system is written by the same company. Does he suggest any way that disparate groups could work on the creation of a system such as he proposes without the different voices involved ruining the purity of the whole thing -- like, the people writing apps start doing their own thing, and eventually the different parts of the system become inconsistent again?
I also would be curious to see what Raskin would think of the mac os x "services" model, which attempts to revive the opendoc idea in a less extreme manner -- applications can rewrite parts of themselves as "services" that do certain actions on certain types of data. If the user is using a program which uses a type of data that a service the user has installed can work with, the user is allowed to let the service wrest control of the data from the application and operate on it. Is this good because it's a step in the right direction, or bad because unlike opendoc the focus remains on the application and not the document?
Re:TIMTOWDI (Score:5)
In an ideal world, there should be only one way to do it, BUT the USER should be able to determine what this one way is.
In my humble opinion, the direction that GUI design needs to and, inevitably -- i have no idea when, but it HAS to go this way eventually -- will go, is in the direction of infinite modularity: the direction of separating style from content, separating form from function. Up until this point, interface design has been a constant war between the macintosh "all applications should act roughly consistently with one another" mindset, where you take a single set of guidelines and stick with them, and the Xwindows/UNIX "interface guidelines are fascism" mindset. The UNIX mindset has an extremely good point -- but makes the mistake that it just trades off apple's single point of interface fascism for a billion tiny points of interface fascism, one for the author of each application. The author of each application is still the one in control, and is in control only of the application they created. The user is in control of nothing. And from the user's standpoint, being powerless over the behavior of a bunch of applications that all act the same (as on the mac) is better than being powerless over the behavior of a bunch of applications that all act differently (as on UNIX).
Ideally, the person who dictates interface behavior would be neither Apple nor the application designer, but the user. Ideally Apple and the application designer would both offer *suggestions* as to how the application would behave, and the user would determine which, if either, of these suggestions the application would follow.
Ideally, eventually we will move to where programmers don't say "i want a text field, two buttons, and a menu with these specific sizes and positions", they will say "i want a text field that you can do these two specific things to and which you have these options for, and the text field and the potential buttons and the potential menu have these specific relationships to each other", and the system will build the interface based on the rules the user prefers.
Hmm. I'm kind of rambling now, and i have to go. But how i was going to end this is by saying that the direction things should take from here, and the direction things ARE taking from here (at least with the things that GNOME and apple are putting forth), is that common things like text fields and scrollbars should be done by a single systemwide service, but *abstractly* -- such that the user can augment the behavior of those text fields or whatever at will; and that we should move toward the model used by CSS, the glade files of the GNOME api, and the nib files of apple's Cocoa API -- where you define functions, and then define an interface, and then link the two together. I, the user, can open up the nib files inside of these mac os x applications i use, and because the relationship between interface and function is abstract rather than hardwired into code, i can rewire that relationship -- i can delete, rearrange, and redesign the function of interface elements of existing programs in InterfaceBuilder despite not having the source code to the program. This is the way things should be.
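To make that concrete, here is a crude sketch of the separation being described: the program declares what it needs, and the user's rules (not the programmer's) decide how it is presented. Every name here is invented:

    # The program declares intent, not pixels:
    interface_spec = {
        "fields":  [{"name": "query", "kind": "text"}],
        "actions": [{"name": "search", "applies_to": "query"},
                    {"name": "clear",  "applies_to": "query"}],
    }

    def render(spec, rules):
        """Build a concrete (here, purely textual) layout from the
        abstract spec. A real system would emit actual widgets; this
        only shows that one spec can yield different interfaces
        depending on the user's rules."""
        out = ["[%s field]" % f["name"] for f in spec["fields"]]
        for a in spec["actions"]:
            label = a["name"].title() if rules["label_case"] == "title" else a["name"]
            out.append("<%s>" % label if rules["actions_as"] == "buttons"
                       else "menu:%s" % label)
        return "  ".join(out)

    print(render(interface_spec, {"actions_as": "buttons", "label_case": "title"}))
    print(render(interface_spec, {"actions_as": "menu", "label_case": "lower"}))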
OK.. i'm done. thanks for listening :)
"I have a firm belief that computers should conform themselves to my behavior, and not the other way around." --Steven Levy, on the original Newton release of the Graffiti pen input system now used in the palmpilot
Re:Different Strokes for Different Folks (Score:2)
I've had to deal with this kind of situation before and your example is flawed. The people that this statement refers to are people who refuse to learn, either through direct teaching or through experience. Many times these are the same people who refuse to follow detailed instructions and are then confused and indignant when things don't work. Also the people who I am thinking of will use a software package for years and never attempt to learn more about the package or become more efficient using it.
My second point is that comparing the effort to understand symlinks with the ability to perform eye surgery is really lopsided. I can, hopefully, explain the concept of symlinks to anyone who isn't a vegetable and is willing to learn in 5 minutes; it takes years of study and hard work to become even basically competent in the medical field.
Computers are not all Magic Fairies and Pixie Dust!!
Re:Different Strokes for Different Folks (Score:2)
Ding, Ding goes the bell!!
I wholeheartedly and emphatically agree. I was possibly a bit too inflammatory in my initial post, but this is exactly the kind of thing I would like to see. A computer is a tool, and the full power of that tool should be available to the end user. It should be reasonably unobtrusive but allow for continuous learning and improvement; I learn new things every day that allow me to be a more efficient and accurate computer user/admin.
Re:Different Strokes for Different Folks (Score:2)
Note: I wrote much less inflammatory posts above and below. Check them out as well if this just makes you mad.
Also Note: I have little respect for people who refuse to learn anything about the world around them.
No, that's not the point, the point is that you are an idiot, IMHO, if you continually use your screwdriver to pound nails even after someone else takes time out of their busy schedule to show you how to use a hammer.
Now this is something that I agree with; I just wish that this gem of a statement wasn't surrounded with such garbage.
Well, this is a really bad example; even I haven't had cause to use symlinks in my home directory. I usually use symlinks as an administrative tool. A better example for our hypothetical lawyer would be one who refuses to learn how to use templates or mail merge in their word processor, even though these features would help immensely with their common task load. I've seen reasonably intelligent people who use spaces and tabs in their word processor to center text instead of using the center feature, or who manually make large numbers of changes instead of using search-and-replace.
The weird thing is that since it involves a computer, it is socially acceptable to be incompetent. If these people brought this same attitude to other aspects of their job, they would be fired on the spot.
Contrast:
Note: The above is a gross exaggeration, but I've seen that kind of attitude before and it is quite frustrating. When I was working in a job completely unrelated to systems administration, I used to have people come up to me all the time and ask about basic features in their office productivity software. My job didn't even require me to use the software much; I just enjoyed learning, while their jobs required them to make several slides every day, sometimes with related documentation. It gets frustrating when you are asked the same questions every day: "How do I line up multiple objects onscreen?", "What is that search-and-replace thingy you were talking about?", "How do I log into this computer again?", "What's a network?", "I seem to have two drive letters pointing to the same shared area on the fileserver. I didn't want that, so I deleted all the files on one of the drives, but they seem to have disappeared from the other one as well. There must be something wrong with my computer, fix it!", "Backups, what are backups?"
Grrrrrr. Sorry for venting some old, repressed bile.
-1, Redundant (or, why UI is not the problem) (Score:4)
My parents, for example, are competent using any number of applications (all with varying purposes and slightly different UIs). But ask them to change any system setting, and they will either give you a blank look or freak out. They don't have the faintest idea of how to start. They're even wary of navigating control panels until they find the right tab/checkbox.
Fair enough, right? The big realization came when I noticed that I'm afraid of system administration too, especially when it comes to Unix systems. Even with all those neat-o little configuration tools that come with Linux now, it can be a nightmare to set up X or networking if things aren't just how you found them.
Compared to these sorts of trials, learning to type the right commands or navigate a hierarchy of menus is easy. Most humans are born with the ability to pick up language, so typing commands isn't too much of a stretch. Pointing and clicking isn't hard either. What we're not equipped to do is manage a lot of detail, or absorb a lot of underlying principles quickly. Until someone manages to address those concerns, UI may be great, but human-computer interaction will not move far forward.
Re:Maybe he's being too simple. (Score:2)
Because the number of people killed by incompetent computer usage is vanishingly small.
-jon
Humane Business Applications? (Score:1)
But my biggest question after reading and agreeing with much of the book is: How the hell does it apply to business applications? I can see how Jef's ideas work well for content-creation like writing, emailing, and drawing. I don't see yet a good way to apply LEAP and non-modal interfaces to typical repetitive business applications like order entry, general ledger, and so on... especially in a networked, enterprise setting. You can't possibly have dedicated keys for every application function, and the "highlight command and press execute" gets you back to a variation on command-line interfaces or menus. Modality or navigation inefficiencies seem to creep back in!
Any thoughts on this among other readers of _Humane Interface_? Jef, are you lurking?
--
Mike
Accessibility=more than one way to do it (Score:1)
Or maybe better: allow users to have their own, potentially system-wide interface configuration, with an interface API through which every program receives keyboard, mouse, or other input-device events.
Re:good ideas from raskin (Score:3)
Most importantly, Raskin has a theory on UI and an idealized view of what can be accomplished. Neither of these can be viewed as realistic. An old axiom that comes to mind is: "The best way to make software user-friendly is to limit options."
Sure, let's do away with Beginner and Advanced interfaces to computers... How? Just do away with the advanced interface. Let's do away with Perl and its TMTOWTDI core belief.
Ideally, the computer should learn as it goes along. This is somewhat possible even with the "!grep" syntax of the shells, and even with aliasing. When I type "!grep" it remembers the last grep I did. What if grep became more intelligent, so I could say "grep messages.log" and it would remember the string (and options) I last used to search that file?
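As a sketch, such a wrapper is only a few lines. This is a hypothetical tool, and the cache file name is made up:

    #!/usr/bin/env python3
    """rgrep: a grep wrapper that remembers the last pattern per file.
       rgrep PATTERN FILE   -- search and remember
       rgrep FILE           -- repeat the last search on that file"""
    import json, os, re, sys

    CACHE = os.path.expanduser("~/.rgrep_last.json")

    def main():
        memory = json.load(open(CACHE)) if os.path.exists(CACHE) else {}
        if len(sys.argv) == 3:
            pattern, path = sys.argv[1], sys.argv[2]
            memory[os.path.abspath(path)] = pattern
            json.dump(memory, open(CACHE, "w"))
        elif len(sys.argv) == 2:
            path = sys.argv[1]
            pattern = memory.get(os.path.abspath(path))
            if pattern is None:
                sys.exit("no remembered pattern for this file")
        else:
            sys.exit("usage: rgrep [PATTERN] FILE")
        for line in open(path):
            if re.search(pattern, line):
                print(line, end="")

    if __name__ == "__main__":
        main()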
The next step in UI development is a UI that will learn from its user and adapt, not a UI that tries to simplify the entire computing experience so that it is manageable by anyone with an IQ over 75. The latter was the initial design of the Mac OS, and one of its worst features IMHO.
I'm reminded of VCR Plus, the "make it easy to record programs" breakthrough of the '90s. VCR programming was too hard for most people, it was reasoned, so why not make it easier? Well, sure enough, the problems with VCR Plus were worse than trying to learn to set the recorder. And the "user-friendliness" (a.k.a. not telling you what's going on) made you reluctant to use it anyway, because you didn't know whether it would work until after it was supposed to have worked.
Modeless????? Well, the original modes came out of the fact that no one could decide how these things should work: some liked to insert, some to overwrite. My mother has never switched modes, but I do 3-4 times an hour. Again, the answer is to do away with insert or overstrike, or devise something even more onerous that will not look like a mode but still accomplish the same thing I can do with one button right now. Perhaps Raskin would also like to do away with "Caps Lock" in the process. (Yes, that is also a mode.)
While there are some good points, the noise-to-signal ratio is high.... And it is many times Raskin himself who is unwilling or unable to give up or reconsider what he thinks he "knows" as truths, often to his own detriment. Much of it stems from the beliefs he fostered as an Apple developer.
~Hammy
Re:No file hierarchies? (Score:3)
I can see the problems with the huge, multi-level file hierarchies that are present in Windows, Unix and every other system under the sun. People just can't keep track of thousands of files organized in hundreds of directories in a big tree structure. So far, that's the only way to organize the googlebytes of data a typical computer has, and it works poorly.
Windows, the Mac, and the X GUIs GNOME and KDE address this problem by hiding the filesystem whenever they can. In Windows, you click on an icon or navigate the Start menu to start a program instead of finding the executable foo.exe somewhere in c:\Program Files\foo\. Unfortunately, the filesystem still rears its ugly head frequently, and forces people to wander through it.
Maybe a database model would work better than the traditional hierarchical file system for managing all our data: instead of a tree of files in subdirectories, have a large database that the geeks among us can query with SQL-style commands, and GUIs capable of Doing The Right Thing for every piece of data by using type information in the tables to put things in the right context. Programs would end up in the program table, word processor documents would end up in the document table, and the system would know instantly what it's dealing with when it looks in a table. If the database is set up correctly, most searches can be made very quickly and be more likely to return useful results.
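A toy version of the idea using SQLite; the table layout is invented purely for illustration:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE documents
                   (name TEXT, type TEXT, author TEXT,
                    modified TEXT, body TEXT)""")
    con.executemany("INSERT INTO documents VALUES (?,?,?,?,?)", [
        ("budget-2002", "spreadsheet", "me", "2001-05-17", "..."),
        ("resume",      "document",    "me", "1999-01-02", "..."),
    ])

    # Geeks query it directly; a GUI would build the same query from
    # the type information and present the results in context.
    for (name,) in con.execute(
            "SELECT name FROM documents WHERE type = ? AND modified > ?",
            ("spreadsheet", "2001-01-01")):
        print(name)   # -> budget-2002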
OK, it's a crazy idea that has the potential of being a hundred times more complicated than hierarchical file systems. Anyone else have any ideas?
Re:NeXT scroll bars (Score:1)
As did Smalltalk, where the scroll bars could also "flop out" only when the window was active (so that they didn't take up any space when it was inactive).
This is still an option in Squeak. You can see some examples of it in the Squeak screenshot gallery at: http://minnow.cc.gatech.edu/squeak/683
Re:No file hierarchies? (Score:1)
Mac OS (Classic, not X) does not do that. It does the exact opposite: put the filesystem in plain, obvious, tangible view. The user actually manipulates files and folders, and never touches an abstraction like the Start menu, $PATH or whatever. That's one of the nice things with Mac OS (again, not X), in my experience: you really feel in control of the filesystem.
Mac OS X OTOH hides much of the filesystem and brings Start menu-like abstractions. A nice touch is the package system, where an app (or lib, or whatever) behaves like a single file, yet actually is a whole hierarchy of files and dirs for the OS. Makes packaging complex apps really nice, and allows drag-and-drop installs just like the Old Days.
Re:-1, Redundant (or, why UI is not the problem) (Score:1)
Where do you live? We're coming by to lock you in a room till you get over this...
Bet you don't even use vi...
nmarshall
The law is that which is boldly asserted and plausibly maintained.
Re:Modes and vi (Score:2)
And then there is the fact that if you type 10 (as in your 10j example) and then change your mind and delete the current line (dd), you will have to stop and think about whether you are in a mode before you hit esc. Otherwise you will delete 10 lines instead of one.
I use vi all the time, and I don't usually make mistakes, but when I do, it's always a modal mistake. (Even worse is a co-worker who uses caps lock just to type a capital letter. Then he uses vi. A lethal combination. It's painful to watch.)
Commands are gestures. If you are concentrating on your text and not the interface (after the gestures have become habituated) you won't have to think about the interface at all. You will execute the command before you consciously decide what buttons to push.
You are right, vi can become habituated. Once I am outside of vi, I try to use all the same commands in other places. I use ksh set to vi editing mode so I can edit the command line. Sometimes when I am working on a particularly gnarly command I hit "ESC:wq" to "save" it, which of course doesn't work: I'm in ksh, not vi. No big deal, just another mode error. But it is a distraction.
Read the book. It's good stuff. Or at least read the summary on Jef Raskin's web site.
Humane Interface (Score:3)
I bought the book soon after that, and as I read it, I was blown away. Sometimes when you read a book, it just gets into your head and it's all you can think about. That's how it was for me.
Unfortunately, although it describes many of the principles by which a Humane interface should be designed, there is not a design for a specific kind of interface. Perhaps it's because there is no one single right way to make an interface, but there are many wrong ways. Software producers continue to make the same mistakes about what they think is user-friendly (yes, including GNOME and KDE by following the WIMP example), but Raskin shows that many of the usual assumptions are wrong (pretty much everything we currently understand about user interfaces, e.g. "icons are user friendly").
After reading it, I felt that if I followed the principles of the book, I too could design a radically different yet vastly improved system for beginners and experts alike. I emailed Raskin with my thoughts. The response to the possibility that I could program such a thing was (paraphrased) "You will need a spec, which I am still working on."
I suppose I am still interested in making an interface with the principles outlined in the book, but I think it would require as much work as a GNOME or KDE project (including applications), perhaps even an entire new operating system, depending on how far you wanted to take it.
Jef Raskin's homepage is here [jefraskin.com]. Be sure to check out the summary of The Humane Interface [jefraskin.com] at least, if you aren't going to read the book.
Re:No file hierarchies? (Score:2)
This is only moderately true even in the real world, and not at all true for computers.
If you paint one picture, you can put it one place. If you make ten prints of one lithograph, you can put them ten places, and none is intrinsically more important than the other. And what if you're a poet? You think of a poem, so it's filed in one place: your head. Then you teach it to a friend. And then you recite it at a public reading. Where is your poem "filed" now?
Computers are even worse. When I create an image with Photoshop, where is it filed? Well, in truth, it exists only as a bunch of variances in magnetic fields on a number of spots scattered across several metal plates inside one or more metal boxes inside my computer. Unless, of course, I'm using a network, in which case it's inside computers that I may not know the location of and may never see. That's the only "fact" about where it is. Hiding the fact that things go places is more than helpful; it's vital. If I had to specify the exact physical locations of the components of my Photoshop file, it would be ridiculous.
That's not to say that making things work like the physical world is wrong. The metaphor of a desk, complete with a desktop, filing cabinets, folders, and so on made the Mac useful to a lot of people. But there's no need to stick with a metaphor past the point where it is useful.
The web is a great example of this. Thanks to Google, I rarely bookmark anything; I just search for it. And if I do remember something about the URL, I usually remember only the domain name, finding the actual document I'm after via navigation bars and search forms. Would it be possible to force this into a real-world metaphor, where each "thing" is in exactly one "place"? Sure. Would it be useful? Absolutely not.
Re:No file hierarchies? (Score:3)
That's nearly true. Categories and subcategories map well to a particular view of the world. But to make effective use of the hierarchy, you are forced to use that view of the world.
Take an example. Say I create an Excel spreadsheet for a 2002 budget projection for a project for one of my clients. What folder and subfolder do I put it in? The top level of the hierarchy could be my role (personal documents vs business documents); it could be the entity the document belongs to (the client's name); it could be the topic of the document (a budget); the period it covers (2002); the type of document (a spreadsheet); the application used to create it (Excel); the date it was created (2001-05-17); or the name of the project. Or something else entirely.
So which is my top-level folder and which are subfolders? It depends on which I consider most "important" or "intuitive", which varies from person to person and from day to day. Heck, if you grew up with Windows you may believe the best place for an Excel document is in the C:\Program Files\Excel directory. I know secretaries who keep all their files right in with the program files because they never learned to make or change directories.
I haven't read Raskin's book yet, but when I dream of better ways to do this, I imagine history lists, search boxes, and hierarchies with multiple roots and automatic filing of documents based on things like name, date, creator, type, and keywords and categories chosen by the author. So when I'm looking for that budget projection, I can browse based on any of those criteria I mentioned above.
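The "multiple roots" part is easy to mock up: file every document under all of its attributes at once, so that no single hierarchy is privileged. Attribute names here are invented:

    from collections import defaultdict

    class Catalog:
        """Index documents by every attribute, so 'client', 'year',
        'type', etc. are each a browsable root rather than one
        privileged folder tree."""
        def __init__(self):
            self._index = defaultdict(lambda: defaultdict(set))

        def add(self, doc_id, **attrs):
            for key, value in attrs.items():
                self._index[key][value].add(doc_id)

        def browse(self, **criteria):
            hits = None
            for key, value in criteria.items():
                found = self._index[key].get(value, set())
                hits = found if hits is None else hits & found
            return hits or set()

    c = Catalog()
    c.add("budget.xls", client="acme", year=2002, type="spreadsheet")
    print(c.browse(year=2002))                          # by period
    print(c.browse(client="acme", type="spreadsheet"))  # by client and type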
Already reality (Score:2)
http://www.brainfingers.com/technical.htm [brainfingers.com]
Maybe the author of the book can work on a spinoff of this product.
Re:No file hierarchies? (Score:2)
The obvious alternative to a fixed category system is arbitrary search criteria.
"Computer, give me the document I was working on last week about Genghis Khan."
"Computer? Didn't I once write something that compared Bill Gates to a hibernating wombat?"
... and so on. The computer can retrieve your document based on any criteria you like, not just the one (or, counting symlinks, the several) you happened to file it under.
Re:Different Strokes for Different Folks (Score:2)
I discussed this issue at some length in a class I took a year or two ago on user interface design. We were taught the value of interfaces built around the idea of discovery of features rather than remembering commands. However, I pointed out an important fact of life in software. The value of software is in large part a result of what we can automate. Any task that requires that I click, even once, each time I perform it because some portion of it can't be scripted sets an upper limit on the productivity I can achieve.
I don't mean in any way to criticize the usefulness of a GUI. GUIs are excellent for first time or infrequent tasks. They are also good for tasks where there are a number of parameters which will be different each time the task is performed. The common factor in each of these cases is that a significant amount of user thought is dedicated to the task in each case. Making the interface obvious and eliminating the need for the user to think about the interface is useful.
Now take the other end of the spectrum. A good example is a nightly backup. My thoughts about the task once it is set up should be minimal. I want it foolproof and quick to execute. I want to put in the tape and run a single, simple command (a script without parameters or a single button click). Better still, I want to change the tape each morning and have it run automatically overnight.
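That single, parameterless command might be nothing more than this; the paths are examples:

    #!/usr/bin/env python3
    """backup: the whole nightly job as one argument-free command,
    suitable for a single click or a cron entry."""
    import tarfile
    from datetime import date

    SOURCES = ["/home/me/docs", "/home/me/projects"]   # example paths
    TARGET = "/backups/nightly-%s.tar.gz" % date.today().isoformat()

    with tarfile.open(TARGET, "w:gz") as tar:
        for src in SOURCES:
            tar.add(src)
    print("wrote", TARGET)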
The bottom line on this comes down to the whole reason that users (the "my grandmother" model) don't write applications as a rule. To completely automate something requires an analysis of all of the different execution paths and either automating them or bailing out with decent messages so that you can reconstruct what happened.
What I am talking about is the difference between a user interface and an API. To the typical user, APIs are not important. Most people do not automate any portion of the production of the annual holiday letter to friends and family, and if they do, they want to do it using a mail merge feature with a user interface. Users want applications and are concerned with the interface. Programmers want to create applications, often custom one-offs to save ourselves time and want a good API.
Much of the tension in discussions of user interfaces arises when someone doesn't understand or forgets this distinction, or decides that one is unimportant. How many Slashdot users have grumbled about losing a Unix (substitute your favorite flavor) workstation in favor of a Windows box because somebody made a blanket statement that users' productivity is better under Windows? Hey, it's no secret. For users, people who never code, never even script, ubiquitous GUIs improve productivity. Is my mother going to remember commands for a CLI? Only under duress. But anything that comes without an API hampers programmers in exactly what we do. I'm not interested in pointing and clicking to produce a bug report for my boss each week. I want to write a script that queries the database, generates and formats the report and e-mails it to all the interested parties. If I can get it to query our source control to find out the state of the tracks we are using for the fixes to those bugs, so much the better. I want to embed my knowledge of how to solve that problem into a script and not think about it any more.
The real question is whose productivity you are measuring and on what kinds of tasks. GUIs bring up the low-end and average users' productivity. Lest we ever forget, that is a good thing. And they improve productivity on large numbers of infrequently and irregularly executed tasks. That too is a good thing. But they are not the most efficient means of doing everything.
If clicking widgets were the most efficient interface around, why would anyone have keyboards anymore? As I finish this comment, the thought has occurred to me that my mental state is different when I am typing the text than when I am dealing with the other issues involved in submitting it. As I type the words, I am thinking only of my words. As a touch-typist, the interface of the keyboard has disappeared from my conscious awareness through years of practice. I have never achieved that state of direct interaction through a pure GUI.
Sounds familiar (Score:2)
Alan Cooper proved this when he designed VB (Score:2)
The problem isn't files . The problem is Windows. (Score:4)
M$ Windows is one of the worst things that could have ever happened to GUI design. Microsoft has allowed so much DOS to pollute the Windows UI that what you have is a complete breakdown of the whole "user is in control" file/folder system the Mac had. I think many geeks who say "we need to get rid of files and folders, they're too hard for normal people to understand" fail to take this into account.
On the Mac, the user could drag a folder with an application to their desktop. We're not talking shortcut, we're talking the actual program. If they needed to reinstall the OS, they could drag a folder with an application inside it to a Zip disk, reformat the main HD (and give it a name more intelligible than a drive letter), reinstall the OS, and then drag the folder with the program from the Zip disk to wherever the hell they wanted to. Congratulations, the program is now installed. To delete a program, drag the folder with the app to the trash and empty it. There were no special names appended to the regular name of the file (i.e., no filename extensions). The file was simply whatever the user wanted to name it; they could tell what type a file was by the LATFI method (Look At The Fine Icon). To install a driver (a.k.a. an extension), drag the extension file to the Extensions folder. To deinstall, drag it to the trash and empty it. And above all, all files, including system files, had "Plain Damn English Names"(tm). Files and folders were easy to understand simply because there wasn't anything complex that one couldn't understand. The Macintosh was the most perfect concrete abstraction anyone has ever designed.
Unfortunately, the Redmond morons were extremely unwilling, for technical as well as religious reasons, to make sure that good ole CLI DOS didn't contaminate Windows. They didn't design the GUI from the ground up as if the command line had never existed (like Apple did). They simply made DOS a little bit easier to understand, and they took a simple, concrete abstraction like the Macintosh and made it abstractly concrete.

On Windows you have a desktop, technically, but often when you drag stuff to it, you're asked if you want to create a "shortcut"; it tries to discourage you from putting the real thing there. So the desktop as a container breaks consistency with the folder as a container-type object: "You can put anything in a folder, but the desktop? No, that's different."

Then there's the matter of "My Computer". Your partitions do not sit directly on the desktop. You have to go through this strange layer called "My Computer" that isn't quite a folder and isn't quite a file. Once inside the "My Computer" limbo, you have the drives/partitions themselves, which cannot be given any name you want. You can sort of give an alias, but you must always see C: and A:. Then you've got "special" folders with "special" abilities that break consistency with the way normal user-created folders act. There's "My Documents", which has the weird property of being the first place the file dialog warps to. Then there are the "Control Panels" and "Dial-up Networking" and "Printers" folders, which exist outside any known location on your drives. These folders really don't like having stuff moved in and out of them the way regular folders do, and in that way they too break consistency.

To add something to a folder, the user simply drags it to the folder and drops it. But the user can't (with most Windows software) drag a folder with an application inside from a CD-ROM to wherever the hell they want to put it. They have to go to "Add New Software", which many times will put the program into the specially designated "Program Files" folder, yet another folder with strange and unusual properties. And finally, there are system files. Ordinary files and folders the user names can be up to 255 letters of Plain Damn English. But files such as the ones in the Windows folder and those distributed with most 3rd-party software are just plain 8-letter gobbledygook. With all this needless complexity in Windows, with all these exceptions to exceptions to rules, is it really any wonder that M$ users have such a tough time navigating the file/folder system?
Now maybe the concept of the file/folder is a little bit outdated. Jef Raskin certainly seems to think so. I kind of agree with him. But the fact of the matter is that any flaws the Mac desktop metaphor had were made 100 times worse by Microsoft's incompetence at designing user interfaces.
I'm confused (Score:3)
System administration is indeed the big problem (Score:2)
The original Mac had a reasonable, but limited, approach to system administration:
This was limited, but explainable. No "Registry" or "System environment variable" crap.
We need a theoretical basis for system administration. And the Linux crowd will never get it right.
In the beginning was the command line (Score:2)
If anyone on slashdot hasn't read this, they probably should:
In the Beginning was the Command Line: http://www-classic.be.com/users/cryptonomicon/beginning_print.html [be.com]
It's an interesting look at computers and their users. It goes along with what you say, in general, and elaborates on your statements even more.
Re:problem with the "nipple" quote (Score:2)
So we all naturally just plain suck.
Man, that sucks.
Just read this myself (Score:3)
I liked the book a lot because it focused on making the human-machine interface more efficient. I pretty much hate GUIs that force you to jump through umpteen dialogs to configure something that should take 5 or 6 keystrokes, and Raskin seems to understand this.
I even had my wife, a non-programmer, read the section on modes, which has greatly enhanced her ability to turn on the TV and actually play the TiVo or DVD successfully without calling me.
After reading the book I am even more rabid about my adoration of ViM [vim.org].
One problem ... in the book he talks a lot about products, like the Canon Cat word processor, that didn't seem like commercial successes. They may have been "humane" or even efficient, but no one bought them. What good is that?
On modes (Score:2)
I don't know how extreme he is on this point, but my sense is he takes it too far. Modes in some form are integral to our interactions with machines, people, and the world.
If we forbid modes, a TV remote control must have an on button and an off button. Nobody wants this, because it's much harder to remember which button is which than it is to observe that the TV is off and deduce that the button will thus turn it on.
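To make the distinction concrete, here is a toy Python sketch (purely illustrative, not from the book) of the two designs. The single toggle is modal because its effect depends on hidden state; the paired commands always do the same thing:

    class TV:
        def __init__(self):
            self.is_on = False

        # Modal design: one button whose effect depends on the current state.
        def power_toggle(self):
            self.is_on = not self.is_on

        # Modeless design: two buttons, each with one fixed, predictable effect.
        def power_on(self):
            self.is_on = True

        def power_off(self):
            self.is_on = False

The toggle wins here precisely because the TV's state is visible at a glance, so the "mode" costs the user nothing.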
The system should always react in the same way to a command.
Very few things in the real world always react the same way to the same command. You don't behave the same way in the library as you would at a friend's home. Your car responds much better to the gas pedal when it's in drive than in neutral. You read people's moods before deciding how to communicate with them.
People can observe and remember a certain amount of context, and tune their behaviors accordingly. It's frequently convenient to enter a mode, because you can forget about everything not appropriate to that mode. This is just as applicable to computers as to the real world--when we perform different activities, we enter different mindsets and adopt different expectations. Computer interfaces should take advantage of this human capability (with appropriate restraint, of course).
Re:This is the same Jef Raskin... (Score:3)
The Master Of Muppets,
Re:good ideas from raskin (Score:2)
(And anyone who ridicules MacOS classic as hopelessly dumbed down should look into AppleScript -- scriptable applications are pretty common, and you can control them like marionettes. The only reason that Unix doesn't have that kind of fine-grained control is that Unix shell commands are (or were) pretty fine-grained themselves, but that's why we have sockets and SysV IPC...)
/Brian
good ideas from raskin (Score:3)
Instead of trying to understand concepts that may at first sound strange, people ridicule and flame them out of hand.
Flamethrowers grab a little bit of text, twist it without really trying to comprehend it, and proclaim Raskin an idiot. A shame, because Raskin has some *very* good ideas that he generally supports quite well.
The only positive in what's going to be a large volume of flames is that Raskin can take some solace that all visionaries are subject to much abuse. Most people are unwilling to give up or reconsider what they think they "know" as truths, often to their own detriment.
Hermaphroditic Freaks and the Interface of functio (Score:2)
Interestingly, this strange, awkward, stale moment pulled me out of the article long enough to cause me to forget what point the author was trying to make.
Let's rewrite a later sentence:
Maybe these rules for interface design apply equally well to good writing for a broad audience, or to attempts at "humor" in general.
Fitts' law (Score:4)
This is the first I've heard that this law has a name, but one thing I've wondered about is why most GUI editors have scroll bars on the right and not the left, where most work is done. E.g., for cut-and-paste you have to go all the way to the left to select and all the way to the right to move.
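For reference, Fitts' law in its original form is MT = a + b*log2(2D/W). A quick Python illustration, with the caveat that the constants a and b below are made up; real ones have to be fitted empirically:

    import math

    def fitts_time(distance, width, a=0.1, b=0.15):
        # Movement time grows with distance D and shrinks with target width W.
        return a + b * math.log2(2 * distance / width)

    # Crossing a whole window to a narrow scroll bar vs. a near, wide target:
    print(fitts_time(distance=800, width=16))    # ~1.1 s
    print(fitts_time(distance=100, width=100))   # ~0.25 s

Which is exactly why a right-hand scroll bar is a long trip from left-margin text.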
TIMTOWTDI (Score:2)
I wholeheartedly disagree. I like to have multiple ways to do things. I try them all and figure out the one that suits me best, then make it a habit.
Someone else's idea of "the most natural way to do x" often isn't mine. I guess that's why I always set custom keys in games, use emacs, and think Perl is fun to hack around with.
Re:Different Strokes for Different Folks (Score:2)
Guess I'm an idiot for not taking time out of my busy schedule to explore all the things my hammer and screwdriver can do.
And I only drive my car at the speed limit and on the road. I'm an idiot because I refuse to learn how well it does off road or at 150mph.
A computer is a TOOL. Tools are supposed to ease work or accomplish something. The point should not be to use the tool but to do something productive with the tool.
I don't care how my hammer works. I don't spend hours studying the forces to optimize the blows to drive the nail in a tenth of a second faster.
So, pray tell, why should my lawyer buddy understand what a symlink is and why it's better than a shortcut in Windows when all he wants to do is write a cease-and-desist letter? Sure, he's refusing to learn, but only because he deems it useless knowledge. He doesn't care about the particulars of why his car pings. He just wants his mechanic to fix it.
Please, don't be arrogant about computers and overstate their importance in the world. They're just a tool. Don't you be one, too.
I see some problems... (Score:2)
Firstly, not all applications can use the same interface. Period. Can you imagine trying to play Quake using an MS Word (or emacs, or vi, or pretty much anything not remotely related to 3D navigation) interface? Or, for that matter, trying to type out that document the boss wants in 20 minutes if you have to use a mouse/keyboard combination to pick letters in your latest Quake/text-editor mix? It just can't be done. It's like trying to create a car using the "interface" of a pair of rollerskates.
Secondly, modes. I agree that modes can be a pain in the @$$ (especially when Word decides to change modes for you when you don't want it to), but really, how are you going to get around this? If you want to allow bold/italic/underscored/different-font text, you need some way of specifying what you want. You obviously can't have an enormous keyboard with a set of keys for bold, another set for italic, etc. (not to mention that that would likely be more confusing and distracting than accidentally bolding some text). That leaves basically two options: don't allow bold (unlikely), or have the text editor do it all for you, and we've all seen that THAT doesn't work very well these days. (Maybe when they get direct brain control working and we can just think it the way we want. :)
Thirdly is legal. MS has already attempted to meld things together, and found themselves slapped with a nice big antitrust lawsuit (not that I don't agree with that, but the point is there). So we know this can't be done within one company (at least, not one that has any reasonably sized portion of the market share, and smaller companies don't have the funds for the R&D needed, I'd guess). Open source/free software might be able to have a go at it here, but I doubt anyone would bother on any sort of large scale.
Oh, and about that bit about removing the file structures and stuff from the user view: uhmm, no. A user might not care if the extension is ".rtf" or ".doc", but they DO care where their files are put. Having a billion files in the same place is NOT cool. That's why we invented directories and libraries and whatnot in the first place! Organization!
Re:Fitts' law (Score:3)
Another is Hick's law, which (roughly) states that the time to choose between a set of N alternatives (e.g., in a menu) is proportional to the information-theoretic entropy of the decision. If the N choices are equally probable, the entropy is the logarithm of N.
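In Python, the entropy side of that looks like the following (the proportionality constant in Hick's law itself is empirical and omitted here):

    import math

    def choice_entropy(probabilities):
        # Shannon entropy of the decision, in bits; for N equally likely
        # choices this reduces to log2(N), as noted above.
        return sum(p * math.log2(1 / p) for p in probabilities if p > 0)

    print(choice_entropy([1/8] * 8))             # 3.0 bits for 8 equal menu items
    print(choice_entropy([0.7, 0.1, 0.1, 0.1]))  # ~1.36 bits: skewed menus decide faster

Decision time is then roughly proportional to that entropy, which is one argument for making the most probable choices the cheapest to pick.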
Academic HCI and the application of Fitts' and Hick's laws to this domain begin with Card, Moran, and Newell's 1983 book "The Psychology of Human-Computer Interaction". I recommend chapter 2 for those particularly interested in the psychological basis of their recommendations. This is the book that introduced the GOMS modeling technique that Raskin covers as well.
Personally, I've never found much insight in this line of thinking about HCI. Knowing this stuff does not make you a good designer of computer interfaces. Artistic flair and empathy for users play a crucial role that is not addressed in this tradition, and perhaps not addressable at all within mechanistic approaches to cognition. All of this is IMHO, of course.
Card and Moran were at Xerox PARC in the late 1970s and early 1980s, I believe, which is where Raskin was before Apple hired him to develop the machine that would be taken over by Jobs and become the Macintosh.
Visible interfaces (Score:2)
I wonder how this works when you are attempting to make GUIs accessible. It's very hard to provide this sort of utility for visually impaired people, as speech synthesis only comes one bit at a time. You cannot listen in parallel.
"That is one of the downsides of the otherwise efficient command line interface"
Unless you hit Tab, of course.
Phil
There's more than one way to do it.... (Score:2)
BTW, who said that originally?
Raskin's sort of a curmudgeon (Score:2)
Raskin has some curmudgeon in him. Just read this page on what Raskin thinks we should do with the word "intuitive [asktog.com]".
Unlike Tog, who seems to think that anything the Mac interface does is The Right Way To Do It, Raskin appears to take some influence from actual experimentation.
On the other hand, going hard-over against modal interfaces seems a little odd. As far as I know, only one study has ever been done on this sort of thing: "A Comparative Study of Moded and Modeless Text Editing by Experienced Editor Users", by Poller, M.F., Garter, S.K., appearing in Proceedings of CHI '83, pp 166-170.
One interesting conclusion from that study:
Modalities (Score:2)
I haven't read the book, of course, but interface designers almost always have to take advantage of modality. Even in the "simple" case of driving an automobile, the accelerator has a "modal" effect on the behavior of the vehicle, depending on which gear is selected. In computer interface design, the most fundamentally modal input device is the keyboard. In a windowing system, it has dramatically different effects depending on which window is selected.
The important thing to keep in mind is the number of modalities and the complexity of their interactions. Keeping modalities obvious and intuitive will keep most users out of trouble (e.g., a tiny little "Ovwrt" label isn't always an obvious indicator that you've left insert mode). Then all you have to do is figure out what's "intuitive" for your users :-)
It's good to see a modern, fresh perspective... (Score:2)
Over the past 20 years, though, interface design has been pretty much dictated by OS vendors such as Apple and Microsoft. This trend is coming full circle with the proliferation of applications to which various 'skins' can be applied, producing an entirely new look. This trend has appeared, I believe, as a direct result of the advent of dynamic layout methodologies similar to HTML. Let's all remember it wasn't always so.
Apple was the first to implement draconian interface specifications for 3rd party applications (since 1984). Microsoft has generally left it more to the compiler vendors, although all design seemed to sprout from Microsoft anyway...
It's thrilling to see new thought in this very important area of the field.
--CTH
--
Another good book about user interface... (Score:3)
This is the same Jef Raskin... (Score:2)
...who said that "Unix is backwards" [slashdot.org] (or is at least alleged to have said that). I think it would not be a surprise that he takes issue with a lot of Unix truisms...
Maybe he's being too simple. (Score:2)
I suppose it is a criticism of the English language, just as it is of, say, Perl, that in different contexts the same word means different things. Many might argue that this is not a defect and that people intuitively use English despite all its modes because they are so well practiced in it.
This is not to say that computers should not be designed to be easier, merely that there is perhaps some utility in having different programs function differently. And even having one program respond differently to the same input at different times. For instance vi and emacs (and other programs for editing code) respond differently to a carriage return depending on the situation, by tabbing to a different point in the following line. Though I could disable this feature, I do not since it is so useful.
Oh, yeah, modality sucks. (Score:2)
You know, I hate using paper and pencil. It's so modal. If I want to erase something, I have to either turn my pencil upside down, or drop the pencil and grab an eraser. Oh, and painting is even worse! I constantly have to change modes by changing instruments. Why can't my palette knife do everything?
The lesson here is: don't give users everything they think they want. In my experience, users don't even really know what they want. For applications, etc., focus on use cases: how people want to use the system. Design the interface to make those cases easy to do. Basically, goal-oriented interfaces (which is what that VB guy who wrote Inmates Running the Asylum is a big fan of). Unfortunately, Microsoft took his ideas and created "wizards", which are the most useless goal-oriented interfaces I've ever had to use.
Even for hardware, like VCRs, focusing on how users want to use the system would make for more efficient interfaces. Users really just want to say, "Record 'Manimal' at 8 o'clock on NBC" and have their VCR know what the heck they are talking about. They don't want to go into their TV Guide and type in numbers to a VCR+. Another option is the way I think TiVo does it, where you have an on-screen list of programs, you select the program you want, and you indicate you want to record it. It's better than working through an on-screen interface to try to tell the VCR what time to start and stop recording.
Another example of a goal-oriented interface is the Wacom Intuos digitizing tablets. You can get three extra pens, each with its own unique digital ID, and an airbrush pen (which looks and feels like an airbrush). Each pen can then be assigned a unique function in Photoshop (or another app). This way, the mode change becomes as obvious as picking up a different pen or picking up the airbrush. The tablet knows you've chosen the airbrush, so it automatically changes to airbrushing. You take the first pen, and it knows you want to draw. You change your mind, turn the pen around to the "eraser" on the end, and voila, you're erasing. This is an intuitive mode change that I think is far more useful than trying to make a modeless interface.
The point here is that interfaces must be engineered toward what people want to do with them, and must behave in a consistent way. For example, if you have modes, then make sure that there is a standard way in each mode to get out of it (like the ESCAPE key). Users can learn arbitrary interfaces, as long as they are consistent and geared to helping them do what they want to do.
Re:Humane Interface (Score:2)
I don't think right and wrong are adequate categories here. In my opinion, the main problems in interface design are that there are no strict, absolute rules, and that there are so many soft rules and trade-offs. As a result, most interfaces are a compromise between all those rules; they have to be. The interface designer's job is a multi-dimensional optimization problem.
Take colors, for example. Designing a color scheme is a really hard task. One has to watch for all kinds of side effects, like brightness and contrast. One has to consider the possibility of color-blind users. Some combinations of colors strongly suggest a metaphor to the user and should not be used if this is not intended; this notably applies to red-yellow-green, which is likely to be associated with traffic lights, at least in the Western world. So cultural differences have to be taken into consideration, too.
Software developers tend to simplify and generalize the rules. That's what they are used to from programming. And they look for recipes, for patterns. But there are no recipes and not many patterns for designing the user interfaces of sophisticated applications. Interface design needs skill, experience, the ability to look at your own creation through another's eyes, and a will of iron to take every single problem of one's trial users seriously. And, of course, managers who do not sacrifice usability to deadlines and project plans.
Re:On modes (Score:2)
On page 180 of Raskin's book you will find a case study showing this is not necessarily true. Raskin's example is about saving workplace state to a floppy disk, and he had the same considerations about modes. After describing a one-button solution, he concludes that the modality in this situation depends on the user's model.
Applied to the remote control: if one thinks of an on function and an off function behind the same button, there are modes. But if we view it as, for example, a general "change activity state" function (in Raskin's book it's a "do the right thing with the disk" function), there are no modes.
Tuple spaces (Score:2)
You request data through abstract queries like "what was that cartoon I was looking at one evening about Windows 95?", or "get me everything about the Wilson project".
Unfortunately it would require strong AI to achieve.
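The bookkeeping half of it is mechanically easy, though; it's only the natural-language front end that needs the AI. A toy Python sketch of a tuple-space-style lookup, with every name and attribute invented purely for illustration:

    # Each stored item is a bag of attribute/value pairs rather than a path.
    space = [
        {"kind": "cartoon", "topic": "Windows 95", "seen": "one evening"},
        {"kind": "memo",   "project": "Wilson"},
        {"kind": "report", "project": "Wilson"},
    ]

    def lookup(pattern):
        # Return every item whose attributes include all pairs in the pattern.
        return [item for item in space
                if all(item.get(k) == v for k, v in pattern.items())]

    print(lookup({"project": "Wilson"}))   # "everything about the Wilson project"
    print(lookup({"kind": "cartoon", "topic": "Windows 95"}))

The hard part is turning "that cartoon I was looking at one evening" into that pattern in the first place.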
Re:It's called "flame the flamer flamers" (Score:2)
No file hierarchies? (Score:2)
How would one organize files? Ok, say you don't have "files" but just documents. Is the device only good for editing one at a time? Or is he suggesting a replacement by a more random-access structure, so that you need to give it keywords to find the file you're looking for?
Categories and subcategories map well to the real world. If I have only a few files (documents, whatever) they can be in one place. But if I have many, then I'll want them to be organized somehow, and hierarchically makes a lot of sense for that.
My fear is that he's advocating a "sufficiently advanced technology" interface that somehow magically finds the file that you want based on your limited, possibly ambiguous query. If anyone has read the book and knows more specifically what he's advocating, please reply to this.
Tim
More features: good or bad? (Score:2)
The idea behind building feature-rich software runs something like this: figure out what a program will generally be used for, and come up with a set of tasks -- use cases -- which are representative of what the majority of users will want to do with the software. Design the software around the use cases, using the ease and efficiency of each of the use cases as a yardstick for the program's success. If users demand more features, the market for the software expands, or the company simply has more resources to build up the software, then add more use cases and repeat.
This works reasonably well, but it has a serious flaw: as the amount of functionality and thus the number of use cases go to infinity, the number of features goes to infinity. Think of M$ Word -- at a certain point, the wizard that will make multi-column address labels is useless to me, because there are so many damned wizards I don't know where to find it.
What software really should try to do is maximize functionality while minimizing the feature set that achieves it. If you want your software to cover a wider base of functionality, you have to make your interface's features more generalized and more expressive. As the amount of functionality goes to infinity, the feature set stays bounded and just approaches some sort of Turing-complete programming language.
So when designing software, don't ask "What features should we add?" Ask "What functionality can we add, and what features can we generalize in the hope of removing others?"
I realize that this is all very idealistic, but it seems like a guiding principle that could keep software bloat in check.
Re:1 app with 1000 feat. vs. 1000 apps with 1 feat (Score:2)
There is certainly a glut of command-line tools, though. The problem with the 1000 UN*X apps is that
But I suppose this happy mess is really in the nature of the Darwinian free-for-all that is UN*X.
I'd really like to see a shell built on a single powerful set of list and pure function operations that would neatly generalize pipes, redirects, backquotes and lots of command-specific features into a single, more general scheme. It's silly that I have to know about the output format of ps to kill all processes with some specific thing in the command line:
kill `ps -ef | grep '[n]uj' | cut -c9-15`    # the [n] keeps grep from matching itself
I should really be able to say something like:
kill (ps ? (#.cmd==nuj)).pid
...where ps just outputs some list of named hashes, and I don't have to play ASCII games or know anything about the format of its output.
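Something close to this already exists outside the shell. For instance, a rough Python equivalent using the third-party psutil library (assuming it's installed) treats processes as objects with named fields instead of columns to cut:

    import psutil

    # Kill every process whose command line mentions "nuj" -- no ASCII games,
    # no knowledge of ps output columns required.
    for proc in psutil.process_iter(["pid", "cmdline"]):
        try:
            if any("nuj" in arg for arg in (proc.info["cmdline"] or [])):
                proc.kill()
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass  # process exited, or we lack permission to signal it

A shell built natively around that kind of structured data is exactly what's being wished for here.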
-END OF PONTIFICATION-
proof is in the pudding (Score:3)
The really controversial idea, though, is to abandon applications altogether. There would be only the system, with its commands, and the users' content. Even file-hierarchies should vanish along with the applications, according to Raskin.
I don't see anything controversial about that. There were several systems that mostly behaved that way before the Macintosh. The idea was to eventually move towards a system with persistent object store and direct manipulation of objects, eliminating the need for applications and allowing easy reuse among applications. Generally, the way that was implemented at the time (early 1980's) was via "workspaces" in which all data and objects lived, together with implementations in safe languages that allowed code to co-exist in a single address space.
What killed this approach was, in fact, the Macintosh, later copied by Microsoft. Using Pascal and later C++ as its language made it very difficult for "applications" to safely coexist or to share data in memory. The Macintosh and Windows merely put a pretty face on a DOS-like system of files and applications.
I'd still recommend reading Raskin's book. It does have historical significance, and it gives you a good idea of what mainstream thinking in UI design is. Raskin himself, of course, has contributed significantly to the state of the art and the body of knowledge he describes. There are some ideas in there that are probably not so widely known and that may be helpful.
But don't turn off your brain while reading it, and don't take it as gospel truth. UI design is not a science, and many of the connections to experimental results are tenuous at best. And a lot more goes into making a UI successful than how quickly a user can hit a button. If anybody really knew how to design a UI that is significantly better in practice than what we have, it would be obvious from their development of highly distinctive, novel systems and applications.
Change is a good thing (Score:2)
Computer interfaces really haven't changed in at least the last ten years. They have gotten prettier, maybe faster, but there has been no fundamental change in all that time. In fact, as far as I can see they have only changed twice in computing history (let me know if I am wrong; lord knows I am not omniscient). They changed from simple printouts/registers to character-based interfaces, then from character-based interfaces to our present GUIs.
I can't see this lack of change as a good thing. In an industry where rapid change is standard, it surprises me that interfaces have remained stagnant. So if this book can foster some original thinking and perhaps some newer, more efficient designs for interfaces, we should probably take it seriously, no matter how controversial the content.
As always, just my 2c.
It May Seem Backwards (Score:2)
I personally just purchased this little ditty at Border's not four nights ago, and flew through it. What Raskin does, and why in the end you cannot help but agree with him, is walk up to each concept and say, clearly: yes, UI designers often think this way, but I think they are forgetting x and y, and should be thinking of z. Very similar in voice to Aquinas's Summa Theologica.
What he points out, quite poignantly, is that engineers design the interfaces, not ordinary people. We engineers are a different breed, and when creating our interactions we tend more towards Neal Stephenson's essay "In the Beginning...", where every intimate interaction with the machine, as well as each step in the process, should be visible to us.
However, as Raskin states, this is rarely translated into an interface that makes its current state obvious to its user; instead, engineers often assume the user is a vast idiot.
Unlike Tog, who claims to be the definitive source on interfaces, Raskin admits that his ideas are difficult to actually put into an interface, and instead seems to prefer that the book be used as a guideline for meeting halfway.
Raskin also explains, in very simple terms, how the human mind is thought to work, and asks: if these fundamentals are true, then why does working with a computer feel like x when, on reflection, it should feel like z?
For instance, perhaps the most harped-upon item in the work is the idea of modes. Raskin believes the human brain takes about 10 seconds to switch gears between two tasks, and that modes actually slow a user down. He seems to believe there should be only one mode, 'edit', giving the user complete control of all aspects of the document at any time. Most UI people believe modes save users from themselves, allowing certain changes to a document only when in mode x, which you have to click 'OK' to enter. However, Raskin resolves this argument by simply calling for a universal Undo/Redo button (as found on the Canon Cat), the inability to delete anything permanently, and automatic saving of documents.
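A toy sketch of what a linear edit history looks like in Python; Raskin's actual proposal is richer than this, so treat it purely as illustration:

    class Document:
        def __init__(self, text=""):
            self.history = [text]   # every state the document has been in
            self.pos = 0            # which state is currently showing

        def edit(self, new_text):
            # A plain linear history drops the redo branch here; a true
            # "nothing is ever lost" system would keep it as a tree.
            del self.history[self.pos + 1:]
            self.history.append(new_text)
            self.pos += 1

        def undo(self):
            self.pos = max(0, self.pos - 1)
            return self.history[self.pos]

        def redo(self):
            self.pos = min(len(self.history) - 1, self.pos + 1)
            return self.history[self.pos]

With autosave on top of something like this, "Are you sure?" dialogs and protective modes become unnecessary, which is the whole point.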
One of the more intricate ideas, and the one everyone seems to wonder about, is Raskin's proposal to remove file hierarchies. Here's how Raskin believes it should work: within a machine there are only documents. The system itself is invisible to the user. You don't launch applications or move around system extensions; you just work. With that removed, you then look upon the hard drive as an infinitely long scroll, where documents simply occupy blocks of space and can grow within their space indefinitely. While Raskin never explains exactly how this will work as hard drives become radically more crammed, I don't think it was ever his intention to explain exactly how to do anything.
What many reviewers miss is Raskin's assertion that his ideas, the sum of this book, are just that: ideas and guidelines. Basically the book is a statement of "In an ideal world, this would happen." However, he does provide several methods for testing current interfaces, as well as examining ways to improve them now.
I recommend it as shelf material for anyone who works day-to-day in UI. At its best, the book is a profound guide to the ideal that computers, or any device for that matter, should be less complicated, more thought-out structures.