When Good Interfaces Go Crufty
An anonymous reader writes "A good article over at mpt on why good interfaces go crufty." A nice read before or after a visit to the Interface Hall of Shame.
In the long run, every program becomes rococo, and then rubble. -- Alan Perlis
My cruft-o-meter: (Score:5, Funny)
Re:My cruft-o-meter: (Score:5, Insightful)
More programs really should allow that sort of functionality. Now I have to see if anybody is working on it for The Gimp.
- RustyTaco
A bit of history... (Score:5, Informative)
yes kids, that's right! the c64 had a GUI.
Re:A bit of history... (Score:3, Informative)
http://www.cmdrkey.com/
You can find info on Wheels, the GEOS upgrade at:
http://userdata.ia4u.net/maurice/gbrowse/whshot
Or, check ebay. Copies of GEOS 2.0 show up there all the time, usually for only $10 or so.
When good interfaces go crufty (Score:3, Interesting)
In Vernor Vinge's sci-fi novel A Fire Upon the Deep [rambles.net], he presents the idea of software archeology. Vinge's future has software engineers spending large amounts of time digging through layers of decades-old code in a computer system, like layers of dirt and rubbish in real-world archeology, to find out how, or why, something works.
So far, in 2002, this problem isn't so bad. We call such electronic garbage cruft [ddj.com], and promise to get rid of it [refactoring.com] someday. But it's not really important right now, we tell ourselves, because computers keep getting faster [intel.com], and we haven't quite got to the point where single programs [mozilla.org] are too large [blogspot.com] for highly coordinated teams to understand.
But what if cruft makes its way into the human-computer interface? Then you have problems, because human brains aren't getting noticeably faster. (At least, not in the time period we're concerned with here.) So the more cruft there is in an interface, the more difficult it will be to use.
Unfortunately, over the past 20 years, I've noticed that cruft has been appearing in computer interfaces. And few people are trying to fix it. I see two main reasons for this.
Microsoft and Apple don't want to make their users go through any retraining, at all, for fear of losing market share. So rather than make their interfaces less crufty, they concentrate on making everything look pretty [theregister.co.uk].
Here are a few examples of interface cruft.
In the 1970s and early '80s, transferring documents from a computer's memory to permanent storage (such as a floppy disk) was slow. It took many seconds, and you had to wait for the transfer to finish before you could continue your work. So, to avoid disrupting typists, software designers made this transfer a manual task. Every few minutes, you would save your work to permanent storage by entering a particular command.
Trouble is, since the earliest days of personal computers, people have been forgetting to do this, because it's not natural. They don't have to save when using a pencil, or a pen, or a paintbrush, or a typewriter, so they forget to save when they're using a computer. So, when something bad happens, they've often gone too long without saving, and they lose their work [adweek.com].
Fortunately, technology has improved since the 1970s. We have the power, in today's computers, to pick a sensible name for a document, and to save it to a person's desktop as soon as she begins typing, just like a piece of paper in real life. We also have the ability to save changes to that document every couple of minutes (or, perhaps, every paragraph) without any user intervention.
We have the technology. So why do we still make people save each of their documents, at least once, manually? Cruft.
The original Macintosh, which introduced graphical interfaces to the general public, could only run one program at a time. If you wanted to use a second program, or even return to the file manager, the first program needed to be unloaded first. To make things worse, launching programs was slow, often taking tens of seconds.
This presented a problem. What if you had one document open in a program, and you closed that document before opening another one? If the program unloaded itself as soon as the first document was closed, the program would need to be loaded again to open the second document, and that would take too long. But if the program didn't unload itself, you couldn't launch any other program.
So, the Mac's designers made unloading a program a manual operation. If you wanted to load a second program, or go back to the file manager, you first chose a menu item called Quit to unload the first program. And if you closed all the windows in a program, it didn't unload by itself; it stayed running, usually displaying nothing more than a menu bar, just in case you wanted to open another document in the same program.
Trouble is, the Quit command has always been annoying and confusing people, because it's exposing an implementation detail: the lack of multitasking in the operating system. It annoys people, because occasionally they choose Quit by accident, losing their careful arrangement of windows, documents, toolboxes, and the like with an instantaneity which is totally disproportionate to how difficult it was to open and arrange them all in the first place. And it confuses people, because a program can be running without any windows being open, so even when all open windows belong to the file manager (which is now always running in the background), menus and keyboard shortcuts get sent to the invisible program instead, producing unexpected behavior.
Fortunately, technology has improved since 1984. We have the power, in today's computers, to run more than one program at once, and to load programs in less than five seconds.
We have the technology. So why do we still punish people by including Quit or Exit menu items in programs? Cruft.
As I said, the original Macintosh could only run one program at a time. If you wanted to use a second program, or even return to the file manager, the first program needed to be unloaded first.
This presented a problem when opening or saving files. The obvious way to open a document is to launch it (or drag it) from the file manager. And the obvious way to save a document in a particular folder is to drag it to that folder in the file manager. But on the Mac, if another program was already running, you couldn't get to the file manager. What to do? What to do?
So, the Mac's designers invented something called a file selection dialog, or filepicker: a lobotomized file manager, for opening and saving documents when the main file manager wasn't running. If you wanted to open a document, you chose an Open menu item, and navigated your way through the filepicker to the document you wanted. Similarly, if you wanted to save a document, you chose a Save menu item, entered a name for the document, and navigated your way through the filepicker to the folder you wanted.
Trouble is, this interface has always been awkward to use, because it's not consistent with the file manager. If you're in the file manager and you want to make a new folder, you do it one way; if you're in a filepicker and you want to make a new folder, you do it another way. In the file manager, opening two folders in separate windows is easy; in a filepicker, it can't be done.
Fortunately, technology has improved since 1984. We have the power, in today's computers, to run more than one program at once, and to run the file manager all the time. We can open documents from the file manager without quitting all other programs first, and we can save copies of documents (if necessary) by dragging them into folders in the file manager.
We have the technology. So why do we still make people use filepickers at all? Cruft.
This last example is particularly nasty, because it shows how interface cruft can be piled up, layer upon layer.
In Microsoft's MS-DOS operating system, the canonical way of identifying a file was by its pathname: the concatenation of the drive name, the hierarchy of directories, and the filename, something like C:\WINDOWS\SYSTEM\CTL3DV2.DLL. If a program wanted to keep track of a file (in a menu of recently-opened documents, for example), it used the file's pathname. For backward compatibility with MS-DOS, all Microsoft's later operating systems, right up to Windows XP, do the same thing.
Trouble is, this system causes a plethora of usability problems in Windows, because filenames are used by humans.
What if a human renames a document in the file manager, and later on tries to open it from that menu of recently-opened documents? He gets an error message complaining that the file could not be found.
What if he makes a shortcut to a file, moves the original file, and then tries to open the shortcut? He gets an error message, as Windows scurries to find a file which looks vaguely similar to the one the shortcut was supposed to be pointing at.
What happens if he opens a file in a word processor, then renames it to a more sensible name in the file manager, and then saves it (automatically or otherwise) in the word processor? He gets another copy of the file with the old name, which he didn't want.
What happens if a program installs itself in the wrong place, and our fearless human moves it to the right place? If he's lucky, the program will still work, but he'll get a steady trickle of error messages the next time he launches each of the shortcuts to that program, and the next time he opens any document associated with the program.
Fortunately, technology has improved since 1981. We have the power, in today's computers, to use filesystems which store a unique identifier for every file, separate from the pathname, such as the file ID [apple.com] in the HFS and HFS+ filesystems, or the inode [webopedia.com] in most filesystems used with Linux and Unix. In these filesystems, shortcuts and other references to particular files can keep track of these unchanging identifiers, rather than the pathname, so none of those errors will ever happen.
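For the curious, here is a minimal sketch of the idea in Python on a Unix-like filesystem: a (device, inode) pair survives renames within the same filesystem, so a recent-documents list that stores it instead of the pathname keeps working. The helper names and the brute-force rescan are purely illustrative, and note that inode numbers can be recycled once a file is deleted.

```python
import os

def file_key(path):
    """A (device, inode) pair identifies a file even after it is renamed
    or moved within the same filesystem."""
    st = os.stat(path)
    return (st.st_dev, st.st_ino)

def find_by_key(key, search_root):
    """Walk search_root and return the current path of the file with this key,
    or None if it no longer exists. A real system would index this, not rescan."""
    for dirpath, _, filenames in os.walk(search_root):
        for name in filenames:
            candidate = os.path.join(dirpath, name)
            try:
                if file_key(candidate) == key:
                    return candidate
            except OSError:
                continue
    return None

# A "recent documents" entry stores the key, not the pathname:
#   key = file_key("/home/alice/report.txt")
#   ... the user renames the file in the file manager ...
#   find_by_key(key, "/home/alice")   # still locates it under its new name
```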
We have the technology. So why does Windows still suffer from all these problems? Cruft.
Lest it seem like I'm picking on Microsoft, Windows is not the worst offender here. GNU/Linux applications are arguably worse, because they could be avoiding all these problems (by using inodes), but their programmers so far have been too lazy. At least Windows programmers have an excuse.
To see how the next bit of cruft follows from the previous one, we need to look at the mechanics of dragging and dropping. On the Macintosh, when you drag a file from one folder to another, what happens is fairly predictable.
Windows has a similar scheme, for most kinds of files. But as I've just explained, if you move a program in Windows, every shortcut to that program (and perhaps the program itself) will stop working. So as a workaround for that problem, when you drag a program from one place to another in Windows, Windows makes a shortcut to it instead of moving it, and lands in the Interface Hall of Shame [iarchitect.com] as a result.
Naturally, this inconsistency makes people rather confused about exactly what will happen when they drag an item from one place to another. So, rather than fixing the root problem which led to the workaround, Microsoft invented a workaround to the workaround. If you drag an item with the right mouse button, when you drop it you'll get a menu of possible actions: move, copy, make a shortcut, or cancel. That way, by spending a couple of extra seconds choosing a menu item, you can be sure of what is going to happen. Unfortunately this earns Microsoft another citation in the Interface Hall of Shame for inventing the right-click-drag, perhaps the least intuitive operation ever conceived in interface design. Say it with me: Cruft.
So, Windows designers made a slight tweak to the way shortcut menus work. Instead of making them open when the right mouse button goes down, they made them open when the right mouse button comes up. That way, they can tell the difference between a right-click-drag (where the mouse moves) and a right-click-I-want-a-shortcut-menu (where it doesnt).
Trouble is, that makes the behavior of shortcut menus so much worse that they end up being pretty useless as an alternative to the main menus.
They take nearly twice as long to use, since you need to release the mouse button before you can see the menu, and click and release a second time to select an item.
They're inconsistent with every other kind of menu in Windows, which opens as soon as you push down on the mouse button.
Once you've pushed the right mouse button down on something which has a menu, there is no way you can get rid of the menu without releasing, clicking the other mouse button, and releasing again. This breaks the basic GUI rule that you can cancel out of something you've pushed down on by dragging away from it, and it slows you down still further.
In short, Windows native shortcut menus are so horrible to use that application developers would be best advised to implement their own shortcut menus which can be used with a single click, and avoid the native shortcut menus completely. Once more, with feeling: Cruft.
Meanwhile, we still have the problem that programs on Windows can't be moved around after installation, otherwise things are likely to break. Trouble is, this makes it rather difficult for people to find the programs they want. In theory you can find programs by drilling down into the Program Files folder, but they're arranged rather uselessly (by vendor, rather than by subject), and if you try to rearrange them for quick access, stuff will break.
So, Windows designers invented something called the Start menu, which contained a Programs submenu for providing access to programs. Instead of containing a few frequently-used programs (like Mac OS's Apple menu did, before OS X), this Programs submenu has the weighty responsibility of providing access to all the useful programs present on the computer.
Naturally, the only practical way of doing this is by using multiple levels of submenus, thereby breaking Microsoft's own guidelines about how deep submenus should be.
And naturally, rearranging items in this menu is a little bit less obvious [microsoft.com] than moving around the programs themselves. So, in Windows 98 and later, Microsoft lets you drag and drop items in the menu itself, thereby again breaking the general guideline about being able to cancel a click action by dragging away from it.
This Programs menu is the ultimate in cruft. It is an entire system for categorizing programs, on top of a Windows filesystem hierarchy which theoretically exists for exactly the same purpose. Gnome and KDE, on top of a Unix filesystem hierarchy which is even more obtuse than that of Windows, naturally copy this cruft with great enthusiasm [freedesktop.org].
Following those examples, it's necessary to make two disclaimers.
Firstly, if you've used computers for more than six months, and become dulled to the pain, you may well be objecting to one or another of the examples. "Hey!", you're saying. "That's not cruft, it's useful!" And, no doubt, for you that is true. In human-computer interfaces, as in real life, horrible things often have minor benefits to some people. These people manage to avoid, work around, or blame on user stupidity, the large inconvenience which the cruft imposes on the majority of people.
Secondly, there are some software designers who have waged war against cruft [joesacher.com]. Word Place's Yeah Write [yeahwrite.com] word processor abolished the need for saving documents. Microsoft's Internet Explorer for Windows [microsoft.com], while having many interface flaws [phrasewise.com], sensibly abolished the Exit menu item. Acorn's RISC OS [riscos.org] abolished filepickers. The Mac OS uses file IDs to refer to files, avoiding all the problems I described with moving or renaming. And the ROX Desktop [sourceforge.net] eschews the idea of a Start menu, in favor of using the filesystem itself to categorize programs.
However, for the most part, this effort has been piecemeal and on the fringe. So far, there has not been a mainstream computing platform which has seriously attacked the cruft that graphical interfaces have been dragging around since the early 1980s.
So far.
Discuss [phrasewise.com]
Code Archaeologist (Score:4, Insightful)
If code is working and shipping you don't throw it away. What I did was decouple the various patterns that I found and made something that was more modern. I did all of this work in C. It involved a lot of grepping and creating interfaces.
Just because code is old and kludgey doesn't mean that it is not valuable. Elegance is getting paid a million dollars for a device that only costs a fraction of that to manufacture.
Bottom line: if you don't have the cash you can't stay in business.
There is a difference between bad code and old idiom code. Archaic code that is shipping and works is much more valuable than pie-in-the-sky new code that no one wants.
somewhat OT (Score:5, Insightful)
I just wanted to know, WHY on earth MS would use the directory name 'Program Files' when so often installers and path names, etc. can only work with the 8.3 format and end up calling it 'PROGRA~1'. Plus, the space in the file path screws up some apps... just WTF were they thinking? Why not call it 'Applications'? At least that abbreviates to 'APPLIC~1' which sounds slightly less silly
Re:somewhat OT (Score:4, Insightful)
Re:somewhat OT (Score:3, Informative)
Re:somewhat OT (Score:3, Informative)
Re:somewhat OT (Score:5, Informative)
Re:somewhat OT (Score:3, Interesting)
The only thing that causes problems is c:\windows\desktop, as "desktop" does get translated (it's c:\windows\bureaublad in Dutch).
Re:somewhat OT (Score:5, Funny)
Really? Then why does my Win2K install have C:\WINNT, but no C:\Windows?
Re:somewhat OT (Score:2)
I just wanted to know, WHY on earth MS would use the directory name 'Program Files' when so often installers and path names, etc. can only work with the 8.3 format and end up calling it 'PROGRA~1'. Plus, the space in the file path screws up some apps... just WTF were they thinking? Why not call it 'Applications'?
Actually, "Programs" would have been better. No abbreviations.
Michael
Re:somewhat OT (Score:2)
Everything is fine it seems, until the first application installs itself into "C:\Program Files\TheApp" anyway, since it doesn't use the proper Win32 API calls to get the localized "special folder names". Arrgh.
With around 30 apps in the Program dir, I always seem to have 3 or so apps in a "Program Files" dir. At least I know the developer of the programs since it's very apparent and I can complain if I wish.
Re:somewhat OT (Score:3, Funny)
All this needs is just some tweaking in the registry and a few tricks, and you never have to live with Bill's insane directory-name choices again...
Same for the start menu, I just organize it as topics. It's not hard to do, and most people would do it if they wouldn't be afraid of breaking everything. Because, just deal with it: users are scared of "breaking their computer". I actually learned a lot by breaking my computer, but that was in the DOS days and with PCTools in my hands. I now know why my dad made backups so often
Re:somewhat OT (Score:5, Insightful)
This one's easy actually - a friend of mine independently came to the same conclusion as me on this one, which is that Microsoft deliberately chose "Program Files" as both a 'long filename' and a filename with a space in it precisely to speed the adoption of long filenames. They did it to bring into sharp relief any program that didn't support LFN properly. Remember, Windows 95 was the time when they introduced their "Designed for Windows" logo, which at the time was a pretty big deal, and as far as I can remember, pretty much mandated support for LFN.
The PROGRA~1 is ugly, but it only happens on old programs - I certainly now use it as an indicator of quality in a Windows app (it reflects how much the author respects the user experience).
Now, if you want a real gripe, I hate the way most apps just plain don't work if you install them somewhere other than Program Files. I also hate the way most apps have a slavish belief in whatever path information they stored in the registry, meaning you can't ever move an installed app. I try to make my own apps as location agnostic as possible (Mac users: feel free to gloat at this point, with considerable justification).
Tim
Crufts - Not only software! (Score:5, Informative)
It comes out of designing without taking into account user actions and reactions. This subject is un-fashionably called "Industrial Design", but is becoming fashionable again....
Re:Crufts - Not only software! (Score:5, Insightful)
Good user interface is hard and even though the author claims that "we have the technology now", some of the ideas just reflect his personal preference and are not really the obviously better design. Many times the interface which users would find "working as expected" requires nothing short of magic (or artificial intelligence, whichever arrives first). "Industrial design" has one major advantage over user oriented design: It's learnable. Learning a system which itself adapts its behaviour to the user can be really frustrating and time consuming because it follows more complicated hidden rules. That's why "power users" turn off as many automagic functions as possible.
The real user interface crimes are when well researched principles of perception are ignored: Making every icon round by dropping the actual icon into a marble of colored glass may be pleasing to the eye, but it's working against the way we recognize patterns. Adding bevelled lines around and between everything, even when there is no logical or functional separation, makes user interfaces distracting. And those are just the worst offenders in the graphical representation area.
Re:Crufts - Not only software! (Score:5, Insightful)
Re:Crufts - Not only software! (Score:4, Informative)
Re:Crufts - Not only software! (Score:2)
Like your html code which breaks the display on my browser: you must close the html tags in the same order you opened them:
<b><u><font>"your title"</font></u></b>
Fix it so I can read it, please.
Re:Crufts - Not only software! (Score:2)
Re:Crufts - Not only software - dogs too!!!! (Score:3, Interesting)
Why do we have to save our work by hand? (Score:4, Insightful)
On the contrary, my dear Watson. What if I change something in my Word document, but later on decide it was no good and wish to discard it? Nope, sorry. My old document is already overwritten with no turning back. Or is he suggesting that everyone should always make a copy of a document before editing it, just in case? Wouldn't THAT seem terribly unintuitive?
The "Save" function is one thing that separates the word processor from a real pen and paper, and it certainly has its uses.
Re:Why do we have to save our work by hand? (Score:2, Insightful)
Indeed, maybe they will happen; they would provide a great reason for corporate users to upgrade, and would potentially be a better way for somebody like Microsoft to protect their turf than obfuscating file formats.
Re:Why do we have to save our work by hand? (Score:5, Insightful)
I really like the article's idea. I've lost a lot of work in my lifetime due to software crashes, power outages, or clicking things without thinking. On the other hand, it's not often that I change things temporarily and then revert back to the saved version. (Probably 20 to 1 ratio) With this paradigm, it'd be easy to get in the habit of marking a document as 'temporary' with all the benefits.
It might make even more sense when content management platforms mature. These platforms keep track of different versions of a document, allowing you to revert back or see document evolution with ease. Then you can have it both ways, your latest changes will always be saved, and you can revert to previous versions. But of course, then you'd have the non-intuitive 'Save This as New Version' button, since you wouldn't be saving your documents manually anymore.
Re:Why do we have to save our work by hand? (Score:5, Insightful)
There's no need for a "save as new version" button: the program will do that automatically when you exit, when you switch to a different program, or when the timeout/max diff condition occurs.
What is needed in addition is something that should be intuitively obvious: "create a new document based on this one". This will create a "fork", and doing so will cause the program to ask you by what name you'd like to refer to the new document (as it should whenever you create a new document). Perhaps this is what you were talking about.
We've gotten so used to working with low-level files that methodologies like this get discarded automatically by developers. But that should be done only if there's a lot of hard data that shows that users actually have a harder time dealing with it that way. That may be true of users who are used to dealing with files, but I strongly suspect people who are new to computers will have an easier time with an application that doesn't know about "files" but only about "documents". The system should keep track of the mapping between the two, and the filestore should never be seen directly except with a tool designed to manage it.
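As a rough illustration of that document/file split (all names here are hypothetical, not taken from any real application), the application could hand naming and placement to a small store and only ever talk about titles:

```python
import json, os, shutil, time

class DocumentStore:
    """Toy sketch: the user names documents; the store decides where the bytes live."""

    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)
        self.index_path = os.path.join(root, "index.json")
        self.index = {}
        if os.path.exists(self.index_path):
            with open(self.index_path) as f:
                self.index = json.load(f)

    def _new_file(self):
        return os.path.join(self.root, f"{time.time_ns()}.txt")

    def _flush(self):
        with open(self.index_path, "w") as f:
            json.dump(self.index, f)

    def save(self, title, text):
        """Called automatically (on a timer, on app switch, on exit) -- never by a Save button."""
        path = self.index.setdefault(title, self._new_file())
        with open(path, "w") as f:
            f.write(text)
        self._flush()

    def fork(self, title, new_title):
        """'Create a new document based on this one' -- the explicit branch point."""
        new_path = self._new_file()
        shutil.copyfile(self.index[title], new_path)
        self.index[new_title] = new_path
        self._flush()
```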
All IMHO, of course.
Re:Why do we have to save our work by hand? (Score:3, Insightful)
If you insist on complicating computers beyond pen and paper, at least use the Undo button, which already exists and whose use is fairly intuitive. To see what your work looked like 6 months ago, simply click Undo 60,000 times.
Re:Why do we have to save our work by hand? (Score:3, Funny)
I'm not saying it wouldn't be neat; you would basically have enough data to do an instant-replay of the entire document creation process. But it doesn't sound too practical to me.
Re:Why do we have to save our work by hand? (Score:3, Insightful)
Like an eraser, or a bottle of tippex, that's what the "undo" button is there for. All this means is that the save process has to be a bit more sophisticated and store the last n changes.
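A sketch of that "store the last n changes" idea in Python; a real editor would record diffs rather than whole snapshots, but the shape is the same:

```python
from collections import deque

class UndoHistory:
    """Keep the last n document states; older ones fall off automatically."""

    def __init__(self, n=100):
        self.states = deque(maxlen=n)

    def record(self, snapshot):
        """Call this whenever the document changes (or is auto-saved)."""
        self.states.append(snapshot)

    def undo(self):
        """Throw away the current state and return the previous one."""
        if not self.states:
            return None
        if len(self.states) > 1:
            self.states.pop()
        return self.states[-1]
```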
Q.
Re:Why do we have to save our work by hand? (Score:2, Insightful)
Re:Why do we have to save our work by hand? (Score:4, Interesting)
A more sophisticated file system could help us there. During the day, we rsync [samba.org] the development areas every 15 minutes. It takes a trivial amount of space and CPU time. Yet for years I was stuck in the metaphor of doing nightly backups and telling folks they couldn't get back the files they changed in the morning.
The point is that saving files or versions in case we stuff things up shouldn't be our problem. We should have 'hard' commit points (this is a published document/reviewed code). Between them, 'soft' checkpointing could be managed by the OS.
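Something like that 15-minute soft checkpoint can be scripted in a few lines; the paths below are made up, and rsync's -a/--delete flags do the mirroring:

```python
import subprocess
import time

SRC = "/home/dev/work/"            # illustrative paths, not from the comment
DEST = "/backup/work-checkpoint/"

def soft_checkpoint():
    # -a preserves permissions/timestamps, --delete mirrors removals;
    # only changed files are copied, so repeated runs are cheap.
    subprocess.run(["rsync", "-a", "--delete", SRC, DEST], check=True)

if __name__ == "__main__":
    while True:
        soft_checkpoint()
        time.sleep(15 * 60)        # every 15 minutes, as in the comment
```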
Re:Why do we have to save our work by hand? (Score:3, Insightful)
What should happen is:
If the software/computer crashes, on the next startup, it prompts the user that it has a file stored; would you like to open it? Options are open, leave or delete.
Some of these options are already available in other software; vi will store its buffers if it's killed off, and Word (and other word processors) have autosave. It's not rocket science to implement the missing features.
Re:Why do we have to save our work by hand? (Score:5, Insightful)
File systems that support multiple streams (like NTFS) could save undo information in a separate stream. "Not everyone has such a file system," you might say. I say, whatever -- if we're talking about moving forward here, we'll have to go past FAT and other beginner's file systems.
We're not talking about taking away something that's required for usability today. We're talking about improvements for the next generation. Get over your "Save" command. You'll be able to undo beyond the automatic save.
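As a rough sketch of what that could look like: on Windows, an NTFS alternate data stream is addressed as file:stream through the ordinary file APIs, so undo data could ride along with the document. The stream name here is invented for illustration, and none of this works on FAT:

```python
# Windows + NTFS only: "report.doc:undo" names an alternate data stream of the
# same file, reachable through normal open() calls. FAT has nothing comparable.
DOC = "report.doc"   # illustrative filename

def save_undo_stream(undo_bytes: bytes) -> None:
    with open(DOC + ":undo", "wb") as stream:
        stream.write(undo_bytes)

def load_undo_stream() -> bytes:
    try:
        with open(DOC + ":undo", "rb") as stream:
            return stream.read()
    except FileNotFoundError:
        return b""
```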
Re:Why do we have to save our work by hand? (Score:3, Insightful)
The correct solution is version control built in at the OS level. This would mean all file types would have to have a useful diff defined.
That would also allow multiple people to work on the same document, with control over how their changes are merged and so on. After all, all these tools were developed for source control not because they are related to programming, but because they are related to editing; any application which can be seen as an editing operation could benefit.
Re:Why do we have to save our work by hand? (Score:4, Informative)
VMS has done this for a decade or more. Every time you edit a file, you get 'file.txt;1', 'file.txt;2' and so on, which you can pick up at any point and continue editing. It's semantically similar to cvsfs, where every save of a file creates a new revision. Implementing cvsfs globally could be "A Good Thing[tm]" overall.
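A toy version of the VMS "file.txt;N" behaviour, assuming a Unix filesystem where ';' is a legal filename character:

```python
import os
import re

def versioned_save(name: str, text: str) -> int:
    """Write 'name;N', where N is one higher than any existing version,
    in the spirit of VMS. Windows would need a different separator."""
    pattern = re.compile(re.escape(name) + r";(\d+)$")
    versions = [int(m.group(1)) for f in os.listdir(".")
                if (m := pattern.match(f))]
    n = max(versions, default=0) + 1
    with open(f"{name};{n}", "w") as f:
        f.write(text)
    return n

# versioned_save("file.txt", "first draft")   -> writes file.txt;1
# versioned_save("file.txt", "second draft")  -> writes file.txt;2
```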
Re:Why do we have to save our work by hand? (Score:2, Insightful)
"But versioning contains the concept of saving", you say. Not necessary, have a look at the "undo"-feature. Undo is a simple and crude form of versioning, without any mentioning of saving.
How about an undo that lets you say "take this document back to the way it looked two hours ago"?
Think about it.
Re:Why do we have to save our work by hand? (Score:2)
Re:Why do we have to save our work by hand? (Score:3, Insightful)
I've never had to click "save" on my palm pilot. To undo a change you simply click "undo". 'Nuf said.
Re:Why do we have to save our work by hand? (Score:5, Interesting)
I wrote a GTK+ text editor that saves the document on every keystroke with a frequency limiter so that you don't save more than once every 5 seconds. It used a background thread for the saving so the user interface didn't hiccup, and every save file contained a complete undo/redo history to the beginning of the document's life. It had no save buttons, only "open" and "close". I never finished it because it was plaintext-only anyways, so nobody was ever gonna use it.
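The design described there, a per-keystroke save throttled to once every five seconds and performed off the UI thread, looks roughly like this sketch (a plain-Python illustration, not the poster's actual GTK+ code):

```python
import threading
import time

class AutoSaver:
    """Accept every keystroke, but write to disk at most once per `interval`
    seconds, from a background thread so the UI never blocks."""

    def __init__(self, path, interval=5.0):
        self.path = path
        self.interval = interval
        self.lock = threading.Lock()
        self.pending = None          # latest full text, or None if already saved
        self.last_save = 0.0
        threading.Thread(target=self._worker, daemon=True).start()

    def on_keystroke(self, full_text):
        with self.lock:
            self.pending = full_text   # cheap: just remember the newest text

    def _worker(self):
        while True:
            time.sleep(0.2)
            with self.lock:
                due = (time.time() - self.last_save) >= self.interval
                if self.pending is None or not due:
                    continue
                text, self.pending = self.pending, None
                self.last_save = time.time()
            with open(self.path, "w") as f:   # disk I/O happens outside the lock
                f.write(text)
```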
Re:Why do we have to save our work by hand? (Score:4, Insightful)
IMHO all desktop apps should have built in version control, so instead of File->Save you do File->Tag this version and give it a description. All editing changes are saved to disk immediately they are made (this is only a few bytes per second, no problem on modern machines) and you're prompted to make another version tag before quitting the app.
There's no longer any disk space argument against saving all versions of the document, all the time. At least not for wordprocessing and most 2d graphics, small spreadsheets etc.
Collaborative working with merging in different sets of changes (a la CVS) would be tricky to implement, depending on the application: it might require storing a list of commands executed rather than the current state of the document.
Re:Why do we have to save our work by hand? (Score:3, Interesting)
I'm currently evaluating a new IDE for developers in my area, IntelliJ IDEA 3. Anyway, it has gone away from the Save-button paradigm as well. Whilst there is still a manual save button, which you can hit whenever you like, it background saves a lot. Whenever you compile it autosaves, whenever you close a file it autosaves (without prompting) and whenever the app window loses focus it autosaves all open files.
The way it gets around the "but I didn't mean to save those changes" problem is with a local VCS. Every time it does an autosave it keeps a version, and automatically deletes those older than x days (configurable). This works alongside your "real" version control (say CVS or Bitkeeper) - essentially the local one protects you from those "oops" moments when you accidentally write over a modified file, and the external VCS does what it does now, holding actual useful revisions for a file forever.
You could compare this system to a multi-level undo feature which spans saves, but it's better than that: since it's a proper VCS, you can visually diff between versions, label particular versions, etc. It takes a while to get used to the "what do you mean it's already saved!" aspect, but it's really very neat, and it means you essentially will never lose any work again.
Oh and yes, it ends up using a fair amount of disc space, but so what? It's a lot cheaper than my time. I could easily see a similar system working well with a word processor; in fact Word has something similar with its revision tracking. All you have to add in is the ultra-frequent non-obtrusive autosaves, and removal of unimportant old versions to keep size manageable.
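A crude approximation of that kind of local history, snapshotting on each autosave and pruning anything older than a configurable number of days; the directory layout here is invented, not IDEA's:

```python
import os
import shutil
import time

HISTORY_DIR = ".local-history"   # illustrative location
KEEP_DAYS = 3                    # the configurable "x days"

def snapshot(path):
    """Copy the file into the local history under a timestamped name."""
    os.makedirs(HISTORY_DIR, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    shutil.copy2(path, os.path.join(HISTORY_DIR, f"{os.path.basename(path)}.{stamp}"))

def prune():
    """Drop snapshots older than KEEP_DAYS so the history stays small."""
    cutoff = time.time() - KEEP_DAYS * 86400
    for name in os.listdir(HISTORY_DIR):
        full = os.path.join(HISTORY_DIR, name)
        if os.path.getmtime(full) < cutoff:
            os.remove(full)
```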
I recognize this... (Score:4, Informative)
I work day in, day out on a ten-year-old system. I do not use the term archeology; however, I frequently find what I call 'fossils': parts of code that are still there, but are never executed; fields of the database that should have been deleted but are still there, and are still updated, though no program ever uses them. A system has to be sufficiently large, however, to experience this. But it's actually funny to read about this.
Flawed (Score:5, Informative)
Fortunately, technology has improved since the 1970s. We have the power, in today's computers, to pick a sensible name for a document, and to save it to a person's desktop as soon as she begins typing, just like a piece of paper in real life. We also have the ability to save changes to that document every couple of minutes (or, perhaps, every paragraph) without any user intervention.
Yes we do, but for starters a computer is a tool. You tell the computer what to do, the computer does not tell you. Sure we have autosave, but any sensible application auto-saves to a different filename so that if you decide to abandon your changes, you can just quit, not save and revert back to your original format. If you quit a document, you'd still have to agree. What happens when you do want to commit those changes to your file but you don't want to quit? You have to "save".
Fortunately, technology has improved since 1984. We have the power, in today's computers, to run more than one program at once, and to load programs in less than five seconds.
Here the author obviously hasn't used a PocketPC. With the PPC it's very, very easy not to close applications. What happens? The system slows down to a crawl as it tries to run 5 or 6 different applications. Again, this is the user being in control of the computer. I want the ability to close applications when I'm not using them. That is my decision, not the computer's. It's the desktop analogy. Once I've finished with a book, I put it away because otherwise my desk gets cluttered. I don't leave it out, because otherwise my desk gets full and working becomes a problem. Sure, we could get around this by having the PC unload or suspend applications that aren't used in a while - but how does it decide? Just because I've not typed something into Word for the past 30 minutes doesn't mean that I'm not using it. You'd get to the point where the cleverness of the OS/application was causing me more hassle as it tried to be helpful and suspend stuff for me.
Fortunately, technology has improved since 1984. We have the power, in today's computers, to run more than one program at once, and to run the file manager all the time. We can open documents from the file manager without quitting all other programs first, and we can save copies of documents (if necessary) by dragging them into folders in the file manager.
What about if the application is taking over the whole of the desktop? I'll have to minimise and then drag. Having said that though, RISC OS (I think, the one on the Archimedes) used to allow that. You hit Save and the icon appeared for you to drag somewhere. Best thing was that you could drag from one application into another to have it load in there. Neat. But very weird.
As for the inode stuff, it sounds neat. But I know so little about that type of thing, I wouldn't even know if it's feasible.
So in short, some good ideas, but some of them just aren't practical or possible, and would end up being a bigger annoyance than what we have now.
Re:Flawed (Score:2, Insightful)
...nice in that you could see exactly where you were saving it, but not nice if you clicked "save" and typed in a name, only to realise the window you want to save it in isn't visible.
In the good old Acorn vs. Amiga wars of yore the Amiga's file requesters were a deadly weapon.
Phil, just me
Re:Flawed (Score:3, Informative)
Of course you need to have some means to identify a document. As mpt points out, pathnames are an extremely lousy way of doing so. Rather you want an inode with some associated metainformation which may or may not include a name. The whole concept of a name plus a three letter extension is flawed.
Each type of document has a number of useful metainformation items associated. Obvious ones are date of creation, last date of editing, user that created it. In the case of a bitmap, a small thumbnail might be handy. Of course users should be able to add descriptions and short names as meta info.
Most of this meta information can be generated automatically. There is no need to bother the user with this.
Take for example mp3 files. A major problem with these files is that they must have a filename and that they may also have meta information (which more often than not does not match either the filename or the contents of the file). You would want this the other way around. An mp3 file has meta information (like artist, title, track number, etc.). Based on this info, programs like file managers may query the meta info to generate a small name (e.g. artist - album - track - title) that is displayed on the screen. There is no need for this generated string to be the unique identifier for the file!
BeOS actually got this right. Every file in the BeOS filesystem could have an arbitrary number of meta attributes associated with it. Programs like mp3 players, mail readers etc. actually used this to organize data in the BeOS filesystem.
You are right that it is a huge undertaking to fix this, since it would require reengineering a lot of applications and operating systems. That was the whole point of mpt's article. Existing programs are an accumulation of decades of fundamental design errors. Many severe usability issues can be traced to these design errors from the seventies and eighties. Many programmers are unaware of this and have actually duplicated the errors in efforts to improve usability. Their workarounds are symptoms of, rather than solutions for, the problems.
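On Linux, extended attributes give a small taste of the BeOS approach: the metadata lives on the file itself, and a file manager could derive the displayed name from it. The attribute names and values below are made up for illustration, and the filesystem has to be mounted with user xattr support (ext4 is, by default):

```python
import os

PATH = "track01.mp3"   # illustrative file

# Attach metadata to the file itself, BeOS-style, via Linux user xattrs.
os.setxattr(PATH, "user.artist", b"Some Artist")
os.setxattr(PATH, "user.album",  b"Some Album")
os.setxattr(PATH, "user.title",  b"Some Title")

def display_name(path):
    """A file manager could build the name it shows from the attributes,
    instead of treating the on-disk filename as the file's identity."""
    def attr(name):
        try:
            return os.getxattr(path, name).decode()
        except OSError:
            return "?"
    return f"{attr('user.artist')} - {attr('user.album')} - {attr('user.title')}"
```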
Re:Flawed (Score:3, Insightful)
I don't think some of his proposed ideas are really solutions so much as alternate means of implementation. Take his thoughts on file saving, for example. As someone who spends time on and off working on larger files, I fully appreciate the flexibility I have now to save a file on MY terms: when I want, how often, and whether or not it will be saved as a new, incremental version of the original. Saving a large file in Painter or Photoshop takes time. I can't see any way in hell that an automated save will not become a huge source of annoyance. I sometimes start work on an image, and decide that I don't want to keep the changes that I've made. That should be my decision.
Personally, I think time would be much better spent on providing systems with transparent backup (give the electronic data a form of persistence closer to that of hard copy) so that recovering from a hard disk failure isn't such a traumatic experience - or even something that people need to worry about.
Skinning == crap! (Score:5, Insightful)
What is intuitive to us is what is standard -- adding new buttons with new pictures, new dials, and other things in a single instance interface only confuses everyone. Even if some of the properties are inefficient, regular GUI standards are the way to go.
Re:Skinning == crap! (Score:3, Funny)
That's what skinning is for! Just change the skin to something that doesn't have colorful, bubbly bitmaps!
Re:Skinning == crap! (Score:3, Insightful)
On the other hand, many 'power users' like to personalise their desktop. My background has purple penguins in ear muffs and my colours reflect this purple rather than the default blue.
If it was possible to change the colours easily in applications as well as the window manager, then I'd do so as well. Only those apps which allow for skinning, due to the over enthusiasm for graphics everywhere, allow changing the colours at all.
If skinning is bad, then why allow us to 'skin' our desktop by changing the background?
--Azaroth
Re:Skinning == crap! (Score:3, Insightful)
You say that 'power users' like to personalize their desktop. I'd say, at least in the Windows world, the opposite is true. It's the inexperienced users that get the biggest kick out of themes and GUI cruft. It gives them a false sense of control over their computer. Power users know that having a snake shaped cursor only gets in the way.
How many hardcore computer people do you know who run a copy of Webshots? Bonzi Buddy? It's all graphical masturbation. I'm glad that it makes you happy to get all Martha Stewart on your desktop. Unless it changes the functionality in some way, I simply don't care.
Linux is a different story because you get much more control over the GUI. You have the control to change how things work, not just how they look. That's a good thing. However, even under Linux, it seems like most skins try to:
Rip off MacOS
or
Rip off Windows
The skinners are like Spinal Tap with the volume turned up to 11. If transparency is good, they make everything at least partially transparent. If goofy, bubbly icons with drop shadows are trendy, they make the goofiest and bubbliest. I'm in the camp of keep it simple and make it work. Spending three days to get the metallic pastel alpha blend on the widgets "just right" doesn't do much for me.
If skinning is bad, then why allow us to 'skin' our desktop by changing the background?
So I can instantly tell which box I'm on when I use a flaky KVM? So Joe Luser can have his brain damaged offspring smiling back at him? Because $COMPETITORS_OS does it and they did it to compete? Just because you can do it doesn't make it a good idea.
Re:Skinning == crap! (Score:3, Insightful)
It also saves you the time and space you waste downloading a skin that is 10x the size of the tiny application you just installed, repeated for each individual application. What a waste.
Re:Skinning == crap! (Score:3, Insightful)
Answer me this... why do icons in Windows have titles underneath them? Why do ANY icons have titles underneath them? Do you even care what the picture is? No, you read the titles. Why? Because Microsoft failed to standardize on them and make them as commonly known as a picture of a "Stop" sign, or a "green light" as we see while driving every day.
What is intuitive is what works or what is used, not what is standard, because there are no standards in this space. Making "pretty pictures" under the buttons makes them understandable, in the absence of other descriptive features (such as a title).
Lazy Linux Programmers (Score:2, Funny)
No, the hackers aren't lazy - they're just too busy trying to ape the MS Windows look and feel....
Represents a Computer's Working (Score:2)
When saying this I'm not denying that there are flaws in the interfaces of today, and I do agree that the open source movement lacks much of the initiative to make things better. However, making all computer tasks behave as things in 'the real world' will not work. I'm not even sure if all metaphors used today are good. It's better to reflect the actual tool than to try to make it look like something that it is not; this will only result in disappointed and surprised users.
Re:Represents a Computer's Working (Score:2)
With modern technology, why shouldn't it write one character at a time to the disk? Or at least write it into the disk cache.. I know it's less than optimal but your word processor isn't going to stress the system out by doing this on a modern PC, and it would be much more intuitive to the user (assuming that the user wasn't already familiar with having to press the "save" button.)
Q.
Re:Represents a Computers Working (Score:3, Interesting)
In the Microwave Oven example, I don't think it is at all obvious to joe user how it works. All they know is that they can set it for a specific length of time, and maybe different power levels, and that it heats stuff up. I go a few steps beyond that in that I have a chemical physics degree (from a long time ago!) so that (if I dig deeply!) I can remember stuff about molecular orbitals and energy absorption etc., but I still don't know exactly how the microwaves are produced, because I never asked anyone - nor do I need to in order to get my hot ready meal every evening (I work long hours, OK? Don't hassle me over my eating habits!)
(before you all start explaining microwave ovens to me I really don't want to know - if I did I could work it out for myself or look it up!)
So, I maintain that a user really only needs to go as deep as required for the job they are doing, or possibly a tiny bit deeper for that little extra understanding. In the case of someone using a computer for word processing they need to know how to turn the machine on, how to start the word processor and how to type - they don't need to know about disk caches, or saving in blocks rather than bytes, or CPU cycles.. Knowing about these things does nothing to help them do their job. There's no reason *why* they should have to know these things - computers are there to help people, after all, not to hinder them.
It's the job of the software engineer (and the product manager, and maybe the UI designer) to make life as easy as possible for the user, based on the known knowledge that user has.
Q.
More cruft! (Score:5, Insightful)
1. The letter you are typing corresponds to a shortcut in the window, and the window happily closes itself, having done god-knows-what damage.
2. It slows down your pace, disturbs your thinking process, and by the time you close the window and move to the position you were in before, one more word gets added to your swearing vocabulary.
Re: and windows from other applications popping up (Score:2, Interesting)
While I really like GAIM for its all-in-one approach to messaging protocols, the authors deserve a kick in the balls for having windows that constantly raise to the front every time someone sends you a message. The result is, you are typing an e-mail or programming and, all of a sudden, what you typed ends up in the wrong window, simply because GAIM is receiving an incoming message for you. Bad, Bad, Bad GAIM..
People coding window managers should also wake up to the fact that not everyone wants the latest application someone started to pop in their face while they are returning to another window during the application startup. E.g. if I start OpenOffice, and I know for a fact that this piece of bloatware needs 5 minutes before the main window comes to screen, so I go back to typing e-mails until the application has loaded, I do not want frigging OpenOffice popping up and assuming that what I was in the middle of typing was meant for its Untitled 1 document. As such, Xlib and other OSes' GUI libs should completely remove the ability for an application to request that one of its windows be raised and brought to the foreground since, anyhow, window management is the responsibility of... window managers.
Re: and windows from other applications popping up (Score:3, Informative)
Got an extra 10 seconds? Take a peek in the preferences dialog, and turn that behavior off.
Re:More cruft! (Score:3, Insightful)
Re:More cruft! (Score:3, Interesting)
Really, if MS and KDE want to keep the feature of the "SetFocus" function, they should at least put a window manager level preference setting to turn it off. If set, no application will be able to take the focus away from another.
If I want to type in a window - I'll click on it or tab to it or whatever.
When web browsing I have a habit of opening multiple windows at once so that web sites can load in the background while I read other sites. At home I just run Mozilla, and tabbed browsing works great - about the only interruption is when I need to enter a master password (which shouldn't interrupt me, but I can live with that). At work I use IE, and I'm constantly pestered by windows raising to the top simply because they've finished loading...
Ditto with applications. First thing I do after I log in is typically launch a few apps off the desktop - before it gets buried. Then for the next minute, while login scripts run and the disk thrashes, I can't use my computer - not because of it being too slow, but because one window after another keeps grabbing focus simply because it has loaded...
Wish granted (Score:3, Informative)
I've not had this problem except with Windows -- mainly because the apps that suffer most from the "I did something that required CPU cycles, therefore I will tell you about it in a popup" disease seem to be Windows apps. So I'll tell you the Windows solution:
Go to microsoft.com. Find wherever they've hidden TweakUI this month. Download it. (If necessary, download the whole "power tools" thingy that it's a part of.) Install it. (Install the "open cmd.exe at this directory" power tool too, while you're at it.)
Go to the [Out-Of-]Control Panel, fire up TweakUI, and disallow applications to grab focus. There's even a "what should they do instead" selection that lets you make them blink.
Disadvantage: some programs fire off a splash screen, then bring it down and replace it with the real program. Window focus doesn't traverse like that now, so the real program won't start off with focus, even though the last thing you did was to double-click its startup icon. Minor annoyance only.
There are people working on this... I think. (Score:2, Interesting)
Does anybody know what happened to this project? I'm curious because Sony R&D usually comes up with brilliant solutions for common problems.
He has a point but he doesn't get it!!!! (Score:5, Informative)
In a) he talks about the use of inode numbers as an internal reference used by the system. Regrettably, inodes, and the equivalent internal reference numbers used by file systems under other operating systems, can move around. Generally opening by inode is only recommended after first opening by name.
Having more than one pathway to a file, as mentioned in d), is most definitely a feature in Windows. For engineering reasons a manufacturer may want to keep a set of files from related applications together, however to the user they may be presented somewhat differently. If anything this is an improvement in interface because of the separation between external and internal representations.
As for the problems of moving applications around, that is also an issue with meta information held in INI files or the registry. It is quite possible to make a program easier to move (i.e., by including code to update the file locations), but this isn't often done.
The file/folder metaphor may have problems for newbies, but the only real problem is that a file (particularly with Unix-style file systems) may have more than one name. This is a feature, not a bug.
Re:He has a point but he doesn't get it!!!! (Score:5, Insightful)
But Mac OS X, and OPENSTEP/NEXTSTEP before it, manage to keep the Mac metaphor while still hiding the implementation details, and it does it much better. Each application is actually a fairly complex directory structure, and all support files can be hidden within the application itself. This can include movies, help files, whatever--you name it, it's there. Now, to the user, you still have just the application, but the application can suddenly be dragged around at will without disturbing anything. For the application, you now can also guarantee a very rigid directory structure that the user can't even mess with. Next time you're on an OS X system, control-click a program and choose "Show Package Contents," or, if you prefer, cd right into the app. You'll see what I mean.
That's the right way to solve the problem, and that's why he's slamming Windows' metaphor and lauding ROX/OS X app wrappers/packages/bundles.
From the article... (Score:3, Insightful)
Wait...are they suggesting there weren't multi-tasking operating systems in the 1970s? What about Unix? Wasn't next to every system multi-user, multi-tasking in those days (timesharing)?
Did I read that right? (Score:5, Insightful)
So we should get rid of ways to close programs? I dread to think how much you'd have running if the computer is on for more than an hour or two.
Phil, just me
Re:Did I read that right? (Score:3, Interesting)
Let the system handle it. When it's idle, it can clean the processes up itself. Think of garbage collection under Java or flushing a disk cache...
Re:Did I read that right? (Score:3, Informative)
What he is implying is a SDI interface, where you can use the "close this window" metaphor and once all windows are closed, the application is gone. Problem is, not all types of applications work well as SDI. IDEs for one are better off as MDI applications.
Think beyond the today (Score:4, Insightful)
If you're fighting for reasons we need the crappy methods we have today, the very methods the author was talking about, then you haven't thought it through. It's obvious there's a better way than we have now. It will take some intelligent design and programming to make it happen. That's all. We're smart enough to figure out how to make this happen, and shouldn't screw it up making excuses for why we have to keep old methods around.
thought-provoking, but no alternatives (Score:4, Insightful)
Oddly enough, most of his complaints about the handling of files are being addressed in the next Microsoft file system, which is reportedly based on ODBC (effectively turning the entire file system into a massive database -- the BeOS guys tried and failed at doing something similar).
Perhaps the Windows right-click-drag he vilifies should be an "advanced feature" that has to be turned on manually, and maybe it isn't magically intuitive, but damn, I'd sure like to see him come up with an alternative that allows a user to quickly and easily take files and copy, move, or alias them with a single gesture.
Autosave is not always that great (Score:3, Insightful)
He is of course right that you don't have to save your work when using a pencil. But, on the other hand, the eraser on the other end of the pencil won't wipe out 100 pages of work in half a second by accident either. Personally, I am very happy to take responsibility for losing my data, and eternally grateful that emacs has a 'revert buffer' option!
More generally, why does "not exactly like a real desktop" equal bad? It's an analogy, right? Does he want files to start curling up at the edges after a couple of years too?
About menus (Score:5, Insightful)
Did you notice the different feel of menus in common GUIs? Without tricks, it would be hard to select submenus. You have to keep the mouse pointer in a narrow 'tunnel'.
MacOS Classic works around that problem by using a V-shaped buffer zone. If you move your mouse to the right within a certain angle, the submenu doesn't change.
MS used an inferior workaround. Submenus open with a delay, and you have to select them slowly or they won't open at all.
KDE submenus work like the Windows ones. Gnome behaves like the old MacOS. Sadly enough, menus in MacOS X now work like the ones in Windows.
The worst implementation is used by Swing. Submenus open with a delay, but close without one. You have to wait for a submenu to open, and when your pointer leaves the tunnel, it vanishes instantly!
Re:About menus (Score:5, Informative)
The reason for this is that during the many studies and research that Microsoft did into user interfaces it found that users did not like the fact that the sub-menu option appeared immediately. They actually found it less intimidating when it appeared after a second or two.
It also reduces screen clutter. As you move up a set of menus, their sub-menu doesn't appear until you come to rest on the option you want, rather than all the sub-menus popping open and then closing again as you move. Having all these sub-menus flashing about tended to unsettle users.
Having said all that, there is a registry setting that can increase or decrease this delay to any number of milliseconds you want. I've no idea where it is, but I do remember TweakUI allowing you to change it.
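For what it's worth, the setting being referred to is, as far as I know, the MenuShowDelay value under HKCU\Control Panel\Desktop; a rough sketch of changing it from Python (Windows only, and the "100" here is just an example delay):

```python
import winreg  # Windows only

# MenuShowDelay is a string value holding the submenu delay in milliseconds.
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Control Panel\Desktop",
                    0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "MenuShowDelay", 0, winreg.REG_SZ, "100")
# The new delay applies to new login sessions (or after restarting Explorer).
```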
I don't get a few things in that article... (Score:3, Insightful)
What does he mean by getting rid of Quit and Exit options? Should applications auto-close when I change to another application (a bad idea for obvious reasons), or should they just never close (whoa - look, my memory ran out)?
The alternatives to not exiting apps manually seem horrible to me, so I have to be missing something...?
Windows' shortcut menus
"In short, Windows native shortcut menus are so horrible to use that application developers would be best advised to implement their own shortcut menus which can be used with a single click, and avoid the native shortcut menus completely"
Ctrl+drag = Copy
Shift+drag = Move
Alt+drag = Shortcut
No modifier gives the most likely operation according to Windows, i.e. Shortcut if dragging to the Start menu, Copy if dragging between local/network drives, Move if dragging between folders on the same drive.
The operation Windows will perform is always shown with an icon next to the mouse pointer.
I'm not sure how I'd design a quick move/copy/link operation better myself.
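The default rule is simple enough to write down; here's my reading of it as a sketch (not Explorer's actual logic, just the decision table described above):

    def default_drag_operation(source_drive, target_drive, target_is_start_menu=False):
        # Pick the most likely operation when no modifier key is held during a drag.
        if target_is_start_menu:
            return "shortcut"      # dropping on the Start menu makes a link
        if source_drive != target_drive:
            return "copy"          # across drives or network shares: copy
        return "move"              # within the same drive: move

    # default_drag_operation("C:", "C:") -> "move"
    # default_drag_operation("C:", "D:") -> "copy"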
Acorn (Score:3, Interesting)
It is not only an exhaustive reference but also comes with a style guide.
I guess this style guide would be invaluable for non-RISC OS developers, especially after browsing through the Interface Hall of Shame...
So why don't these development suites (Visual Studio, etc.) come with such a book?
RISC OS save dialog is yet to be bettered (Score:3, Interesting)
The ROX Desktop [sourceforge.net] has gone some way toward implementing this on X, rather than blindly re-implementing the Windows Way like so many other projects.
Many good points... (Score:5, Interesting)
My favorite is the Save/Open dialog box, a relic from the single-tasking days. Why do people use a crippled version of the file browser? RISC OS did it correctly: drag an icon from the app to the browser, or even to another application!
Just get rid of the File menu altogether.
Finding files is still a chore; I do miss BFS's instant and always-up-to-date live queries. Can we please have that, Apple/Microsoft/Others?
Installation of applications, what is up with that? Why can't I just copy a file from the CD/Net and be done with it?
File IDs are a good idea; I could move apps/files around on BeOS and my shortcuts still worked!
Why can't we implement version control transparently in the filesystem? It's hardly due to lack of space. Each save creates a new version. Word processors could even use intelligent versioning, like making a new version for each paragraph or chapter. Does Photoshop save undo in its files?
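You can fake this in user space today; a minimal sketch (a hypothetical helper, not something any shipping word processor does), where every save stashes the previous contents under a timestamped name first:

    import shutil, time
    from pathlib import Path

    def versioned_save(path, data):
        # Write data (bytes) to path, keeping the previous contents as a version.
        p = Path(path)
        if p.exists():
            stamp = time.strftime("%Y%m%d-%H%M%S")
            shutil.copy2(p, p.with_name(p.name + "." + stamp))
        p.write_bytes(data)

A filesystem that did this natively could share unchanged blocks between versions instead of copying whole files.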
Right now I am using a very Explorer-like client for CVS, and it doesn't matter if I am browsing my hard drive or some remote server in Elbonia, it all looks the same. Wonderful!
I have been using XP for a while now, and it is making me quite frustrated. Why can't I use all that NTFS has to offer? (AFAIK, NTFS is pretty close to BFS in features, though I'm not sure about live queries.)
What else... yes... Use open standards when data leaves the application!!!! I can't <em> this enough. BeOS sort of did this with its Translator service. Your image viewer/editor didn't even have to know how to load/save JPEG or PNG files!
Well, enough of this rant.
J.
Manual Save is not a bad thing (Score:3, Insightful)
Fortunately, technology has improved since the 1970s. We have the power, in today's computers, to pick a sensible name for a document, and to save it to a person's desktop as soon as she begins typing, just like a piece of paper in real life. We also have the ability to save changes to that document every couple of minutes (or, perhaps, every paragraph) without any user intervention.
We have the technology. So why do we still make people save each of their documents, at least once, manually? Cruft.
I don't want to name the document until I decide to save it. Does anyone else here want this feature? I create many documents every day: to re-format, to print, to view differently, to cut/paste from the web for printing, for email, etc. I don't want my hard drive cluttered with this crap. That's why I don't save it. Yet this Matthew Thomas guy thinks this would be good. I think his first example of "cruft" is a bad one.
The cure is worse than the disease. (Score:5, Insightful)
My idea of hell is an editor that auto-saves the code I'm in the process of hacking up (to let me think about the problem) over top of code that already works.
My idea of hell is a platform where every document I've ever opened has no way to close it and no way to exit the application that's got it up in a window, because there's no 'Quit' or 'Exit' option.
My idea of hell is not being able to drag something in a GUI from one folder to another, because they have an obscure "parent of my parent" relationship, which makes me have to cut and paste the document instead of just dragging it, because I only have one file manager, which is running all the time, instead of a "file picker".
My idea of hell is symbolic links that get changed when I rename a file out from under them, because the OS thinks it knows what I want better than I do, so it's impossible to replace a file with another while keeping the old one, unless you copy it, rename the original, rename the copy, and then edit the original (instead of replacing it).
-- Terry
Points to disagree with (Score:3, Interesting)
Now, if the hardware were to change such that we weren't tethered to mice and keyboards, then I can see some interesting possibilities. But things being what they are, I'm quite content with my shell and VI.
Cruft alternatives have already died (Score:4, Interesting)
Example 1: Single-level store
Actually this one survived in the market, sort of. It was part of the old IBM Future Systems effort, and made it out the door as the System/38, with follow-ons in the AS/400 and iSeries. Single-level store says you get rid of that silly distinction between RAM and disk: everything is memory. What we quaintly call main memory is merely another level of caching for the disk, which is the real memory. Then you make the big thing a database instead of just a filesystem, and it can readily solve pretty much all of his numbered problems in one fell swoop. Was this perhaps something like Longhorn, only about 20 years ago?
The System/38 and its descendants have met with success, largely as the closest thing to 'just plug it in and run' that has made it to market. At another level it hasn't been that successful, largely because of its unconventional and rather opaque system model.
As an interesting aside, IBM's first entry into the workstation arena, the Romp microprocessor, also had single-level store capability (actually expressed in inverted page tables). Then, in order to make it more Unix-familiar, they mapped a conventional filesystem on top of that. I don't know if the Rios and PowerPC follow-ons retained that capability or went to more conventional paging architectures.
Double aside: Romp/Rios/PowerPC are yet another fallout of the Future Systems effort. Any big project has a backup plan, and one of the backup plans for FS was the 801 minicomputer, the original RISC.
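To make the single-level-store idea concrete, here's a toy in Python using an ordinary memory-mapped file (real single-level stores like the System/38 do this system-wide and transparently): the program only ever touches memory, and the bytes are durable without an explicit save.

    import mmap, os

    SIZE = 4096
    fd = os.open("store.bin", os.O_CREAT | os.O_RDWR)
    os.ftruncate(fd, SIZE)
    view = mmap.mmap(fd, SIZE)

    view[0:5] = b"hello"   # looks like a plain memory write...
    view.flush()           # ...but it's still there the next time you run this
    view.close()
    os.close(fd)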
Example 2: The OS/2 Workplace Shell
Just a bunch of UI glue, but what a bunch of it! It directly solved the broken-link problems, and had a more consistent, if different, set of mouse semantics. It also had a group feature that kind of got around his 'quit' problems.
But I disagree with overusing the inode the way he suggests. The inode is an internal structure and isn't meant to have a UI-level life. He really wants access to Data, not to Filename or Inode. Does he really want a database-type filesystem?
My own fantasy is a semi-conventional filesystem, but instead of a conventional directory structure use a semantic network. The role of directory navigation is taken on by relationships. It's an incomplete idea at the moment, though.
No Exit menu item in IE is progress??? (Score:3, Funny)
I want IE dead, and I want it now!
Where's my Exit menu item?
(I know, I know, it's in Mozilla. Time to switch.)
worse than cruft? -- inability to pinpoint it (Score:3, Insightful)
It's called "auto-save". The feature already exists in most word-processors.
We have the technology. So why do we still make people save each of their documents, at least once, manually? Cruft.
Well, maybe we want to allow people the choice of whether or not the work gets saved. So, to make it less crufty for the user, should we auto-save a different document every time and fill up the user's hard drive? Then the user doesn't have cruft anymore, but does have to look through dozens of similar documents in order to revert changes.
Interesting article. I like it, but the author doesn't appear to have a good concept of what should really count as cruft.
Cruft or common controls.... (Score:3, Insightful)
The issue is not so much that extra features have been added, but that the intent is not correctly communicated, or is inappropriate.
For example, the WPS applied to the OS/2 desktop is a wondrous thing, one that people desire in other systems. When this file-viewing device is applied to files in general [e.g. Drives], the result is a nightmare. Drives is *not* one of OS/2's better features.
Windows copied this feature into their shell, along with a network browser. Unlike the OS/2 ones, these *cannot be hidden*, at least not without corrupting the operating system [deleting Network Neighbourhood removes UNC support].
It's not that the "start menu" is totally bad either. It relies on an established practice of menus. So does the send-to [as a configurable context menu that allows drag-and-drop to otherwise hidden targets]. Folders = submenus. So you can have submenus in the send-to as well.
It's not that one can't make the Windows shell livable. Create a directory called grotto, and move these folders from Windows into it: sendto, start menu, desktop, shellnew, recent. You can create other folders there as well.
Create an icon with the command: explorer.exe /e,/root=c:\grotto
This gives you a super program manager from which you can fix your start menu, send-to, etc., as well as drag recently edited docs out to the desktop as shortcuts.
The other issue, of what happens when one closes dialogs (whether it counts as an OK or a Cancel), frustrates users to no end.
The issue is not so much cruft as the lack of consistency. Were cars like this, they would be hazardous.
Smacks of Negropontification. (Score:5, Interesting)
As off-base as I think the author is, it's good to think this way, even if it's not practical or better.
Save is an advantage, not an obstacle. The article's author limits the use of Save to things like Word-processing (immediately betraying his experience with more esoteric formats). As others will surely point out, Save can save you when something you're working on goes on a tangent. Besides, Word (and others) can AutoSave.
Launching/Quitting programs, while arguably cruft, has been accepted insofar as people do like the tool metaphor. You use a jigsaw to cut complicated shapes in wood. A screwdriver for attaching things. Photoshop for graphics, etc. Although I will admit Quit is getting a bit weird... esp. on modern systems like OS X, where there isn't that much of a reason to Quit things. I still do it out of habit.
Filenames are... well, filenames, and they don't seem to ever change. I don't really see a disadvantage with a 256-character filename. The dot-3 suffix is a bit of an anachronism, but it's a comfy one, one that gives the user a bit of metadata as well as the computer. Windows' behaviour of only recognizing files with certain suffixes in file dialogs by default has reinforced this.
I don't know what he means by the File Picker. I launch/pick stuff from the Finder all the time.
What I'd really like to see is a better representation of relationships between files. Something akin to The Brain, or another derivative of Xerox's Hyperbolic Tree structures. Radial menus, with branches running in various directions to the related objects/files, have been proven to be more effective than lists of data (there's something humans like about remembering angles as opposed to just the names). People themselves need a better representation, too. iChat has taken baby steps towards this, but really, ponder for a moment; why can you not see the heads of all your friends popping up on your desktop? Why is it that we have to 'browse' for other people, either through an AIM window (another list) or some such mechanism? If I get an email from Mike I want to see a mail icon next to Mike's head. I want to send files to Mike by dropping them onto his head. I want to see *everything* that is related to Mike at a click.
Also, to mention another pet peeve: themes. People love themes. People abuse themes. There is a need here that has never been addressed fully, IMHO - the problem is that people are dressing up the cosmetics of the interface while doing nothing to change the behaviour. It's sort of ridiculous to think that we can come up with a Swiss Army Knife interface that will be maximally productive for all conceivable computer tasks. I've actually taken to creating several different accounts on OS X, each with their own specialized settings. If I'm doing graphics work, I want everything to look like a light table; big shiny icon previews, sorted by meta data (time photo was taken, type of graphic, etc.) If I'm doing coding, I want list views or column views everywhere, and lots of reporting tools running on the desktop (bandwidth, terminal). There really should be interface schemas that can switch on-the-fly to whatever sort of task you are engaged in.
Does anyone else see the irony... (Score:5, Funny)
Save and Exit (Score:4, Insightful)
These commands are, IMO, actually examples of good interface design. The (unwritten) rule these commands are implementing is,
Let's say the "Save" command was automatic. Where are the files saved ? Under what name ? How often ? What if I made a mistake, and want to restore the old file -- is it possible ? How far back can I go ? Infinitely back ? What if I don't have infinite disk space ? Etc. etc. Instead of making a program that would try to solve these questions for all people and all applications at once, I can simply tell the user, "look, when you want to permanently persist your document, hit Save". This is a lot better in the long run than a program that would overwrite your files every so often because it feels like it.Similarly, the "Quit" command is useful. Without it, applications would just pile up on the screen and in RAM. When I am done with writing my letter, and want to play some Warcraft 3, I want to close MS Word and open WC3. It's very natural; just as when I am done working and want to go to the beach, I take off my suit and put on swimming trunks. If I could not quit any programs, they would pile up like layers upon layers of dirty clothes -- media players, web browsers, p2p programs, text editors, word processors, compilers, virtual machines, graphics editors... and that's just what I use before breakfast ! Yes, it would be nice if we had infinite CPU, RAM, disk space and screen space, but we don't, so the "Quit" command is the next best thing.
Note that, ironically, on single-tasking OSs such as PalmOS, the Quit command is actually not necessary. There, it makes a lot more sense to just save the state of the current program when you want to run something else, then restore the state when you reactivate the program. This only works because you can run one program at a time, and that's it, so there's no room for confusion.
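That model is easy to picture; a rough sketch (a hypothetical app skeleton, not the real PalmOS API), where switching away just means persisting state and switching back means restoring it:

    import json
    from pathlib import Path

    class OneShotApp:
        # Instead of quitting, the app is told to stash its state on switch-out.
        def __init__(self, name):
            self.name = name
            self.state = {}

        def suspend(self):        # another program is taking over the screen
            Path(self.name + ".state").write_text(json.dumps(self.state))

        def resume(self):         # the user has come back to this app
            p = Path(self.name + ".state")
            if p.exists():
                self.state = json.loads(p.read_text())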
Contrast the ease of use of the "Save" and "Quit" commands to other commands which have been implemented as automatic agents, just as the article suggests. MS Word's "auto-correct" (more like auto-confuse), Visual Studio's auto-complete (it knows best what function you want to call) and that damn paperclip all come to mind. My computer is not psychic (yet); it cannot sense what I want to do. If it tries to predict what I want to do ahead of time, it will fail and mess up.
Why Free Software Usability Tends To Suck (Score:3, Informative)
"Why Free Software Usability Tends To Suck" [phrasewise.com]
"Why Free Software Usability Tends To Suck Even More" [phrasewise.com].
They are an eye-opener for anyone who has wondered why Linux is still not ready for the desktop despite the presence of so many talented programmers in the Free Software community.
TMTOWTDI applies to UI just like everything else (Score:3, Insightful)
I notice quite often that people who try to analyze the shortcomings of current UIs make this kind of error because they are not aware of the diversity of needs that must be served.
For people who like to develop new, improved and consistent UI models, I would suggest that they also spend some time in describing the particular context and subset of computer users for which this model would apply.
Strongly disagree with the 'save' comment (Score:4, Insightful)
The problem with eliminating save and only having one version (the version that is currently open, which is reflected in the file on the disk) is that you eliminate a primitive sort of "versioning" where the saved document on disk represents an older version of the document. The "save" command becomes a sort of "push current version out" and "revert" becomes a sort of "roll back changes to last version."
Now I would happily eliminate the "save" menu from my programs, but only if we could replace it with a "mark version" and "rollback version" command which would allow me to maintain several versions of the same document. That is, I wouldn't mind if we created a word processor which saved the current version of the document to disk as it was typed, but only if I have the power to mark different document versions and either roll back to an older version or mark the contents as the current version in that file.
I strongly believe this is the reason why eliminating the "save" command was not accepted by MacOS usability testing when they were working on OpenDoc. OpenDoc eliminated file saving entirely, and users hated it--because users were using the "save" command and "revert" command as a sort of "commit changes/rollback changes" command--that is, as a primitive way of version control. And OpenDoc, by eliminating the save command, took that away from users.
Don't take away file version control; give me a more powerful version of it!
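Something along these lines is all I'm asking for; a bare-bones sketch (the names and layout are mine), where "mark version" snapshots the document into a side directory and "roll back" copies a snapshot back over it:

    import shutil
    from pathlib import Path

    def mark_version(doc):
        # Snapshot doc into a side directory and return the snapshot path.
        doc = Path(doc)
        snapdir = doc.with_name(doc.name + ".versions")
        snapdir.mkdir(exist_ok=True)
        snap = snapdir / ("v%04d%s" % (len(list(snapdir.iterdir())) + 1, doc.suffix))
        shutil.copy2(doc, snap)
        return snap

    def rollback(doc, snap):
        # Make the chosen snapshot the current contents of doc.
        shutil.copy2(snap, Path(doc))

The word processor could call mark_version() whenever I hit what used to be "Save", and keep writing the working copy to disk continuously in between.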
Re:pardon? (Score:4, Informative)
Re:MSVC++, VBasic, Dreamweaver.. (Score:2, Insightful)
I suspect the only reason it caught on was because the competition has even more retarded behaviour (yes, vi, I'm looking at you).
the vi shuffle (Score:3, Funny)
Editing Perl with vi! Talk about a cruft implosion. To make things worse, I was using a very bad version of vi that came standard with Debian potato. It doesn't indicate on the screen that you are in insert mode. Certain kinds of cursor motion break insert mode when you least expect it. It doesn't even have a line number indicator on the status bar.
Aside from all the obvious reasons I've been trying to figure out why I hate vi as much as I do. I've put up with worse and complained less. Yet somehow as much as I try to accept vi for what it is I fail miserably.
I was paying special attention to my vi misery as I permuted Perl's line noise. Here's what it comes down to. If you have N characters on a line, there are N+1 positions where you might wish to insert a new character. Yet in vi you can't actually reach all these positions without first entering insert mode: the position that appends to the end of the line is not available. This leads to the ludicrous effect that whenever you cancel insert mode, your cursor moves one character to the left (unless you are already at the beginning of the line). Oh, and you can place yourself at the end of a line when you are not in insert mode--if the line happens to be completely empty.
So there I am getting slap happy with vi (banging escape whenever I forget what mode I'm in, which is almost always) and every time I bang escape I have to watch carefully to see if my cursor skids left.
It's bad enough having two modes. But did the concept of current position also have to be different between the modes? Incredible. Just for this one reason vi constantly gives me the feeling I'm babysitting a naughty child.
On the other hand, emacs might be baroque, but I rarely spend much time thinking about my hands unless I'm trying to do something that isn't habitual. I rarely use a feature in emacs I didn't learn in the first two days. "Feature freeze" in emacs is modal in its own way: one moment you are working productively, the next moment (or hour, or day) you are whittling away at one billion options you don't want. But there's the difference: emacs is modal once an hour; vi changes modes faster than you can blink, or think.
There's a general rule (apparently unknown to the anticruft crusader who launched this topic) that cruft is eliminated only when something new comes along that's ten times better. Only ten times. And vi still exists. Amazing.
Re:the vi shuffle (Score:3, Insightful)
vi is configurable. Try ":set all" to see what you can play around with.
Use 'a' instead of 'i'. There is an append mode in vi.
It's bad enough having two modes.
For a pure console environment (no GUI, no mouse), vi's modes really aren't a bad thing. Command mode allows cut'n'paste, navigation, global search and replace, etc. You only enter new data in insert mode. Another thing about vi: its power isn't unleashed until you've read the 'ed' man page.
This is true. There has not been a programming environment invented that is ten times better than vi (or Emacs, to be fair). I have used Java IDEs, used Visual C++, etc., and all of them added only complexity to the development process. Complexity is a large project's worst enemy. And I'm talking about true complexity here, not the apparent simplicity of an enormous GUI application like Visual C++. For example, when Visual C++ breaks... how do you know what went wrong? What if the binary build system goes awry? What if the VC plugin doesn't work right? What is the best way to keep binary project files under good change management? And so forth. Why add things to worry about, when vi, make, sccs, etc., do everything in a predictable, manageable way?
Re:MSVC++, VBasic, Dreamweaver.. (Score:3, Insightful)
There is no excuse for this. I suspect it boils down to a hardcore RTFM attitude in the Emacs community, determined to not make things easy for anyone, especially people who just want to edit a file. The 'Customize Emacs' option is a sick joke. Even XEmacs makes an effort, for example referring to 'syntax highlighting' rather than 'font locking' and other efforts to put some sense into the configuration system. WTF did someone get the name 'font locking' from??? Talk about obtuse.
Most modern editors have the good grace to stick the commonly used settings into a nicely grouped point and click dialog box. Emacs has a graphical mode so it should do likewise.
Re:Its not The Start Menu, its The Start BUTTON (Score:3, Insightful)
Why the heck do I have to move the mouse to one of the most obscure positions, the lower left corner, just to start a program, which is easily the most-used desktop operation of all?
Because the corners/edges of the screen are the easiest pixels to click on, with the exception of the pixel currently under the mouse. See Fitts's Law [asktog.com].
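(For reference, the usual statement of Fitts's law is T = a + b * log2(1 + D/W): the time to acquire a target grows with the distance D to it and shrinks with its width W along the direction of motion. A screen edge gives a target effectively unbounded width in that direction, because the pointer stops there no matter how hard you throw it, which is what makes edge and corner pixels so cheap to hit.)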
It's easier to click on the desktop if the mouse is already over the desktop. However, a lot of people work with apps using the full screen, so it's easier to use a start button.
Of course, this requires that your Start button actually extend to the corner of the screen. MS fumbled the ball on this originally, IIRC, by giving it a border (as they did with taskbar buttons), but they later fixed this (the border is still there visually, but now you can click on it). However, if you make the taskbar double height (as I normally do), then the Windows Start button inexplicably gets aligned with the top of the taskbar rather than the bottom, so slamming the mouse to the bottom left and clicking has no effect.
I expect this positioning was considered to be more aesthetically pleasing. Sometimes Microsoft are very good at, as Sir Humphrey [imdb.com] once said, "Snatching defeat from the jaws of victory".
Tim
Re:Not dead, just ANCIENT (Score:3, Informative)
If the site were up to date, they'd be going after iCal 1.0, which just shipped with Jaguar. There are tons of critiques on Macintouch [macintouch.com] of the immaturity of that interface. Apple's not perfect, but they take interface seriously, and the users' standards reflect that.