The Future & History of the User Interface 249
An anonymous reader writes "The Mac Observer is taking a look at UI development with lots of video links to some of the latest developments in user interfaces. It also has links to some of the most interesting historical footage of UI developments; here's one of the 1968 NLS demo. From the article: 'Sadly, a great many people in the computer field have a pathetic sense (or rather ignorance) of history. They are pompous and narcissistic enough to ignore the great contributions of past geniuses... It might be time to add a mandatory "History of Computers" class to the computer science curriculum so as to give new practitioners this much needed sense of history.'"
I'm outraged! (Score:5, Funny)
Crow T. Trollbot
Re:I'm outraged! (Score:2)
Re:I'm outraged! (Score:5, Interesting)
On the shitcan of history, like the unreadable [google.com] choice of default font on Slashdot, the Star Wars Galaxies NGE [joystiq.com], the changes to Yahoo's [thesanitycheck.com] stock message boards [gigaom.com], and two recent changes to Google Maps, one of which has made printing [google.com] impossible (users are now reduced to taking goddamn screen captures and printing those!), and another that auto-zooms [google.com] and recenters on double-click, instead of merely re-centering the map, making navigation a time-consuming process of setting a desired zoom level, clicking to recenter, slowly loading a bunch of tiles you don't need, then unzooming back out, and loading yet another set of tiles.
In each of these cases, user feedback was nearly universally negative, and yet the "improvements" remain in place.
If this is UI innovation for Web 2.0, give me Web 1.0 back.
Doom (Score:2)
Re:I'm outraged! (Score:3, Interesting)
Re:I'm outraged! (Score:2)
boy was I pissed when they changed his voice.
Multi-touch (Score:5, Interesting)
The thing that makes it different is how casual the interaction is compared to file & image programs today. You see the guy just touch the screen and rotate, zoom, and move images around and organize them, instead of opening up dialog boxes, secondary windows, or menus to access the functionality. It's very basic stuff, but you see how powerful it is, kind of like how Google Maps is compared to the old static kind of online maps.
It's like today's image programs are concerned with precisely doing something like zooming to exact levels (100%/50%/33%/etc.), but these programs let you zoom to "whatever feels right", without bothering you with the details.
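For what it's worth, that "whatever zoom feels right" behavior is usually implemented by deriving a scale factor from the change in distance between two finger positions. A minimal illustrative sketch (not any particular product's code):

```python
import math

def distance(p1, p2):
    """Euclidean distance between two touch points (x, y)."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def pinch_scale(old_touches, new_touches):
    """Zoom factor implied by two fingers moving from old_touches
    to new_touches (each a pair of (x, y) points)."""
    old_d = distance(*old_touches)
    new_d = distance(*new_touches)
    if old_d == 0:
        return 1.0  # fingers started on the same spot; don't zoom
    return new_d / old_d

# Fingers move from 100 px apart to 200 px apart: a 2x zoom in.
scale = pinch_scale(((0, 0), (100, 0)), ((0, 0), (200, 0)))
```

The point is exactly the one made above: the user never sees a percentage, just a continuous ratio.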
Hey speaking of which, I wish cameraphones had a much more fluid interface for picture organization, so I can add keywords, associate it with people on my contacts, etc... but what do they care, as long as they make money off the ringtones
Re:Multi-touch (Score:2, Interesting)
There you go
Don't forget to view the other TED talks!
Two mice. (Score:3, Insightful)
Re:Two mice. (Score:2, Interesting)
It probably hasn't been implemented yet because it would be quite confusing to keep up with which pointer does what. There isn't that problem with these displ
Re:Two mice. (Score:2)
Au contraire [icculus.org], my friend.
Works on:
Re:Two mice. (Score:4, Funny)
Because all innovation in the computer industry comes from the field of pornography.
Re:Multi-touch (Score:3, Interesting)
A lot of the limitations on the UI stem from the hardware we use to talk to the computer. The multitouch stuff is awesome, and if/when we see some hardware support, you'll start to see some very, very interesting new stuff.
As much as I hate 'media' keyboards, if they were just standardized I'd be very happy. I'd love to have several software-configurable scrollwheels and sliders. Universal out-of-the-box support for secondary/tertiary/n-ary small LCD displays would also be nice.
Re: (Score:2)
Re:Multi-touch (Score:4, Insightful)
My biggest gripe with today's computer interfaces is that attempting to funnel everything you might want to do through a mouse plus (if you're lucky) a keyboard forces you (as an interface designer) to make a difficult decision: either waste huge amounts of screen real-estate on functions you need to include, or hide them away.
What we need are interface devices that aren't so bandwidth-limited. When we want to make the computer perform an action, all we are generally able to do is locate it on the screen and say "Do." On systems with multi-button mice the situation is somewhat better. Most Firefox users are familiar with the "left click to follow a link, middle click to open it in another tab, and right click to get an [ick] context menu" idiom. Scroll wheels are another bandwidth-increasing addition to the system. Rather than clicking an arrow to scroll, we are now able to spin a wheel while pointing in the general area of the thing we want to scroll.
Some systems put the available physical controls to even better use. The Sam text editor, its successor Acme, and basically all of the Plan 9 operating system's utilities use the mouse buttons to perform distinct and consistent actions. In Plan 9, button 1 selects, and the other two buttons, when used in conjunction with it, perform other useful actions. The exciting feature of this setup is that it moves the selection of possible actions out of the computer, where navigation is inefficient, and puts it literally under your fingertips. Rather than selecting an object on the screen and then selecting an action, or vice versa, one can simply point at it and say "do this." The ability to convey specific actions in one fell swoop is what makes command line junkies (myself included) swing the way they do. What could be more exciting than marrying that power with a GUI's flexible expression?
An even more extreme example is the five button keyboard (for the left hand) + three button mouse featured in the Doug Engelbart video linked from the summary. I'm not sure how his system used them, but this setup allows for eight functions using the most obvious mapping, many more than modern interfaces. Not only that, but with chording, it's possible to increase the number of possible actions to a dizzying 255, which is probably way too many to actually make use of*: Engelbart's system uses typed commands as well as clicks rather than attempting to assign a meaning to each combination of button presses. One good way to cope with the number of possibilities is to assign a general function to each button, and to combine those functions to perform actions. For example, if the left mouse button selects text, the middle button pastes it, and the right mouse button cuts it, one can copy by selecting with LMB, then pressing RMB, MMB in succession. Other button actions might be "system" to trigger global system functions, "window" to do window management and "inspect" to look more closely at an item. What might happen when you press these together? Exposé, anyone? But without reaching for the keyboard.
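To make the chording arithmetic concrete, here's a hypothetical sketch that encodes each button as a bit and looks the held combination up in a table. All the bindings are invented for illustration; Engelbart's actual system worked differently, as noted above:

```python
# Each button is one bit; a chord is the OR of the bits held down.
LMB, MMB, RMB = 1, 2, 4                  # three mouse buttons
K1, K2, K3, K4, K5 = 8, 16, 32, 64, 128  # five keyset buttons

# 8 buttons give 2**8 - 1 = 255 non-empty chords.
assert (LMB | MMB | RMB | K1 | K2 | K3 | K4 | K5) == 255

# Only a handful of chords get meanings; the rest stay free.
CHORD_ACTIONS = {
    LMB: "select",
    MMB: "paste",
    RMB: "cut",
    LMB | RMB | MMB: "copy",  # the buttons' functions combined
}

def dispatch(pressed_buttons):
    """Map a set of held buttons to an action, if one is bound."""
    chord = 0
    for b in pressed_buttons:
        chord |= b
    return CHORD_ACTIONS.get(chord, "unbound")
```

The bitmask encoding is what makes the 255 figure fall out: every non-empty subset of eight buttons is a distinct integer from 1 to 255.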
Anyone interested in user interfaces should take a look at the Sketchpad computer program for starters, which was simply amazing, and at "Alan Kay: Graphical User Interfaces" on Google Video. GUIs have a rich history that is not evident in modern interfaces.
* This is a good thing! Even when the system allocates a set of global button presses and applications implement their sets of commands, there will still be plenty left over for the rest of us to allocate as we see fit. This is the one point where I disagree with Bunions: I love that multimedia keyboards aren't standardized, because it means that programs don't depend on the buttons they provide, which in turn gives me 32 keys (on the keyboard I have) that I can bind to any action I want, without losing any functionality.
Re:Multi-touch (Score:2, Interesting)
Re:Multi-touch (Score:2)
Re:Multi-touch (Score:4, Interesting)
Computers have a few things they do well: accept textual input, display data on big screens, and multi-task. From those it follows that anything graphical or textual is a good fit, and that while "one device to do everything" is a bad idea, one device that does many things that it happens to be good at is a great idea. For example, it's extremely common that a person wants to access the web while working on a project. It's better to have one device that can help you gracefully juggle everything you're trying to do than to have a typewriter, a web browser, a CD player, your clock, a "download machine" and a telegraph key (for IMing), each with its own chassis, competing for desk space. Up to a point, combining functions makes sense.
Computers also do one thing very badly: they don't accept input from anything other than a keyboard very well. Specialized fields do have devices that work well: graphics tablets for graphics artists and MIDI keyboards for composers, for example. The driving force behind multi-touch displays is that the "interface for the rest of us", the mouse, is a difficult and inefficient thing to use. We all have ten built-in pointing devices which we can use with aplomb -- some people even manage to use their toes as a few more -- and multi-touch displays are a way to make use of those. Much as I dislike the desktop metaphor, I must invoke it here: using a mouse to interact with a computer is akin to using a single stick to push your papers around on your desk. It's just not the best way to go about it.
I very much doubt that it would be stressful to use a multi-touch display for a long time. In fact, I suspect that it would be much less stressful than making the constrained motions required by a mouse. Joints are *made* to move. It might still be a little exhausting at first.
I agree that UIs are best when they are simple... but simple is in the eye of the beholder. To me, a UI that allows me to use my skills in a direct way is a simple one. Using my fingers to move on-screen objects = simple to me. A complex UI is one that requires me to perform actions in ways that take more effort than the direct way. The direct way is the way I would do it if I were manipulating physical objects. For example, menus (especially nested ones) and window managers that don't manage (i.e. I have to drag and position windows myself, when the window *manager* should do it) are complex to me. Above all, attempting to convey a huge variety of instructions by pushing a box around and clicking buttons on it is complex, because it adds another layer I have to work through. If I ever meet my GUI fairy godmother, I'm asking her for a laptop with a touch screen. Maybe a multi-touch screen if she looks generous.
Touchscreens and desktops don't mix. (Score:3, Insightful)
But for any other general-purpose computer, the touchscreen lost out long ago. There were a number of touchscreen monitors for sale in the 90s, all the way to today, but they never made inroads over the mouse. The problem is two-fold:
1. people don't like raising their arms to horizontal and manipulating a screen while seated. It is an unnatural position.
Creaky and old fashioned? How about useful. (Score:5, Interesting)
FTA: The current state-of-the-art User Interface (UI) we've been enjoying has remained largely stagnant since the 1980s. The greatest innovation that has been recently released is based on video card layering/buffering techniques like Apple's Expose. But, there is a large change coming. Rev 2 of the UI will be based on multiple gestures and more directly involve human interaction. Apple is clearly working in the area as some of the company's patent filings demonstrate. Nevertheless, these videos might make Mac (and Windows) users experience a huge case of UI envy, as a lot of UI development (in XGL in particular) makes the current Mac UI seem creaky and old fashioned.
The guy seems to think that the stagnation of the UI is an entirely bad thing. It seems to me that when something works well, people like to stick to it. I really don't think the majority of people need multiple desktops floating around let alone a brain interface. The only widely practical new UI technology I saw was multi-touch interactive displays (or touch screens in general, though they have been around for a long time and are still not very popular). As far as his comment that the new-fangled UIs make the Mac seem creaky and old, well, that's his opinion I guess. Some would just say the Mac UI is useful as it is. Even some of the new features in Leopard seem unnecessary to me. It's never bad to innovate, just don't automatically assume every new cool thing is practical or useful for most people.
Re:Creaky and old fashioned? How about useful. (Score:3, Interesting)
What I am still waiting for is multi-pointer capable X11 (two mice) and pressure-sensing mouse buttons.
Re:Creaky and old fashioned? How about useful. (Score:2, Informative)
http://wearables.unisa.edu.au/mpx/ [unisa.edu.au]
Re:Creaky and old fashioned? How about useful. (Score:3, Funny)
I'm sorry. "Klumsy" is a trademarked adjective of a different desktop environment. You have been warned.
Assuming that I won the lottery tomorrow... (Score:4, Interesting)
I've always been fascinated by HCI but have yet to be able to pursue this in a work-related setting (where I tend to write backend code, basically as far away from users as you could possibly get).
Re:Assuming that I won the lottery tomorrow... (Score:2, Insightful)
Re: (Score:2)
Re:Assuming that I won the lottery tomorrow... (Score:3, Interesting)
We have some human-machine interaction specialists where I work. I know their educational backgrounds are varied, but I'm not sure what the basic requirements are.
We make military aircraft, so they are concerned not only with the computer interaction in the cockpit, but also with the positions, labels, and feel of switches, knobs, controls, instruments, ejection buttons, etc. For some reason quick and reliable person-machine interaction is considered important when people are shooting at you. (Haven't we
Re:Assuming that I won the lottery tomorrow... (Score:2)
Still, it seems unlikely to find a job in the field without some sort of accreditation from somewhere; that is, unless you are in the right place at the right time.
Re:Assuming that I won the lottery tomorrow... (Score:2, Insightful)
Todd
Re:Assuming that I won the lottery tomorrow... (Score:2)
Personally, I'd be opening a design atelier looking at all aspects of design.
If you haven't already read it, look up a copy of Donald Norman's [wikipedia.org] "The Design of Everyday Things" (formerly known as "The Psychology of Everyday Things"); it's a real eye opener about the visual and contextual clues we use to determine how an object 'should' work, and how often these clues lead us astray.
Overlapping windows (Score:5, Insightful)
Some obvious trivial faults:
For reference, just look at your screen now, and watch how much of it is covered by empty "gray areas". When you open a new window, does it hide gray areas, or real information?
This is even more absurd when there are just a couple of windows hiding each other while the entire screen is free space! The computer expects YOU to work for IT and move these windows out of each other's way.
This phenomenon is also felt in list boxes, where you are expected to adjust the column widths manually so they aren't too short or too long, even when an optimal adjustment is readily available. You again have to work for the computer, and press Ctrl+Plus to set it up. Most people don't even know about Ctrl+Plus in column list boxes.
Some programs make it even worse, and don't let you resize their windows when the entire screen is free, and you have to scroll through their data in a little window.
What's so fascinating about this example - is how common it is across platforms, programs, operating systems.
The feature is called "shortcut keys" and yet everyone is implementing it as "shortcut symbols".
This is terrible - when you switch between languages, all shortcut keys break!
The fact that fixing this would require modifying all existing GUI programs is evidence of the poor architecture of GUI software.
There are many more trivial issues to fix. Until they fix these, I find it very funny to talk about future directions for the User Interface. We haven't even gotten the basics right yet!
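On the column-width fault above: the "optimal adjustment readily available" really is trivial to compute, since the widest cell in each column determines that column's width. A rough character-based sketch, assuming a fixed-width font (the padding value is an invented detail):

```python
def autofit_widths(rows, padding=2):
    """Return one width per column: the widest cell plus padding,
    which is roughly what Ctrl+Plus-style auto-sizing does."""
    if not rows:
        return []
    return [max(len(str(row[col])) for row in rows) + padding
            for col in range(len(rows[0]))]

rows = [("Name", "Size"), ("readme.txt", "4 KB"), ("kernel.img", "12 MB")]
widths = autofit_widths(rows)
```

A real toolkit would measure pixel extents of the rendered text rather than character counts, but the shape of the computation is the same — which is why it's so galling that the user has to ask for it.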
Re:Overlapping windows (Score:2)
Re:Overlapping windows (Score:3, Interesting)
1) Your points on overlapping windows are interesting. But KDE already addresses that. When I open a new window in KDE, it opens over the area of greatest unused space. Overlapping continues, but as unobtrusively as possible. Contrast that with Windows' habit of opening windows about 1/4" below and to the right of the previously opened window, which almost assuredly wastes as much screen real estate as possible.
2) C
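The KDE-style placement described in point 1 can be approximated by scoring candidate positions by how much they overlap existing windows and picking the least-overlapping one. A naive grid-scan sketch, with the grid step and rectangles invented for illustration (real window managers use smarter candidate sets):

```python
def overlap_area(a, b):
    """Overlap in pixels between two rects given as (x, y, w, h)."""
    dx = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    dy = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
    return max(dx, 0) * max(dy, 0)

def place_window(screen_w, screen_h, w, h, existing, step=50):
    """Try positions on a coarse grid and return the (x, y) whose
    placement overlaps the existing windows the least."""
    best, best_cost = (0, 0), float("inf")
    for x in range(0, screen_w - w + 1, step):
        for y in range(0, screen_h - h + 1, step):
            cost = sum(overlap_area((x, y, w, h), e) for e in existing)
            if cost < best_cost:
                best, best_cost = (x, y), cost
    return best

# One window sits at the top-left; a new 300x200 window lands clear of it.
pos = place_window(800, 600, 300, 200, [(0, 0, 400, 300)])
```

Minimizing overlap is exactly "open over the area of greatest unused space" when a zero-overlap spot exists, and degrades gracefully to "cover as little as possible" when one doesn't.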
Re:Overlapping windows (Score:2)
And that flip takes zero time? Last time I read a book, I could actually see the page being turned.
The problem with this sort of animation is that they're generally not fast enough to be unobtrusive. I don't need a 2-second animation of a page turn, I need a half-second one.
Re: (Score:2)
If I'm reading a book, I'm more interested in the content I'm reading rather than the animations of page turning.
It's like those TV/interstitial ads. The good ones might be cool the first time you see them, but after a few times, don't you want to be able to skip them and get on with whatever you were doing?
Well, I guess I must be the strange one around.
Re: (Score:2)
as visual feedback to confirm you actually pressed the 'next page' button. The physical world almost always gives us feedback when we perform an action - in the case of a page turning, a visual, audible and tactile one. Also an olfactory one if the book is musty. Your brain is used to that, so computers should try to reinforce that.
>If I'm reading a book, I'm more interested in the content
> I'm reading rather than the ani
Re: (Score:2)
i don't think it necessarily has to do with the speed or existence of a "page turn" animation. the only efforts i've seen to simulate a page turn in digital form seem like useless eye candy. the content changing should be enough of a visual clue. the main benefit of physical pages is that you have an immediate reference point for whe
Re: (Score:2)
I agree 100%. Jumpiness makes sense. Computers, and the programs and data that inhabit them, are not physical objects and do not behave like physical objects. UI gurus have been declaring for years that normal human beings cannot accept this and will never understand anything that was not familiar to humans on the savannahs of Africa 100,000 years ago.
Pointing out that a UI behavior has no counterpart in the "real" world says nothing about whether it will be difficult fo
Re: (Score:2)
I just tested this with IE7 and Windows XP. I opened a new IE-window, and it opened on top of the old window, so that just the titlebar of the old window was visible. I moved the new window to another location and closed it. Then I opened a new IE-window. Did it open in the location where I closed the previous window? No. It opened (again) on top of the old window.
Maybe the initial IE-window opens where the previous
Re:Overlapping windows (Score:3, Insightful)
Why make the assumption that you always want to see all of the information in all of the windows you have open? Just because a window is visible doesn't mean it is relevant to the current (and ever-changing) task.
Right now I'm typing a comment on Slashdot, but my mail client is open behind this window because I was reading email just a few minu
Re:Overlapping windows (Score:5, Insightful)
I disagree.
Overlapping windows are used to make more information available to the user than can be displayed on the available screen real estate. The RL metaphor is a collection of papers on a desk. You can't see every paper all at once, but you bring to the top of the pile those which you need. You do this for your own benefit, based on the needs of the moment, not for that of the desk -- or the computer. The whole point is that the space isn't tiled. I don't like working that way personally, and I suspect the reason we've moved away from that model is because most people don't. Remember the early Windows versions?
You asked how much of the screen was empty space and therefore wasted? Very little of it, most likely. Very little of mine is as I type. Space with no content in it is not necessarily wasted. In fact, it most likely isn't. Space is crucial to how our brains organize what we see. If every square inch of space on the screen were being used, we'd see it as a jumbled mess. The best and most eye-pleasing data presentation designs very carefully balance empty space against that occupied by content. Take, for example, your original post against my reply. See how I create spaces between my paragraphs with properly structured P tags? See how much more readable that is?
I agree that some programs are badly designed and make poor use of the model. That doesn't mean the model itself is broken.
Yes, it would be nice for those very particular about their screen arrangements if they could save state between sessions and recover it immediately when they start back up again. This is an implementation issue -- remembering, of course, that most people prefer not to tile.
Re:Overlapping windows (Score:2)
Re:Overlapping windows (Score:3, Interesting)
I think the trouble with tiling is that it simply doesn't work that well as a generic concept; there are simply too many applications around that are just too small to make sense in a tiled workspace, i.e. a small calculator [toastytech.com] should overlap, not tile, since otherwise it can't be seen in full and wastes a lot of screen space. However in Blender or Emac
Re:Overlapping windows (Score:2)
Thought about this with fast forwarding on a DVR (which always seems to be the wrong speed), but would work just as well on a keyboard or mouse.
As for smoothscrolling in current tech, the best I've found is just middleclicking in a browser/some documents and pulling the cursor away from that icon it creates. Further away, the faster it scrolls. It'
Re:Overlapping windows (Score:2)
Re:Overlapping windows (Score:2)
On 2 -- there's no reason for shortcuts to be the same across languages, and I don't believe it's generally the case that they are. At least not the way one normally does this kind of thing in Motif; I don't work in Windows much, so I don't know. These are not programmed in, but exist in a separate resource file. But I thought that was his exact complaint: a Ctrl+O binding makes little sense for "open" in a language where that word doesn't begin with O. So if you switch from English to Portuguese, your shortcut is now (or should be) different.
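A minimal sketch of the resource-file idea: keep accelerators in per-locale tables and look them up at runtime, so no program hardcodes a key. The Portuguese bindings below are invented for illustration, not actual Motif resources:

```python
# Hypothetical per-locale accelerator tables, the way a resource
# file might define them. Real toolkits use their own file formats;
# this only shows the lookup logic.
SHORTCUTS = {
    "en": {"open": "Ctrl+O", "save": "Ctrl+S"},
    "pt": {"open": "Ctrl+A",   # "abrir"
           "save": "Ctrl+G"},  # "gravar"
}

def shortcut(action, locale, fallback="en"):
    """Accelerator for an action in the user's locale, falling back
    to a default locale when no translated table exists."""
    table = SHORTCUTS.get(locale, SHORTCUTS[fallback])
    return table.get(action, SHORTCUTS[fallback].get(action))
```

Because the bindings live in data rather than code, switching language means swapping a table, not patching every program -- which is the architectural point being argued above.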
Re:Overlapping windows (Score:2)
In addition, for the products I've worked on, translation has
Re: (Score:3, Interesting)
You make some interesting suggestions, but there are practical issues.
Take your example of a stovetop. Your observation is a fairly obvious one, but the obstacle is that there isn't room for the knobs to be placed in a square, due to other overriding design considerations. That's why almost no one does that. There is instead a conventional arrangement that is the same from stove to stove, so you only have to learn it once.
That's one reason the current arrangement of shortcuts would be difficult to chang
Smoothwheel (Score:3, Informative)
Re:Overlapping windows (Score:2)
So it's you against The World on the subject of how a UI ought to work. Hmm, I wonder who is more likely to be right.
1. The computer has no way of knowing, other than via user input, which information is important, and needs to be made viewable. I actually prefer to manually set the size and placement of my windows exactly the
Re:Overlapping windows (Score:2)
My keyboard layout includes a Compose key for typing weird characters on the keyboard, and in my main program it's able to remote-control my mp3 player app. Play, back, next, pause, stop, in that order.
And my taskbar features a single-click run application button, current outdoor temperature, date and time.
btw, I don't find overlapping windows intuitive at all except for file manag
Re:Overlapping windows (Score:2)
Especially annoying are those apps that spew multiple toolbars and palettes in several child windows which constantly get in the way (as in most paint programs).
Re:Overlapping windows (Score:2)
But 4? I prefer jumpiness. When I want something to happen, I want the computer to do it NOW, not do some silly animation before it does it.
If I want a smooth scroll, I'd be holding down the scrollbar (great invention that one) and dragging it. But if I want a "page down", I want it to go a page down NOW! If I want to s
The Future is easy to predict here. (Score:5, Insightful)
In the short term, we'll see Longhorn slowly and sloppily copy whatever Apple's doing; we'll see KDE and Gnome copying the bad parts of what Gnome and KDE are doing, respectively; and we'll see all real computer users using emacs/vi/pine/xterm/screen like they always did.
Re:The Future is easy to predict here. (Score:2)
Come to think of it, I can do that today! [slashdot.org]
The Future Is Now!
Re:The Future is easy to predict here. (Score:2)
I am not so sure about that; for some things, of course, voice and gestures are great, but the computer isn't just a dog or a coworker, it's also a tool, and I neither talk nor gesticulate to my screwdriver; instead I pick it up and get the job done with it myself, since that's simply a lot faster than trying to explain what and how so
but (Score:2)
Re:but (Score:2)
How do I tell if the screw needs tightening at all? How does the computer figure out if I want it to go in or out? How does the computer tell me how tight it is? How do I pick a screw? If I just pick up that screwdriver and start working, all that information is easily available. Of course an automatic screwdriver might work better than a manual one and even with mouse and keyboard
Re:The Future is easy to predict here. (Score:2)
I'd be perfectly happy to replace my current mousewheel functionality with zoom.
Virtual screens are available through other motions, anyway.
Zoom the desktop when not pointing at a program, and have a key to hold down to make it all zoom while in a program.
One of the coolest things... (Score:3, Informative)
Re:One of the coolest things... (Score:2)
Re:One of the coolest things... (Score:3, Informative)
Re:One of the coolest things... (Score:2)
There were some of them [cam.ac.uk] in EDSAC [wikipedia.org] back in 1949.
From reminiscences [cam.ac.uk]:
Re:One of the coolest things... (Score:2)
Nobody's paying attention (Score:5, Insightful)
At least not to common consumer devices. I cannot even count the number of remote controls, microwaves, cellphones, dishwashers, ATMs, and other devices which seem to be designed completely without thought for the human who will need to use them.
Remote controls - ever heard of making the buttons distinguishable by FEEL, so I don't have to look down to tell whether I'm going to change the volume or accidentally change the channel or stop recording?
Microwaves - make the buttons we use all the time bigger and obvious. I can't use my microwave oven in near dark because the stupid thing's start button is indistinguishable from the power level button. That's just dumb. I don't need two different buttons that say "Fresh vegetable" and "Frozen vegetable" which I never use; and I have to babysit the popcorn anyway, so I don't need a "popcorn" button hardcoded for some random time limit. A microwave should have a keypad for entering time and bigger buttons labeled +1minute, +10seconds, ON, and OFF. That's all 99% of people use anyway.
The people who design interfaces should be made to use them for long enough so that they work out at least the most obvious design flaws.
I keep putting off buying a new cellphone because I know I will have to learn a new interface even to set the freaking alarm clock and it will probably take six menu choices to do it.
Re:Nobody's paying attention (Score:3, Interesting)
1. The gas pump where, as soon as you pick up the nozzle, the prices disappear and you're asked to "Select Product."
2. The ATM where the button you had just used to press "Withdrawal" would, on the next screen, withdraw $200. Shouldn't that map to the smallest amount or a "Go Back" button?
Re:Nobody's paying attention (Score:2)
I haven't seen a remote control where the buttons weren't easily navigable by feel in years.
On my microwave the 1-minute and start buttons are distinguishable from each other. Not in the dark, but who microwaves in the dark?
"The people who design interfaces should be made to use them for long enough so that they work out at least the most obvious design flaws."
the more you use it, the more intuitive it starts to seem to be.
They could use
Re:Nobody's paying attention (Score:2)
One doesn't even need to look at all those high-tech products to find bad user interface design; even something as simple as a door can be done extremely badly. Ever tried to push one that you needed to pull, thanks to the fact that both sides actually look the sam
Re:Nobody's paying attention (Score:3, Insightful)
computers can be very annoying. Get a nice, cheap Korean microwave
Re:Nobody's paying attention (Score:2)
Re:Nobody's paying attention (Score:2)
It's so simple that a
Re: (Score:2)
How'd a completely blind person cope with that one? Say they want 2 minutes, if they spin it faster than normal they might get 3 minutes instead.
I think same angle = same time would be better for blind people.
Re:Nobody's paying attention (Score:4, Insightful)
Better question: WHY THE HELL ARE MICROWAVES DIGITAL? What part of "close the door and turn the dial" was so hard for people to understand, and how did typing in digits help? Microwaves aren't phones.
Was it the extra precision? People need to be sure they are microwaving their sandwich for exactly 2 minutes and 45 seconds, and ABSOLUTELY NOT 2 minutes and 46 seconds?
Are there a lot of people out there with only one finger, who find it faster and easier to type in 1-0-0-0-Start rather than turning the dial a quarter turn to "10m"?
What in the world makes people believe replacing analog with digital is the answer to absolutely everything?
Intuitiveness (Score:4, Interesting)
self study as elective was denied (Score:4, Interesting)
During my studies I proposed multiple times to do an independent study of the history of the computer field to count for 3 credits of my general electives. I was denied every time, even with support from the head of the Engineering department. The liberal arts department continually stated that the purpose of the electives is to gain breadth of knowledge. I finally took a (very interesting) class on Greek mythology.
I agree with the premise of increasing knowledge, but not the implementation. The college should encourage independent research when a student can blend his primary interests to meet a "credit based requirement".
What are your thoughts?
Understanding history of your profession should be as important as understanding your culture and your history. Your profession will become a part of who you are as well! Without context, you're clueless.
Re:self study as elective was denied (Score:2)
They were right.
But there should be a history of technical advances in the computer curriculum.
Not a study of dates, but a study of what was done and why, as well as a chance for students to use the older UIs.
MUD and MMRPG players know ... (Score:4, Interesting)
Sorry, I've got work to do... (Score:2, Insightful)
history of computing part 1 (Score:3, Funny)
Oh please no.
I had a mandatory Computers class in 6th grade (and again in 7th and 8th grade, with the exact same lesson plan). Half of this class was rudimentary BASIC programming on a room full of TRS-80s, the ones with the integrated green monochrome displays--and this was circa 1990.
The other half of the class was a purported history of computing, the key facts of which I can still recite today (learning the same thing thrice causes it to stick). These facts are:
- Charles Babbage made a mechanical computer.
- Then there were the UNIVAC and the ENIAC.
- The term "bug" is due to an actual bug Ada Lovelace found inside a computer.
- There are four kinds of computer: supercomputer, mainframe, minicomputer, and microcomputer.
- RAM stands for "random access memory"; ROM stands for "read only memory".
- Cray supercomputers are cool-looking.
- 10 PRINT "FART!!! "
- 20 GOTO 10
- RUN
but you get it wrong. (Score:3, Informative)
It was found by Rear Admiral Grace Murray Hopper, USNR (1906-1992):
http://www.maxmon.com/1945ad.htm [maxmon.com]
http://www.history.navy.mil/photos/pers-us/uspers-h/g-hoppr.htm [navy.mil]
http://www.history.navy.mil/photos/images/h96000/h96566kc.htm [navy.mil]
She was an excellent speaker who could make anybody understand anything, a real gift.
Even the most elementary exercise with your brain would have allowed you to figure out why it couldn't have been Ada Lovelace.
Re:but you get it wrong. (Score:3, Informative)
Edison used the term quite a bit. In fact, it goes all the way back to Shakespeare.
Re:but you get it wrong. (Score:2)
Re: Bug (Score:2)
Wow. I looked it up on the OED and didn't see the information you're talking about. I suppose I could keep searching... care to share a source? I'd like to learn the etymology of "bug."
Thanks!
Whatever the next UI is, it won't be "intuitive" (Score:5, Interesting)
He pointed out that "the only intuitive user interface is a nipple."
Several days ago my wife and I had a new son, so of course I watched them learn (together) how to breastfeed. It was not obvious to either one of them how to make it work -- they had to explore and figure it out together.
It appears that Jef was wrong: even nipples are not an intuitive user interface.
Re:Whatever the next UI is, it won't be "intuitive (Score:2)
I never could figure out if he was wrong, or a genius.
Meaning, even the most seemingly intuitive interface has a learning curve.
I hope things worked out between your wife and son. It can be an extremely frustrating thing for a woman.
Good luck!
Re:Whatever the next UI is, it won't be "intuitive (Score:2)
Jef was that rare jewel -- a visionary who is willing to admit mistakes. The world got a little poorer when he passed away.
Intuitive User Interface (Score:2)
Re:Intuitive User Interface (Score:3, Informative)
Why, yes, I can: societal training. In Bulgaria the opposite gestures apply. In Turkey, "yes" is a back-and-forth shake and "no" is a sort of head-rearing gesture. Don't trust me -- trust Cecil Adams [straightdope.com]...
Re:Whatever the next UI is, it won't be "intuitive (Score:3, Funny)
Re:Whatever the next UI is, it won't be "intuitive (Score:3, Insightful)
But for a breastfed child a nipple on a bottle is an intuitive interface.
Hmmm... Project Looking Glass? (Score:2)
"Sample Augmentation System" (Score:2)
Tired of Eyecandy... (Score:2)
The features from OS X that people find useful, like a visual cue as to where a window is being iconified to, can and have been done in much faster/simpler ways. For as long as I can remember, Afterstep has drawn an outline of windows being iconified, quickly showing the outline spiraling down to, and shrinking into, the icon.
Why is the rest of the GUI stagnating? Keyboard shortcuts are extremely primitive at best
apply it to a calculator... (Score:4, Insightful)
"Right for the Job" is the key phrase.
There are three primary UIs:
the command line (CLI)
the Graphical User Interface (GUI)
and the side-door port used to tie functionality together, known by many different names but in essence an Inter-Process Communication (IPC) port
Together they are like the primary colors of light or paint; take away one and you greatly limit what the user can do for themselves.
But if they are standardized with the recognition of abstraction physics (in essence, what a computer implements), then the user would be able to create specifically what they need for the job they do by understanding and applying abstraction physics. The analogy would be mathematics and the Hindu-Arabic decimal system in comparison to the more limited Roman numeral system.
There are all sorts of user interfaces that can be created, but they are all made up of some combination of the primary three, perhaps lower down on the abstraction ladder but nonetheless there.
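The "three primary UIs over one core" claim can be sketched in code. This is a minimal, hypothetical example (every name here is invented for illustration): a single core operation is reached through a CLI argument parser, a GUI button callback, and an IPC message handler, showing how each interface is just a different path to the same functionality.

```python
# Hypothetical sketch: one core operation, three "primary" interfaces.
import json

def add_tag(photo, tag):
    # Core functionality, independent of any interface.
    photo.setdefault("tags", []).append(tag)
    return photo

def cli(argv, photo):
    # 1. CLI: argv-style strings drive the core call.
    if argv[0] == "tag":
        return add_tag(photo, argv[1])

def on_tag_button_clicked(photo, text_field_value):
    # 2. GUI: a button callback invokes the same core call.
    return add_tag(photo, text_field_value)

def ipc(message, photo):
    # 3. IPC: another program sends a serialized request,
    # e.g. over a pipe or socket.
    request = json.loads(message)
    if request["op"] == "tag":
        return add_tag(photo, request["arg"])

photo = {"name": "sunset.jpg"}
cli(["tag", "vacation"], photo)
on_tag_button_clicked(photo, "beach")
ipc('{"op": "tag", "arg": "2006"}', photo)
print(photo["tags"])  # → ['vacation', 'beach', '2006']
```

Take away any one of the three entry points and some class of user (script writers, point-and-click users, or other programs) loses access to the core operation, which is the comment's point.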
The reason why this is unavoidable is simply due to the nature of programming.
Programming is the act of automating some complexity, typically made up of earlier-created automations (machine language, 0's and 1's, is the first level of abstraction; everything above it is an automation). The purpose of automating some complexity is to create an easier-to-use and reusable interface for that complexity. And we all build upon what those before us have created. It's a uniquely human characteristic that makes it our natural right and duty to apply.
What so-called computer science is guilty of is distraction by the money carrot, starting with IBM and wartime code-cracking paid for by government/taxpayers.
This distraction has kept us from genuine computer science, or abstraction physics, as it would be far more accurately described.
Abstraction physics is the creation and manipulation of abstractions, as mathematics is the creation and manipulation of numbers, and as physics and chemistry are the creation and manipulation of elements existing in physical reality.
With the three primary colors of paint you can paint anything you want, but you cannot call a painting "the painting" any more than you can call a mathematical result mathematics. Nor can you call some interface built upon the primary UIs the silver bullet of UIs.
All this will become much more clear, common, and even second nature once we all get past the foolish, fraudulent idea that software is patentable.
A Roman numeral accountant, defending his vested interest in math with Roman numerals, claimed that only a fool would think nothing could have value (re: the zero placeholder in the Hindu-Arabic decimal system).
No, no, user interface design isn't about gimmicks (Score:3, Insightful)
Apple, in its early days, had a good sense of what was important in a user interface, and that was expressed in the "Apple Human Interface Guidelines". Much of that knowledge has been lost.
One of the original Apple rules was "You should never have to tell the computer something it already knows". Consistently applying this rule requires a clear separation between information about the host environment and individual user preferences, something most programs don't do well. Apple was reasonably faithful to that rule in their early days, but over time got sloppy. Microsoft never did as well, and it was an alien concept in the UNIX world.
It's common, but wrong, to bind environment decisions at program install time, which means that a change in the environment breaks applications in mysterious ways. The whole concept of "installers" is flawed, when you think about it. You should just put the application somewhere, and when launched, it adapts to the environment, hopefully not taking very long if nothing has changed. That was the original MacOS concept.
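The adapt-at-launch idea can be sketched as a small example. This is a hypothetical illustration (the function and keys are invented, not any real installer API): host-specific facts are re-derived every time the program starts, so nothing bound at install time can go stale when the environment changes.

```python
# Hypothetical sketch: resolve environment details at launch time
# rather than freezing them at install time.
import os

def resolve_environment():
    # Re-derive host-specific facts on every launch, so moving the
    # app or changing the machine can't leave stale install-time data.
    return {
        "home": os.path.expanduser("~"),
        "temp": os.environ.get("TMPDIR", "/tmp"),
        "cpus": os.cpu_count(),
    }

env = resolve_environment()
print(sorted(env.keys()))  # → ['cpus', 'home', 'temp']
```

Because the lookup is cheap, "hopefully not taking very long if nothing has changed" comes for free; an installer that wrote these values into a config file once would break the moment the user's home directory or machine changed.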
Much of the trouble comes from failing to distinguish between primary and derived sources for information. "This program understands .odf format" is primary information, and should be permanently associated with the program itself and readable by other programs. ".odf documents can be opened with any of the following programs" is derived information, and should be cached and invalidated based on the primary information. "I would prefer to open .odf documents with OpenOffice" is a user preference. None of the mainstream operating systems quite get this right. That kind of thing is the frontier in user interfaces, not eye candy.
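The three kinds of information the comment distinguishes can be sketched as follows. This is a minimal, hypothetical example (the data and function names are invented, not any OS's actual registry): primary data ships with each program; derived data is recomputed from it rather than edited; preferences live in a separate, user-owned layer.

```python
# Hypothetical sketch of primary vs. derived vs. preference data.

# Primary: shipped with each program, readable by other programs.
programs = {
    "OpenOffice": {"understands": {".odf", ".doc"}},
    "AbiWord":    {"understands": {".doc"}},
}

# Derived: computed from the primary data. When a program is added or
# removed, this is rebuilt, never hand-edited, so it can't go stale.
def handlers_for(ext):
    return sorted(name for name, info in programs.items()
                  if ext in info["understands"])

# User preference: a separate choice among the valid handlers.
preferences = {".odf": "OpenOffice"}

def open_with(ext):
    candidates = handlers_for(ext)          # derived, always fresh
    pref = preferences.get(ext)
    if pref in candidates:
        return pref
    return candidates[0] if candidates else None

print(handlers_for(".doc"))  # → ['AbiWord', 'OpenOffice']
print(open_with(".odf"))     # → OpenOffice
```

The failure mode the comment describes is collapsing these layers: if ".odf opens with OpenOffice" is stored as a single flat fact, uninstalling OpenOffice leaves a dangling association instead of falling back to the next valid handler.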
nice (Score:2)
Will there be anything that can do better than bash by adding extra graphical whizziness? Thus far all I've seen is that bash can be wobbled, which isn't an improvement. GUI improvements are nice to see, mind, when they're aimed at aiding physically disabled people.
Re:In other news... (Score:3, Funny)
That's easy. It's at C:\Files\Home\Photos\1997\Family\Snaps\*.jpg
duh.
Not tame users, tame designers. (Score:3, Interesting)
In essence I'm agreeing with you; there certainly haven't been very many really radical designs, and because of that
Re: (Score:2)
Unless Microsoft breaks the idiom by having the tabs on multi-row tabbed interfaces dynamically move around to put the active row at the bottom. If you switch to a different row to check something and want to go back to where you were before, you have to keep track of the fact that the row ordering has magically changed. Here's a hint, guys: if the window has more than 1 row of tabs, it can be separated into 2 windows.
Re: (Score:2)