Why Software Sucks, And Can Something Be Done About It?
CPNABEND tipped us to a story carried on the Fox News site, pointing out that a lot of programmers don't understand their users. David Platt, author of the new book 'Why Software Sucks ... And What You Can Do About It', looks at the end user experience with end user eyes. While technically inclined individuals tend to want control, Platt argues, most people just want something that works. On the other hand, the article also cites David Thomas, executive director of the Software & Information Industry Association. His opinion: Users don't know what they want. From the article: "'You don't want your customers to design your product,' he said. 'They're really bad at it.' As more and more software becomes Internet-based, he said, companies can more easily monitor their users' experiences and improve their programs with frequent updates. They have a financial incentive to do so, since more consumer traffic results in higher subscription or advertising revenues." Where does your opinion lie? Should software 'just work', or are users too lazy?
one example of too many (Score:5, Insightful)
One example I encounter almost every day is the notion of a computer's "state". People just want to turn something off and on, but that simple idea doesn't map easily onto computers.
So, there is this myriad of "state" combinations, not too complex for slashdotters to understand but off the scale for lay users. It doesn't help that we use "our" terminology. I've stopped trying to explain and describe the difference between "hibernate" and "standby".
Files, directories, logical drives... all foreign and abstract curiosities to computer users -- most are technical artifacts of early abstractions. It's no wonder this lexicon ripples out to the general population; unfortunately it's of no use to general users and mostly to their detriment.
I don't know how to get there, but users/people want computers to behave like toasters. They want very simple, limited-option, intuitive behaviors. Not all software lends itself to that, but I think there is a much happier middle ground, and the group that can move is the programming group. I don't think the general population will ever educate itself about the differences between relational and hierarchical databases, or between the NTFS and VFAT file systems, nor do I think it should be asked to.
The closest I've seen to getting "there" in computers is probably Apple... I've seen novices sit in front of Apples and almost immediately be able to be productive.
The second closest I've seen is Unix/Linux, etc... not so much because of its ease-of-use, but because it's one of the most consistent "flavors" of computing I've experienced (NOTE: I'm not discounting the complexity of Unix, it's certainly not for novices, but at least it's consistent).
One of the most popular applications I've written was one where the interaction with the user was basically a single input field, a la Google. Users would instinctively type anything in the input field, and the application would do a pretty decent job of offering meaningful results. Analysis of logs showed users typically received meaningful results from their "input" 80-90% of the time. Granted, it was a narrowly defined application, but I've seen indecipherable interfaces on top of narrowly defined applications.
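For the curious, that 80-90% figure came from nothing fancier than counting log lines. A crude sketch of the measurement in Python -- the log file name and its tab-separated format are invented for illustration:

    # Hypothetical log: one "query<TAB>results_shown<TAB>clicked" line per search.
    hits = total = 0
    with open("queries.log") as log:
        for line in log:
            query, results_shown, clicked = line.rstrip("\n").split("\t")
            total += 1
            # Count a search as "meaningful" if it returned results
            # and the user actually clicked one of them.
            if int(results_shown) > 0 and clicked == "yes":
                hits += 1

    print(f"Meaningful-result rate: {100 * hits / max(total, 1):.1f}%")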
The best general computing out there is something I'd predicted long ago: devices built for narrowly defined, specific uses, with high-powered computers underlying the gadgetry transparently (think TomTom (GPS), iPod (no, I'm not a fanboy), etc.).
Ironically, or perhaps paradoxically, the most dominant technology available is the least intuitive to just sit down in front of and use. Of course, there is a latest-and-greatest new version out this year that should fix all of that.
Bottom line, my opinion, users are not lazy, they just want to get some work done without needing the equivalent of a Bachelor's in Computer Science to get that work done.
Let's draw back... (Score:5, Insightful)
If people are bad at figuring out what they want from a computer, and terrible at designing (which, yes, they are) then maybe the problem is that the computer sucks. General-purpose computing is best left in the hands of experts. That model worked for 20-mumble years, and it was a good one. It still is, if you need to get industrial-grade stuff done.
But "personal computers," to be distinguished from "desktop computers," are a bust. Ordinary people can't deal with the complexity, and attempts to make computers act like a friendly thingy with stuff on it all fail because the computer isn't a friendly thingy with stuff on it. It's a computer.
People need, say, the Pure Digital video camera that lets you take digital video with one button, has no memory cards, and runs on AA batteries. They need the microwave oven with the popcorn button. They need the car with a computer in it so they don't have to know when to use the choke. Special, optimized uses of computers work great for ordinary people.
People aren't stupid, they just don't act like a computer. Maybe there's a lesson there.
Apple gets it right. (Score:5, Insightful)
I've been a software developer for nearly a decade. There are two extremes to this: ignoring your customers, and letting them run the development. Both are bad. The best path is to have some intelligent people in your company who sit between customers and developers and act as a translation layer. Throw out the ideas you can't implement; give them the good ones. These people have to be at least partially developers themselves; they serve as architects as well as PR.
Customer Ideas -> Architects -> Code Monkeys
Fine, not lazy (Score:3, Insightful)
Computing -- especially in a *globally networked* environment -- is *in fact* complicated. Doing it responsibly, in a way that doesn't wreck the environment for others (cf. botnets), is difficult. Many of the users who "just want to get some work done" outsource the complexity, but don't mind if the network suffers the externalities, because they don't feel like learning what true security requires.
If someone doesn't want to learn to drive, they have public transportation and taxis available to them, and God bless 'em. But taxis and buses don't damage the roadways and the other vehicles on them during ordinary use.
Basically I sometimes wonder whether putting a PC in every home was such a hot idea after all.
Users don't make buying decisions (Score:5, Insightful)
In most cases in business, users aren't the ones making software buying decisions. The organization makes choices for them based on a number of factors. There's no conspiracy against usability, it just has to compete with cost, features, regulatory compliance, and other considerations. Software developers naturally target the criteria that drive purchase decisions, even if the result is a compromised user experience.
Re:one example of too many (Score:5, Insightful)
Well said, yagu. For a good illustration of the truth of what you've written, try teaching a Computer Literacy class for adults who have never used a computer before. I got questions like "what's a mode?" and "what are these little arrow keys for?". If normal humans -- the kind who don't read Slashdot -- have trouble with concepts like modes and arrow keys, you can imagine how difficult it was for them to understand that, when their Word document disappeared from the screen when they minimized the window, it did not also disappear from "the computer", but was sitting somewhere invisible to them.
I think it would serve every programmer well to spend some time teaching novices how to use something the programmer finds simple, such as the Windows calculator, Notepad, etc., to see how "normal" users think and react.
More FOX anti-intellectualism (Score:0, Insightful)
CPNABEND tipped us to a story carried on the Fox News site, pointing out that a lot of programmers don't understand their users
Gee, an article by FOX News stating that eggheads don't really know what the hell they are doing. How completely out of character for them to bash the scientists and engineers that keep this country from completely collapsing.
This is just a little bit crazy. (Score:5, Insightful)
For instance, the "Save" button. He argues that a statement that says "Do you want to save your changes before you exit" is a hard sentence, and that "Do you want to throw away everything you just did" is a clearer sentence.
The word "save" isn't that hard of a word to grasp. People save money. People save possessions. Saving documents is no different. Grade schoolers understand it.
What really cracks me up, though, is that he argues that when deleting documents, there should be *no* confirm. There have been a few times when that safety net was really helpful, when I've accidentally hit the delete button or selected delete, and then said "No, I don't really want to delete this file." He compares it to starting a car, where the car doesn't ask you if you want to start the car or not. This is a horrible analogy: the last time I checked, turning a key didn't do something as devastating as, say, deleting your car.
I deal with end users every day, and I've had many of them admit that they don't read error messages or confirm dialogues. If they don't read it, what difference does it make what's included in the dialogue? I've made messages that were very easy, simple to read and understand, only to have them overlooked.
Next, the author mentions that error messages need to state *why* something failed. Wait a second... I thought he was just arguing for simpler error messages, but now he wants to know specifically what happened? That's not exactly simplifying things for the end user.
Now, I'm not saying that it's all the fault of the end users. There are some rather atrocious error messages out there, but it'd be safe to say that there are more end users out there that don't read things carefully. Computers are a tool, not a replacement for thinking, and users need to know that in order to get the maximum use out of technology.
User Centered Not User Designed (Score:5, Insightful)
RANT: Designing good, easy-to-use software is not as hard as many people seem to think, although writing it is harder than what most people do now. Users are not good at designing software, but only the user knows what they want to do and how they want to do it. This should be the beginning of the UI design: "What does the user need to do, and how can they do it most effectively?" This should be almost completely divorced from how the program goes about providing the functionality. Usually, the UI should be up and running before the back end is really started (see the sketch below). Most software today is designed the other way around: "We can make software that does this and this and this; now how can we let the user get to those features?"
The term "user centered" is in contrast to feature-centered or engineering-centered. Users should not be designing it, but you do need their input and testing to see what works and what doesn't. Follow the basic rules of UI development and you can avoid many obvious problems, but at some point you need users to show you what you missed.
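One common way to practice that UI-first ordering is to build the interface against a stub of the back end. A minimal sketch in Python -- the application, class names, and canned data are all invented for illustration:

    class AddressBookBackend:
        """The interface the real engine will eventually implement."""
        def search(self, text):
            raise NotImplementedError

    class StubBackend(AddressBookBackend):
        """Canned data, so the UI can be designed and user-tested
        before any real engineering starts."""
        def search(self, text):
            people = ["Alice Anderson", "Bob Brown", "Carol Clark"]
            return [p for p in people if text.lower() in p.lower()]

    def run_ui(backend):
        # The UI only knows the interface, not the implementation,
        # so swapping in the real back end later changes nothing here.
        while True:
            text = input("Find a person (blank to quit): ")
            if not text:
                break
            for match in backend.search(text):
                print(" ", match)

    if __name__ == "__main__":
        run_ui(StubBackend())

The point is that the "what does the user need to do" question gets answered, and user-tested, while the back end is still completely swappable.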
Even Google can't do it... (Score:4, Insightful)
Bottom line, my opinion, users are not lazy, they just want to get some work done without needing the equivalent of a Bachelor's in Computer Science to get that work done.
But what if it's simply not possible to make things so simple that the average Joe can "just do it"?
Everyone uses Google's search box as an example, but the fact is that that box is the front end of a task that is very easy to describe - "show me a list of documents that more or less relate to these words".
As soon as you stray from there into some of Google's other functionality, you are into some far more complex screens that I personally have heard people confused by. Well-designed though they are, it sometimes just takes a few fields, links, and words to make the interface powerful enough to be useful for the task at hand. This is even more so when there are financial ramifications to the task at hand, immediately requiring history, confirm dialogs, balances, etc., etc.
As computer gurus our very DNA is infused with the belief that we can build it, and make it so simple anyone can use it.
Personally, I find that this feeling diminishes as the project progresses. Sometimes because we don't have access to Google's level of funding for UI design, usability testing, etc. But often, in my opinion, because some tasks simply can't be made simple.
Most Users Just Want to Get On With the Job (Score:5, Insightful)
Most programs seem to come with more bells and whistles than they need, but then I guess they are trying to provide all the tools that I *might* be looking for in one package. I have never used more than about 10% of the features in any office suite, for instance; mostly I just want to present a document containing well-formatted text in the font I want.
The only place I appreciate complex software is in the areas where it suits my needs -- a good IDE, editor, graphics and sound manipulation software, and the games I play. Outside of that, most software is more hassle than it's worth, and I resent having to learn to use new programs just to achieve one tiny task.
I think the answer is coming in individual devices that serve specific functions and don't try to go beyond those functions. My cellphone has no camera, no email, no web browser, etc., but it does let me make and receive calls. That's all I need it to do. If I wanted the bells and whistles I woulda shelled out $350 Cdn for a RAZR.
Re:In my Opinion (Score:3, Insightful)
I think it is reasonable to say that some developers fail to realize that making a program familiar and consistent is very helpful.
Asking on Slashdot? Let the love-fest begin! (Score:5, Insightful)
No frickin' kidding.
If you give users a choice between two mutually exclusive features, they will answer "yes". They will then complain at needing to pick one at runtime (or complain that you didn't include the other option, if you made the choice for them).
If you ask them if they need provably-never-used features X, Y, and Z, they will vehemently insist they do. They will then complain that the final product confuses them with far too many features they don't need.
If you ask them how they want something to work, they will either A) shrug their shoulders (then later complain you didn't listen to their input); B) lie to hide their own abusive behavior (then later complain that they can no longer get to their por - ahem - family photos); or C) give a long, detailed explanation of what they want (then later ask what madman came up with how the final product works).
Should software 'just work', or are users too lazy?
Both. Software should do one task very, very well. If it doesn't try to manage photoalbums while doing your taxes and making coffee, it can perform its function well while not overwhelming the user with confusing options.
At the same time, users need to realize that computers have FAR more complexity of control than their car. In most states, to drive a car, you need to have reached a minimum age, pass certain tests of physical capability, take a six-week training course and pass a written test on that material, and finally take an actual road test to prove you can handle a vehicle -- and even after all that, you usually have only a probationary license until you've remained incident-free for a few years. Yet software should "just work"?
Where can I sign up to sue Chrysler over my car not automatically driving me to work (with an unannounced side trip to the grocery store) when I get in and turn on the wipers?
Re:one example of too many (Score:3, Insightful)
I don't think that engineers are lazy, at least, not always. But that statement leads into this one: lazy is subjective. If a programmer failed to implement a feature that I think would probably be easy, then I think he's lazy. Does that make him lazy? Just to me.
I also think that computers CAN be intuitive, but only by more closely mimicking the way we work without computers. Firstly, the mouse is nonintuitive. It's a concept grasped easily enough, but nothing else works that way. A pen or even a simple pointing interface (pointing at things, as in the Wii remote) is dramatically more logical. Arguably though, you can't really call computing intuitive until you can't tell you're computing any more. An immersive environment which is used naturally (through gloves and such) with full haptic feedback and the like is going to be the first intuitive interface... unless we get a useful natural language interface. Both have been a long time coming but the VR thing looks more likely to happen soon simply because all the parts are already here and in use.
Back to the issue of lazy engineers... Perhaps the OS is not doing enough to help them? I mean it would be a lot easier to (for example) manage data if the filesystem were a database. Yet we still haven't seen that happen anywhere but BeOS in spite of everyone and their mother promising it to us. I think tradition is the single largest impediment to advances in computing.
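To make the filesystem-as-database idea concrete, here's a toy sketch in Python (the query parameters are invented): the kind of metadata query BeOS's BFS could answer from an index, done here the slow way with a tree walk, which is roughly what every application has to reinvent today:

    import os
    import time

    def find_files(root, min_size=0, newer_than_days=None, ext=None):
        """A query over file metadata -- what a database-backed
        filesystem would answer from an index."""
        cutoff = time.time() - newer_than_days * 86400 if newer_than_days else 0
        for dirpath, _dirs, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                try:
                    st = os.stat(path)
                except OSError:
                    continue  # broken symlink, permission problem, etc.
                if (st.st_size >= min_size and st.st_mtime >= cutoff
                        and (ext is None or name.endswith(ext))):
                    yield path

    # Roughly: SELECT path FROM files WHERE name LIKE '%.jpg'
    # AND modified within the last 7 days.
    for path in find_files(os.path.expanduser("~"), newer_than_days=7, ext=".jpg"):
        print(path)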
Re:one example of too many (Score:5, Insightful)
That's all good and fine, but there are cases, many, many cases, where users aren't able to use even the simplest interfaces. This is to be expected, as the people unable to use these interfaces tend to be older, while younger people immediately know how to use them regardless of previous training, because they are at least used to the idea of an interface.
I used to work at Wawa, and I can't even tell you how many people used to complain about how the touch-screen ordering system was oh so complicated. The entire thing was self-explanatory. You touch what type of food you want, then touch the ingredients, then hit complete. Not exactly rocket science. For these people, even using a touch screen to manipulate words is something they are uncomfortable with. We cannot stoop to this level of illiteracy and design software to accommodate it. They simply cannot be accommodated. People need to learn to read and interact with a basic interface; if they can't, then they will get left in the dust, same as other dinosaurs.
Too much disconnect (Score:2, Insightful)
Too many times a project goes like this: Customer places request. Project Mgr talks to client. Requirement Analyst turns request from PM into low level requirements. Programmer reads requirements document, writes program. User gets program and guess what? It's not what he wanted! So, he places another request, and we are back to square one. Sound familiar?
Users request crazy things. Sometimes, they ask for things to work around other problems. The person writing the software should know, not what the Requirements person thinks the user wants, but what the user is actually trying to accomplish and why. What is the user trying to do? Then, the programmer should make a proposal, and the necessary parties should either agree or disagree. This means that some requirements people are out of work; it means that the programmer has to be smart and communicate well, and that he has to spend time talking to users. And therein lies the problem.
We have IT departments that are so fragmented, and the people in them are so specialized. Programmers often suck at talking to people (and this is a reason why offshoring is so unproductive). Requirements analysts often have no concept of (programming) reality. Project managers are MBAs who should be working in marketing. And don't even get me started on the unrealistic timelines for delivering software. Like the old adage goes, you can pick only two of the following: good, fast, cheap.
The solution? Teach programmers to communicate! Requirements people should also be programmers. Maybe that's where you put the "programmers" who don't quite make the cut. Too many suits in IT, where there should only be geeks. Geeks who know how to communicate. Keep the suits in HR, Financial, Marketing, etc.
More software would "just work" if this approach were followed. One last thing: the user has to commit to a process. You cannot design an application if there are no business processes to code to. If there's a clearly defined process, more communication, and no death-march mandates, software won't suck.
Re:This is just a little bit crazy. (Score:5, Insightful)
The word "save" isn't that hard of a word to grasp. People save money. People save possessions. Saving documents is no different. Grade schoolers understand it.
Part of the problem is that computers intimidate users. They never know if it is going to break when they do something. "Save" is a term that is strongly associated with computers these days. Saving a file and saving changes aren't so much "saving" as they are writing something to a semi-permanent record. They don't fit well with the document/folder metaphor because on paper people save a file or they toss it, they don't save part of a file or undo all the writing they have done in the last hour but keep the file itself and the old work. On the back end saving changes or saving a new file is pretty much the same thing. You write to disk. It is not so in the minds of many users.
What really cracks me up, though, is that he argues that when deleting documents, there should be *no* confirm.
It is hard to see what the author is arguing from this brief bit, but he's right that there should not be a dialogue confirmation. Users already have a trash can they can look through, and it properly asks for confirmation. When you delete a file, it goes to the trash, and you can always take it back out. The huge number of dialogue boxes, particularly on Windows, is a classic design flaw.
If they don't read it, what difference does it make what's included in the dialogue? I've made messages that were very easy, simple to read and understand, only to have them overlooked.
Many dialogue boxes don't even give the user a choice, and most users simply click "OK" over and over again until it is a conditioned response. Worse than the number of dialogues is Windows' penchant for keeping the buttons the same, which facilitates this behavior. Is it so hard to have it say, "Do you really want to throw this file away? (Throw it away) (Don't throw it away)"? With such a message the user must read at least the button, at which point they know what action is being taken, because the button is itself an action, not "OK".
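A minimal sketch of such a dialog in Python/Tkinter (the file name and wording are made up); the point is simply that the buttons name the actions:

    import tkinter as tk

    def confirm_delete(filename):
        """Ask with action-named buttons instead of OK/Cancel."""
        result = {"delete": False}
        win = tk.Tk()
        win.title("Delete file?")
        tk.Label(win, text=f'Do you really want to throw "{filename}" away?').pack(padx=20, pady=10)

        def choose(delete):
            result["delete"] = delete
            win.destroy()

        row = tk.Frame(win)
        row.pack(pady=10)
        # Even a user who reads nothing but the button knows what
        # clicking it will do.
        tk.Button(row, text="Throw it away", command=lambda: choose(True)).pack(side=tk.LEFT, padx=5)
        tk.Button(row, text="Keep it", command=lambda: choose(False)).pack(side=tk.LEFT, padx=5)
        win.mainloop()
        return result["delete"]

    if __name__ == "__main__":
        print("Deleted!" if confirm_delete("report.doc") else "Kept.")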
Next, the author mentions that error messages need to state *why* something failed. Wait a second... I thought he was just arguing for simpler error messages, but now he wants to know specifically what happened?
Messages need to be fewer and clearer, not necessarily simpler. Adding more information in a dialogue is just fine, so long as it is properly constructed.
There are some rather atrocious error messages out there, but it'd be safe to say that there are more end users out there that don't read things carefully.
Yeah, and dogs salivate when you run the can opener. If you build a system that operantly conditions people, you bloody well shouldn't expect them not to be conditioned, especially when they're just trying to get things done and don't care about using the computer at all. It is a tool, and a badly designed one in many ways.
Re:Fine, not lazy (Score:5, Insightful)
Computers, right now, require you to be a mechanic to drive the car, and users don't want to be mechanics. They want to get their work done. Part of this is changing user expectations (so that they know to get routine maintenance from someone trustworthy), but part of it is building systems that can survive routine wear and tear for an extended length of time, without the intervention of computer "mechanics".
Good, fast UI (Score:5, Insightful)
So, exactly like you said, there's less risk in turning the key to your car if there's no chance that sometimes it will mean your car disappears. If there were that chance, you'd have to train yourself to check and double-check the state of your car before turning the key. This would slow you down quite a bit, and would be bad UI.
Instead of just deleting the car, the car's UI could confirm with you (similar to popping up a dialog) when it seemed like you were doing something that you might not want to. Or it could keep you from doing it altogether, although that would mean less capability.
However, a better solution is to make everything undoable, quickly and easily. In the case of deleting files, if you delete files, they are deleted. If you save over a file, the previous contents are gone. But if you want to bring them back, make it easy and always possible. For much of computing history, that wasn't really feasible, due to performance and storage constraints, so designers opted for confirmation dialogs. But those technical limitations are much closer to being removed now, at least for simple interactions by untrained users. For those playing at home, see Apple's Time Machine [apple.com]. For more complex interactions, pushing the limits of the machines further, I imagine you'll still rely on better-trained users.
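A bare-bones sketch of undoable deletion in Python (the trash location is made up for illustration): "delete" just moves the file aside, so no confirmation is needed:

    import shutil
    from pathlib import Path

    TRASH = Path.home() / ".mytrash"   # hypothetical trash directory

    def delete(path):
        """'Delete' by moving into the trash -- cheap to undo,
        so there is nothing to confirm."""
        TRASH.mkdir(exist_ok=True)
        target = TRASH / Path(path).name
        shutil.move(str(path), str(target))
        return target

    def undelete(name, restore_to="."):
        """Undo: move the file back out of the trash."""
        shutil.move(str(TRASH / name), str(Path(restore_to) / name))

A real trash can also has to handle name collisions and eventual emptying, but the UI point stands: an action that is easy to undo needs no "Are you sure?".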
Re:Let's draw back... (Score:5, Insightful)
When asked why it took him so long to get to a free land, he replied that they had to wait for all the former slaves to die off, since only their kids could be truly free.
My point here is that most kids (9+ years) these days have no problem getting their family computer to do just about anything they need, so in 20-some years, when their parents pass away, all the "luser" issues will go away.
Re:Of course it should just work. (Score:3, Insightful)
To take this one a little farther: If you give someone a guitar rather than a radio, they can produce content. The person with a radio can only consume. Producing content will always be more complicated than consuming it (law of entropy-ish).
(Tangent: There are definitely different degrees of difficulty on the production side, though. There was an article I saw (probably on here) about interface design needing to be simple but powerful. A lot of interfaces can get very powerful, but very complex (see Vim, of which I'm a fan, but still), or very simple, but very weak (see Notepad, to stick with editors). A new user needs the simplicity, and an experienced user needs the power.)
Re:one example of too many (Score:5, Insightful)
From a technology standpoint we programmers think in terms of how the underlying stuff works. To us, it's clear what hibernate and standby are doing, why they're different and what the relative advantages and disadvantages of each technology are. However, in being so focused on the underlying technology and how it works, we start overlooking the problem that both technologies are trying to solve, which is this: how to extend the life of a computer (computer's battery, in the case of a laptop) when a computer is left on but is not in use--and do it in such a way that the computer can come back on relatively quickly when the user comes back.
Users want us to solve problems, we want to provide technology.
And so when the user wants to solve the problem "I walked away from my laptop for an hour; please make it so the battery doesn't drain dry when it is idle", we come back with "well, we have sleep and standby and hibernate; hibernate is really cool because the computer is almost completely powered off, but standby allows the computer to come back a lot faster" -- and of course we're going to get a glazed look in the poor user's eyes. All he wants is to come back, jiggle something, and have the computer come back to life.
Unfortunately, because we talk about providing technology and the user wants to solve problems, we then wander off grumbling "stupid lusers; they're not willing to learn how to use their computer." And the poor users stumble off grumbling "why do they make these damned things so hard to use? I don't care about bits and bytes; just tell me what I need to do so I can get my important work done."
The really ironic part is that users are not stupid -- contrary to about 90% (caution: made-up statistic) of technologists' complaints. They just happen to have a different job than us. I mean, it's easy for us to look at some poor overworked doctor (for example) and claim he's a moron because he doesn't know the difference between suspend and hibernate -- but then, the reason he doesn't know the difference is that he's more worried about knowing the difference between opioid and non-opioid drugs, and which class of drugs will better relieve his poor cancer victim's pain.
Re:This is just a little bit crazy. (Score:3, Insightful)
In addition, here in the UK, almost all cars have manual transmission. I can't remember the last time I got into a car in the UK that had automatic transmission. You can get automatic transmission if you want (on probably almost any new model now, I'm guessing), but you have to request it.
Does this mean that the UK is populated entirely by programmers? If so, how come I have to help people with their computers so often?
false dichotomy (Score:3, Insightful)
These choices are a false dichotomy. It is possible to have products which just work and which allow users to access more advanced features (and rewards them for learning a little more about what they're trying to use). The UI principle [which should be] at work is called "progressive disclosure": don't overwhelm the user with stuff they need to know or complex steps they need to follow for basic tasks to be accomplished, but let them work their way up to it.
A good example is the UI of a well-designed VCR. Power-on and Play are big buttons right on the front, and the more complicated stuff is behind a flip panel. My non-/. parents don't want to program a Mars rover; they just want to put in the tape of their grandchildren and watch it. On the other hand, my wife, who doesn't want TiVo, programs complicated, recurring weekly recording schedules; she took the time to learn how to do it, and has figured out on which VCR you just hit Power-off and on which VCR you have to hit Power-off and Timer together. And I just want to flip the panel and find some arrow buttons so that my parents' VCR isn't flashing 12:00 while I'm trying to visit with them.
If you want to do something more sophisticated, you need to expect to learn a little about the application you're using; and IMHO most reasonable people are willing and try to do so. But you should be able to just push Play without knowing which codec was used.
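A toy sketch of progressive disclosure for a command-line tool, in Python (the tool and all its flags are invented): the everyday options are documented up front, and the long tail only appears when asked for. Note that argparse.SUPPRESS hides an option from the help text while the parser still accepts it:

    import argparse
    import sys

    def build_parser(show_advanced=False):
        def adv(text):
            # Advanced options are accepted either way, but only
            # documented when the user asks to see them.
            return text if show_advanced else argparse.SUPPRESS

        parser = argparse.ArgumentParser(prog="recorder",
                                         description="Record a show (toy example)")
        # The 'Play button on the front panel': what everyone needs.
        parser.add_argument("channel", help="channel to record")
        parser.add_argument("--at", default="20:00", help="start time (HH:MM)")
        parser.add_argument("--advanced-help", action="store_true",
                            help="show the advanced options as well")
        # The 'flip panel': power-user options.
        parser.add_argument("--repeat", choices=["daily", "weekly"],
                            help=adv("recurring recording schedule"))
        parser.add_argument("--quality", choices=["sp", "lp", "ep"], default="sp",
                            help=adv("recording quality"))
        return parser

    if __name__ == "__main__":
        if "--advanced-help" in sys.argv:
            build_parser(show_advanced=True).parse_args(["--help"])  # full help, then exit
        args = build_parser().parse_args()
        print(f"Recording channel {args.channel} at {args.at}")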
Re:one example of too many (Score:3, Insightful)
But why why WHY? This is basically what most people in the Office thread seemed to be bitching about, but it looks like most of them never tried the new system. "Whaaa, it's not what I'm used to!!" they said, and so do you.
Maybe the old menu idea is broken, ever thought of that? Or if it's not, it's simply not the best solution now that the average display resolution is so much higher. Sure, there are plenty of programs which try to come up with their own idiotic skinnable interfaces, which are almost always completely unusable. Reminds me of certain WMP versions and some other software I'd rather not mention.
But if done right, like in the Office case IMO, it can help new users familiarize themselves with the software quickly, and increase the efficiency of power users.
Re:Software. Not currently Science or Engineering. (Score:5, Insightful)
I'm a developer, not an engineer. To me, that means that I don't follow any formal methodology, don't belong to the local professional engineering organization, and don't necessarily have a degree. My style is more based on what I learned in my High School English courses than anything else, and is largely the result of many years of experimentation.
That description is the reason you either want or don't want a Software Engineer. Engineering is a slower process. It is rigorous and formal and based on mathematics. The results can be exactly duplicated, even if you have entirely different engineers working on it. When I write software, I do what many people call "hacking". Often, I write only the documents that are required to firmly establish the concept in my mind, then just keep writing and debugging code until it works. For many applications, I will write software that is equally robust in less time. That's because you don't need an engineer to design a blogging application.
Software Engineering is used in much larger, mission-critical applications, like a financial institution's transaction processor, or a real-time monitoring system, etc. Mistakes cost millions of dollars or even lives, so every possible scenario needs to be considered up front (BDUF). Hacking isn't like engineering, and that's one process of producing software. Software Engineering is exactly like engineering and that's another process of producing software.
mandelbr0t
Re:one example of too many (Score:4, Insightful)
If it were really self-explanatory, then they wouldn't have a problem with it, now would they? Unfortunately, what might seem self-explanatory in hindsight to the developer, or to someone like yourself who's around it 8 hours a day, can be completely baffling to someone who's never seen it before. Take, for example, all those web sites with Flash navigation that force you to poke around with your mouse, trying to guess where the menu is. The developers that created those atrocities thought they were "self-explanatory" too; the rest of us want to beat the developers with a stick.
Same old argument, dressed up.. (Score:3, Insightful)
If you want to toast bread, buy a toaster. If you want to print photos, get a photo printer, no computer necessary. If you want to play a game, load up a Playstation.
Why buy a computer?
Because you're getting a multi-function device. That's putting it simply. It's a nearly unlimited-function device. Everyone wants to do something different with them. How simple can you possibly make something like that, and yet still have it be useful?
I really don't buy the whole "Computers are too difficult" argument anymore. You sit anyone down in front of ANY machine now (Windows, KDE/Gnome, MacOS) and they'll play around and figure out how to open up the web browser. They'll click the mail icon and get to e-mail. They'll find a word processor if one is installed. I mean, you really gotta be a bottom of the barrel dipshit to not understand how to move a mouse cursor and click things. No degree required.
So you're presented with a user interface, while not perfect on any system, that's pretty easy to figure out. If you can figure out how to plug the computer in, you can figure out how to use it in a basic way. The moment you want to do something other than the basics, you move squarely out of toaster land.
That's not to say things couldn't be better (and improvements are made all the time), but I don't share the doomsday view of people in general as computer users with some odd disposition that keeps them from ever being able to use computers. The only way I can see some giant leap in computer usability is when you can talk plainly to a computer and get responses from an AI-type system. Think Star Trek.
Plus, let's be realistic: If computers were THAT hard to figure out, why in god's name have so many of them been sold? Wouldn't the word be out by now, that you need a degree to use them?
Re:one example of too many (Score:3, Insightful)
Ah, but couldn't you make a system that simply saved changes in realtime? Why should we expect users to save their work? Is it just the principle of the matter? Is saving work some kind of fundamental lesson, without which users would become lazy or complacent or something?
Automatic transmissions have sheltered people from having to worry about specific gears. Why not shelter computer users from worry about specific files?
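For what it's worth, the realtime-save idea is only a few lines of code. A bare-bones sketch in Python (the file name and two-second delay are arbitrary): every edit schedules a write, debounced so a burst of typing produces one save:

    import threading

    class AutoSaveDocument:
        """Persists itself shortly after every change -- no Save button."""

        def __init__(self, path, delay=2.0):
            self.path = path
            self.delay = delay        # seconds of quiet before writing
            self.text = ""
            self._timer = None
            self._lock = threading.Lock()

        def edit(self, new_text):
            self.text = new_text
            with self._lock:
                # Debounce: restart the countdown on every change.
                if self._timer:
                    self._timer.cancel()
                self._timer = threading.Timer(self.delay, self._save)
                self._timer.start()

        def _save(self):
            with open(self.path, "w") as f:
                f.write(self.text)

    doc = AutoSaveDocument("letter.txt")
    doc.edit("Dear Mom,")        # two seconds after the last edit,
    doc.edit("Dear Mom, hi!")    # letter.txt is written automatically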
The bottom line is that most people simply don't care about the underlying complexities of computers or automobiles. It isn't laziness. And it isn't necessarily a limitation on their part. They just don't care. Computers just don't interest most people like they might interest you or I. So if there are unnecessary complexities in an interface, I say take them out. If people can find 90% of what they want on the internet, for example, with a single Google input box, that is ideal. It would be counterproductive to present each user, by default, with a complex "advanced" search form just because it might make searches slightly more effective.
I think it will surprise you. Going back to the car analogy... think of how "savvy" people have gotten with cars. They know all the brands. They have some idea of different fuel types. They know the difference between an SUV, a sedan, etc. But you know what? After 100 years of automobiles, the vast majority STILL don't understand any of the internal complexities. And in many ways they know even less, because cars today are generally so reliable (relatively speaking) that there is little reason to even open the hood.
I think in 2030, users will be savvy in the sense that they are savvy with automobiles today. They'll know how to integrate computers into their lives and even do some very basic maintenance, but I am willing to bet that they won't be any more knowledgeable, on average, about the internal complexities of computers than they are today. Remember, savvy does not mean "highly technical knowledge." It just means that you know the ins and outs of daily use without much hassle.
-matthew
Wanting less work != lazy (Score:5, Insightful)
1: Pr0n
2: Games/entertainment
3: Communication
4: Doing our work for us.
Building machines to do your work for you does not make you lazy. Using the machine that someone else built also does not make you lazy. In both cases, the machine is freeing you from a mundane burden so you can do something else more useful with your time. Making efficient use of the tools available to you is not laziness.
Laziness is when you push your own responsibilities off on to other people, without paying them for it (like, you know, leaving your dirty dishes in the office sink so your coworkers can wash them for you). Yes, payment absolves you of laziness since it is ultimately an economically productive action in and of itself.
Paying a developer for a program that "just works" isn't lazy, it is efficient.
End users don't like a complicated interface. Why should they? The less complexity they have to deal with, the more time they have to do something else that is useful.
Yes, some amount of complexity is going to be unavoidable. That's a fact of life. Users will naturally resist it as much as they can, but ultimately accept what amount of it they cannot escape. This is not a vice on their part, it is just a path of least resistance.
If you can design an optimized balance between complexity, intuitiveness, and productive outcomes in your user interface, your product will do well.
It is that simple.
A problem of interface (Score:4, Insightful)
To start a car, you turn the key clockwise. To open a new file, you click with the mouse.
To stop a car, you push the brake pedal. To save a file, you click with the mouse.
To turn a car off, you turn the key counter-clockwise. To delete a file, you click with the mouse.
A significant factor in the difficulty of software use is that when we speak of "interfaces" we are almost always thinking one level lower than we should be - that is, no matter how nice and clean and useful your GUI is, the real interface for ~90% of software users is the mouse, keyboard, and monitor, regardless of what is displayed on it. In a car, turning the car on or off is an entirely different motion than making a right turn, which is different from putting on the brakes, which is different from putting down a window. We also have years of experience riding in cars and watching parents drive as children to teach us that "when Daddy does X, Y happens."
Computers are fundamentally different. Using only a mouse and keyboard and looking at a monitor is for all intents and purposes the only way to interact with the computer. Watching others use it to learn doesn't work nearly as well because the movements involved are much more precise, less varied, and their effects vary greatly depending on what state the computer is in: moving the mouse in a word processor moves the pointer around, while in Quake it'll change your view of your in-game surroundings.
Encouraging software makers to adhere to user-interface models helps a lot -- once the users are familiar with the model. Our current practices are inconsistent at best - the "desktop" metaphor exists only at the most basic level; once an application is open there is generally a half graphical, half menu-driven approach. From what I've seen, I think the Ribbon interface in Office 2k7 is an improvement, albeit an incremental one. I don't pretend to have a good model that will help ease-of-use, but I think the problem is on the decline anyway.
Those of us who grew up with computers do not have issues with the mouse/keyboard interface; we are familiar with it and the software models underneath. I have a feeling that as younger generations join the workforce, the interface problem will disappear or at least be greatly reduced. As long as some consistent GUI guidelines are followed, I believe that the metric for "ease-of-use" will evolve so that more complexity and control can be folded into the software without complaints from the users.
Re:Fine, not lazy (Score:3, Insightful)
From http://drivingrules.net/cdl/needaCDL.htm [drivingrules.net]
DO YOU NEED A COMMERCIAL DRIVER'S LICENSE? You need a CDL if you operate any of the following vehicles.
software sucks because it is complex to build (Score:2, Insightful)
How will programming language improvements help improve applications? Well, here is how:
1) by automating the task of writing multithreaded software. It is doable.
2) by automating persistence. Since writing to memory makes a page dirty in the swap file of the O/S, why do we need to write data to files anyway? Pages get written to the swap file periodically anyway, so there is no point in using I/O APIs. Persistence should be automatic.
3) if #2 happens, then the need for complex databases with data types incompatible with programming languages goes away. A database can simply be a linked list or a hash map (a rough approximation is sketched at the end of this comment).
4) if #1 and #3 happen, then there is no need for raw files any more. 'Files' would be typed, and thus easily managed by applications. Since files could be manipulated by any program, the application-centric paradigm would be a thing of the past, allowing for a much wider range of mini-applications to be programmed on a user-request basis.
5) by automating data updating using the time-of-request trick and thus saving us the burden of manually doing it for every piece of information.
6) by enabling garbage collection at all levels of computer activity, there would be no crashes or other unexpected behavior.
7) by using proper type systems that do not leave room for errors, much more time can be devoted to better application design.
8) by making distribution of computing tasks over a network transparent, application programming would be orders of magnitude easier.
As it stands right now, 80% of an application's code has nothing to do with the user requirements. Most of the code is there to provide the necessary infrastructure and abstractions for the really useful code to run. If we programmers got rid of this burden, then applications and user interfaces would improve tenfold, as we would not need to spend our time on things we were never supposed to.
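As a rough taste of points #2 and #3 (automatic persistence; the database as a plain hash map), Python's standard shelve module gets part of the way there -- the file name is made up, and this only approximates what true language-level persistence would look like:

    import shelve

    # A 'database' that is just a persistent hash map: no schema,
    # no SQL, no explicit file I/O in the application logic.
    with shelve.open("appdata") as db:
        db["preferences"] = {"theme": "dark", "font_size": 12}
        db["recent_files"] = ["letter.txt", "budget.ods"]

    # A later run of the program sees the same data automatically.
    with shelve.open("appdata") as db:
        print(db["preferences"]["theme"])   # -> dark

True orthogonal persistence would make even those with-blocks unnecessary, but the shape of the idea is the same: program objects that simply survive.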
One Idea (Score:2, Insightful)
Obviously the other interface will be the full-fledged nerd interface. This will have ALL the available options and will not baby the users through anything at all. This way, as users become comfortable with the program, or are just good with computers in general, they can switch over to a more advanced interface. This approach always seems like a good idea to me... you can have all kinds of features, but you can build in some defaults for the people that just want it to work. When they figure out how it works... they can dive in at their own prerogative.
Re:one example of too many (Score:4, Insightful)
If there is one thing that Windows does well, it's consistency. Cut, copy, and paste work the same in 99% of Windows programs. They do NOT work consistently across Linux programs, which often use different underlying graphics toolkits, different shells, and so on. Also, every single one of the command-line utils in any flavor of *nix has a unique syntax. That's not consistency.
I'm not trashing Linux or glorifying Windows, but let's give credit where it's due. Windows is remarkably consistent and that is probably one of the main reasons for its commercial success.
Re:Good, fast UI (Score:3, Insightful)
Yet VMS had versioning ages ago.
Re:one example of too many (Score:3, Insightful)
People need to learn to read and interact with a basic interface; if they can't, then they will get left in the dust, same as other dinosaurs.
Or, more likely, they'll learn to shop someplace other than Wawa, where they can get service from a real human being.
Peace be with you,
-jimbo
Re:Wanting less work != lazy (Score:4, Insightful)
1: Pr0n
2: Games/entertainment
3: Communication
4: Doing our work for us.
Well, yeah, but the same could be said for paper products.
I think that there are either:
- many other categories, such as art, research, etc.,
- or only one category, which I would call "Stuff".
Re:Perfect timing-Telling a story. (Score:2, Insightful)
The situation I recounted above is just one example of the problems with this system. And, despite what some people may think, these things *do* impact the production side.
The University of Wisconsin (Madison) went through a lengthy bidding process and chose the same vendor as the one we're using. One year and $27 million later, they dumped the whole thing in the trash because the users found the application too difficult to use.
Think about that from the developers' side: their lack of understanding of the users' requirements cost their company a $27 million contract. The application is amazingly powerful, and after 2½ years, we *still* find new features, so the "functionality" side of things isn't the issue. The "ease of use" *is*. The lack of understanding regarding this difference on the part of the developers (and the company as a whole) has cost them a major client. And it's harmed their reputation. If staff at one of the top engineering colleges in the world can't learn to use the product, what does that say to non-technical businesses who are looking at buying it?
As a (reasonably) tech-savvy user who's had 2½ years to learn the ins and outs of this application, I can say--with a high degree of confidence--that the UI sucks wet donkey balls through a bendy-straw. I love the power and depth of the application. I hate that, when looking at a data pool of hundreds (if not thousands) of records, it will only show me 4 at a time, and it requires a new query (via the web) to get the next 4.
Oh, what I wouldn't give to be able to sit down with the developers for a day! I'm talking about the simplest of changes: keeping the "function" buttons in the same order on every page so that I don't have to hunt around for them; giving me access to more than 4 records at a time; highlighting which of the 12 "comments" options actually have data in them; grouping relevant data together in the display; extending fields beyond 32 characters...
Absolutely none of this has to do with the "functionality" of the application. I'm not telling the developers and programmers how to do their jobs. I simply want to tell them how to *present* the data, and how to make the display and wording more intuitive (e.g., in one screen, "accept" means "leave it alone and do stuff to it later", while "adjust" means "accept what's displayed").
Re:Fine, not lazy (Score:1, Insightful)
Surely he meant something along the following lines:
- 1 car (four wheels), 1 person in it: 1/4 of a person's weight per wheel, and the associated stresses.
- 1 bus (four wheels, for the sake of argument), 10 people in it (driver + 9 passengers): 2 1/2 persons' weight per wheel, and the associated stresses.
I'm not an engineer, only did physics and maths to A Level, but it seems to me that the bus puts more stress on the road, in the above example, than the car. I could be missing something that is obvious to an engineer, and that is non-intuitive.