
Why Vista Took So Long

twofish writes, "Following on from Joel Spolsky's blog on the Windows Vista shutdown menu, Moishe Lettvin, a former member of the Windows Vista team (now at Google) who spent a year working on the menu, gives an insight into the process, and some indication as to what the approximately 24 people who worked on the shutdown menu actually did. Joel has responded in typically forthright fashion." From the last posting: "Every piece of evidence I've heard from developers inside Microsoft supports my theory that the company has become completely tangled up in bureaucracy, layers of management, meetings ad infinitum, and overstaffing. The only way Microsoft has managed to hire so many people has been by lowering their hiring standards significantly. In the early nineties Microsoft looked at IBM, especially the bloated OS/2 team, as a case study of what not to do; somehow in the fifteen year period from 1991–2006 they became the bloated monster that takes five years to ship an incoherent upgrade to their flagship product."
  • by October_30th ( 531777 ) on Monday November 27, 2006 @12:36PM (#17003586) Homepage Journal
    So, Microsoft has finally adopted the Linux development model?
    • by nmb3000 ( 741169 ) on Monday November 27, 2006 @12:58PM (#17003940) Journal
      So, Microsoft has finally adopted the Linux development model?

      To call it "the Linux development model" is somewhat arrogant, I think. It looks more like Microsoft is taking its time and putting in extra effort to make this the best Windows release to date, because the last thing they want is another Windows ME. That process applies to any software group, be it OSS, Apple, IBM, or, yes, Microsoft.

      To borrow a quote from Shigeru Miyamoto, "A delayed game is eventually good, a bad game is bad forever." I think that applies to pretty much any software project, though of course "good" is relative to the user.
      • by M1000 ( 21853 ) on Monday November 27, 2006 @01:20PM (#17004380)
        To borrow a quote from Shigeru Miyamoto, "A delayed game is eventually good, a bad game is bad forever." I think that applies to pretty much any software project, though of course "good" is relative to the user.
        Wow, Duke Nukem Forever® is sooo going to be good!!!
      • Re: (Score:3, Insightful)

        by diersing ( 679767 )
        "A delayed game is eventually good, a bad game is bad forever."
        You don't account for a delayed game being bad; just because something is delayed doesn't make it inherently good. And when you get a delayed-bad game, it's doubly bad because you, to quote my uncle Ray, "had to wait for that crap".
      • by operagost ( 62405 ) on Monday November 27, 2006 @02:20PM (#17005362) Homepage Journal
        To borrow a quote from Shigeru Miyamoto, "A delayed game is eventually good, a bad game is bad forever."
        Unless your name is Derek Smart.
      • by MidKnight ( 19766 ) on Monday November 27, 2006 @02:31PM (#17005504)
        Do you really think Microsoft has been delaying the Vista release in order to make it the best Windows release to date? That seems ignorant of the history of the project to say the least. Here's what I remember:

        The Longhorn project was officially started in 2001 (or possibly earlier). Longhorn initially had a number of OS-level features that would've put it on par with some other OSes of the same time period, had it been released in its original time window (late 2002, I believe). By my recollection of events, they originally started with the Windows 2000 Server codebase and attempted to bolt the new fancy features onto the side of it. The effort failed miserably.

        By 2003, Microsoft had realized that doing "add-on" development to Windows 2000 was a lost cause, so they literally called a do-over: this time they started with the WinXP Update 2 codebase. By the start of 2005, they were still having serious trouble getting all the new features to play well together, so they started removing them one by one. By 2006 all of the exciting new OS features had been removed, except for the new display API. This became the new feature set of the Vista release: eye candy.

        Feel free to correct my from-memory summary of the history of the project. But my point is that they weren't polishing the silverware until it shone brightly; they were just trying to get the dinner table set before it was time for breakfast.
        • by notaprguy ( 906128 ) on Monday November 27, 2006 @02:57PM (#17005976) Journal
          You're partly right. The "Longhorn reset" - when they decided to largely throw out years' worth of work - came about because they were overly ambitious. They were trying to re-write major portions of the platform. They realized that doing so was not only going to be too difficult and take too much time, but that customers didn't really want it. So they did a reset... significantly reduced the original ambitions of the project so they could get it done. Whether that's a good thing or a bad thing is in the eye of the beholder. In my mind it was probably good because, despite the rantings of some on /. and elsewhere, Windows actually works pretty well for most people and organizations. Re-writing the whole thing would have probably caused more harm than good. Just my personal two cents.
          • Re: (Score:3, Insightful)

            by MidKnight ( 19766 )

            Re-writing the whole thing would have probably caused more harm than good. Just my personal two cents.

            I wholly agree: from the external perspective, it sounded like a lot of the developers fell into the classic S/W development trap: re-write something for the sole reason that "We can make it better this time". Very rarely does this ever fit a customer's actual desires... but developers almost always want to do it anyway (myself included).

            I'd love to hear the internal perspective of how the 'reset' decision

            • by Maxo-Texas ( 864189 ) on Monday November 27, 2006 @03:41PM (#17006646)
              I agree that developers have a problem with this.

              But...

              From a manager's or new-development programmer's perspective, there is never an ROI on doing code cleanup and making code easier to maintain.

              As a maintenance programmer, though... I see faster, more stable, easier-to-maintain code come out of even the little things I manage to sneak in. A solid code cleaning can cut weeks or months off other projects on the same code base. From everything we've heard, the Windows source is a mess.

              What they probably need to do is spend 6 months on an architectural code cleanup. There would be no immediate ROI; however, every project from then on would benefit, so theoretically the ROI is infinite. :)

              As a maintenance programmer, I've frequently taken multiple pages of code out of programs without changing their functionality. In a large number of cases products are shipped by the development staff with dead code, goofy code, very inefficient code, redundant code, etc.

            • Unit Testing (Score:3, Interesting)

              MidKnight wrote:

              I wholly agree: from the external perspective, it sounded like a lot of the developers fell into the classic S/W development trap: re-write something for the sole reason that "We can make it better this time". Very rarely does this ever fit a customer's actual desires... but developers almost always want to do it anyway (myself included).

              Of course, the decision to not re-write and keep ugly legacy code itself (rather than just the API) isn't always the correct one either. The judgement of

        • by man_of_mr_e ( 217855 ) on Monday November 27, 2006 @03:23PM (#17006398)
          While you essentially have a somewhat correct position, all your facts and deductions are wrong.

          1) Longhorn's original schedule was mid-2003 (Whistler Server (eventually called Windows 2003) had been scheduled for 2002 for almost a year before XP shipped).

          2) Longhorn started with the XP codebase.

          3) The Longhorn reset started with the Windows 2003 SP1 codebase.

          4) The "Reset" happend in 2004, not 2003.

          5) It was not "add-on" development; it was essentially re-architecting the entire OS to be .NET based, something nothing was really ready for, and far too large a job.

          6) They didn't have problems "getting the features to play well with each other"; the features simply weren't ready, and wouldn't be ready for the OS ship. In the case of WinFS, it was simply an over-architected solution to a simple problem that was much better solved by simple indexing.

          7) Not "all" of the exciting features were removed. As I said above, WinFS turned out to be something that wasn't really needed or wanted. Monad was relagated to ship post launch, EFI turned out to be useless because no computers were using it in consumer PC's, and NGSCB (Palladium) was so highly criticised that nobody wanted it anyways.

          The features that were dropped were largely irrelevant or unwanted; meanwhile, the list of things that are new in Vista is huge. Check out the Wikipedia entry:

          http://en.wikipedia.org/wiki/Features_new_to_Windows_Vista [wikipedia.org]

          Now, that may still not be enough for a lot of people to upgrade, or they may not be features a lot of people really care about, but to claim that "all the exciting new OS features had been removed" is simply bogus.
  • by InsaneGeek ( 175763 ) <slashdot@@@insanegeeks...com> on Monday November 27, 2006 @12:37PM (#17003630) Homepage
    Every single organization seems to follow this same path: lean and mean at first, then fast and nimble, then large but featureful, then slow and bloated. The step after that tends to be jumping at any and all projects to see if anything sticks, sliding down a spiral that ends in a big change: either acquisition by another company or dramatic slashing of middle management and projects to refocus on the core. Unfortunately, I have yet to see a large organization that doesn't go down something like this path.
    • by neoform ( 551705 ) <djneoform@gmail.com> on Monday November 27, 2006 @12:49PM (#17003806) Homepage
      Maybe that's why id Software http://en.wikipedia.org/wiki/Id_Software [wikipedia.org] still only has 31 employees?
    • by Overly Critical Guy ( 663429 ) on Monday November 27, 2006 @12:53PM (#17003856)
      Microsoft needs a Steve Jobs-ian spring cleaning. For those unaware, when he returned to Apple, he called project leaders into a conference room and had them justify their existence. If they couldn't do it, the project was scrapped. The company was streamlined to focus on a few core product lines.
      • wrong Steve (Score:5, Funny)

        by ronanbear ( 924575 ) on Monday November 27, 2006 @01:40PM (#17004740)
        They can't afford that Steve.

        They're stuck with the other one
    • by Scarblac ( 122480 ) <slashdot@gerlich.nl> on Monday November 27, 2006 @12:58PM (#17003934) Homepage
      Nintendo [wikipedia.org]? It's 117 years old, and able to release a much hyped console.
      • by rlp ( 11898 ) on Monday November 27, 2006 @01:11PM (#17004214)
        Nintendo? It's 117 years old, and able to release a much hyped console.

        It's changed business models a few times. It started out as a playing card company. If you want to discuss a successful long-lived organization - look at the Catholic church. It's been around for two thousand years. It's got just a few layers of management and at the top 183 cardinals report to the Pope.
      • by nelsonal ( 549144 ) on Monday November 27, 2006 @01:13PM (#17004242) Journal
        Successful organizations arise anew from the ashes of their destruction (and you thought the Phoenix was just a cool story to scare children). Paragraphs 2 and 3 [wikipedia.org] in the middle-life section of the Nintendo article cover the rise, dilution, decline, and fall of Nintendo (which had diversified into taxis, love hotels, network TV, food, and other products), resulting in near bankruptcy before they hired Miyamoto and completely changed the company's focus.
    • by hey! ( 33014 ) on Monday November 27, 2006 @01:50PM (#17004898) Homepage Journal
      From the famous Halloween Memo II :


      The biggest future issue for Linux is what to do once they've reached parity with UNIX. JimAll used the phrase "chasing taillights" to capture the core issue: in the fog of the marketplace, you can move faster by being "number 2 gaining on number 1" than by being number 1.


      Conversely, when you are far enough past your competition, you have to decide where you want to go. Microsoft's business vision looks backwards (defensive) and sideways (leveraging its unique position in desktop OS and office software to gain entry into new markets and new revenue streams). They don't seem to be looking where they are going, because they're already where they want to be.
  • by suso ( 153703 ) * on Monday November 27, 2006 @12:41PM (#17003670) Journal
    Because it had to move through the digestive tract and on through the large intestine.
  • Why RTFA? (Score:5, Insightful)

    by Sloppy ( 14984 ) on Monday November 27, 2006 @12:43PM (#17003690) Homepage Journal
    Doesn't "the approximately 24 people who worked on the shutdown menu" already tell you everything you need to know?
    • Re:Why RTFA? (Score:5, Informative)

      by stevesliva ( 648202 ) on Monday November 27, 2006 @12:56PM (#17003896) Journal
      Doesn't "the approximately 24 people who worked on the shutdown menu" already tell you everything you need to know?
      No, it's worse than that:
      In small programming projects, there's a central repository of code. Builds are produced, generally daily, from this central repository. Programmers add their changes to this central repository as they go, so the daily build is a pretty good snapshot of the current state of the product.

      In Windows, this model breaks down simply because there are far too many developers to access one central repository -- among other problems, the infrastructure just won't support it. So Windows has a tree of repositories: developers check in to the nodes, and periodically the changes in the nodes are integrated up one level in the hierarchy. At a different periodicity, changes are integrated down the tree from the root to the nodes. In Windows, the node I was working on was 4 levels removed from the root. The periodicity of integration decayed exponentially and unpredictably as you approached the root so it ended up that it took between 1 and 3 months for my code to get to the root node, and some multiple of that for it to reach the other nodes. It should be noted too that the only common ancestor that my team, the shell team, and the kernel team shared was the root.
      Sounds like an even better way--better than adding even more people--to ensure that nothing good is ever invented outside of isolated development silos, and that bugs in code won't pop out until months after it was checked in.
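
      To see why depth in such a tree hurts, here is a toy Python model of that propagation latency (all delay figures are invented for illustration; this is not Microsoft's tooling):

      ```python
      # Toy model of change propagation in a hierarchy of repositories.
      # Delay figures are invented; only the shape of the math matters.
      import random

      LEVELS = 4  # the poster's node sat 4 levels below the root

      def days_to_root(levels=LEVELS):
          """Sum the integration delay at each level on the way up."""
          total = 0
          for level in range(levels):
              # Integration gets slower and less predictable near the root:
              # days at the leaves, weeks close to the root.
              base = 3 * 2 ** level                    # 3, 6, 12, 24 days
              total += base + random.randint(0, base)  # plus unpredictable slack
          return total

      trials = [days_to_root() for _ in range(10_000)]
      print(f"min {min(trials)} days, max {max(trials)} days")
      # Lands within the 1-3 month window the poster describes, and
      # reaching a sibling node costs a similar trip back down the tree.
      ```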
  • by carvalhao ( 774969 ) on Monday November 27, 2006 @12:46PM (#17003756) Journal
    ...uninstalled Vista instead? Now that would be a simple way to solve the matter.
  • Why not? (Score:5, Insightful)

    by The Living Fractal ( 162153 ) <`moc.liamtoh' `ta' `rratnanab'> on Monday November 27, 2006 @12:49PM (#17003788) Homepage
    People would want Vista if it were revolutionary. But you can't just sit down and say 'let's make something revolutionary' and then set up a timeline and claim to be able to create a revolution within that timeframe. Revolutions happen by accident if at all, not on purpose.

    So why hurry? For money? In my experience hurrying to make money never works out.

    TLF
  • Huh? (Score:4, Insightful)

    by voice_of_all_reason ( 926702 ) on Monday November 27, 2006 @12:49PM (#17003794)
    I don't get the new cult of never turning your PC off. If I'm away from my computer, it's usually for an extended period (i.e., a night when I'm not downloading crap, or a full day at work). Doesn't it make vastly more sense not to have the power-supply fan running for those 8 hours? Or the HD randomly going idle and then spinning up again? When I'm done, I shut the machine down and turn off the power strip. Interested in why others don't, however.
    • Re:Huh? (Score:5, Insightful)

      by n0rr1s ( 768407 ) on Monday November 27, 2006 @01:11PM (#17004210)
      A bunch of reasons:
      1. I like having my computers available instantly when I want to use them.
      2. Turning a machine on and off many times can be harmful, so it is said. Others say it's a myth. I don't know who to believe, but it seems feasible that this could be so.
      3. I run back-ups and virus checks during the night.
      4. The computers work on protein-folding during their idle time.
      5. My machines are in my bedroom, and they keep me nice and warm at night. Besides, there's nothing like the low purr of case fans to send you off to sleep :)
    • Re:Huh? (Score:5, Interesting)

      by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Monday November 27, 2006 @01:16PM (#17004294) Homepage

      Well, with a desktop you can suspend to disk and then come back rather quickly, with a power-off in between. This way you get the power savings, but you also get the fast "boot" time.

      But let's look at me. I had a Dell laptop at school. I'd use it at home. Turn it off. Take it to school. Turn it on for class. Use it. Turn it off. Take it to next class/home and repeat. Suspend was very iffy (and didn't help much in the battery life department).

      Then I got a PowerBook G4 (which I still use today). Run it at home. Close the lid. Take it to school. Open the lid. IT WAS READY. Within 3 seconds I could start working. When I'm done? No "Start->this->that" to be sure it worked. Just close the lid. I know some PCs worked that way; mine never did (reliably) that I remember. Next class/home? Open the lid. If it got low on power, I'd plug it in. My little laptop has had up to 3 months of uptime (broken mostly by major security updates that require restarts). I NEVER need to turn it off. The last time I did was when I was going on an airplane (I didn't know if they'd like it suspended during takeoff/landing). It boots relatively fast, but nothing compared to waking up and going to sleep.

      If you're a desktop user, I understand your comment. But as a laptop user who has had the pleasure of a Mac, a fast reliable suspend is a HUGE time saver.

      Now I'll note that some other people at my school had newer laptops that could suspend/resume just fine. But they took much longer. Some of them approached boot time length, some could do it in 20-30 seconds. No PC there matched my Mac (note: I never asked the few Linux users if they had it working on their laptops). I could suspend/resume my Mac 3 times with ease in the time it took the fastest XP users (and I'll ignore the "Click here to sign on" screen most of them didn't disable).

      • Re: (Score:3, Informative)

        by MBCook ( 132727 )
        I forgot to mention that suspend on my Mac takes next to no power. When I wake it up I've seen its little battery indicator say it has enough juice for 10 days or so. I seriously doubt that, but I've left it suspended all day with no AC adapter and seen next to no battery loss, so it may be possible. It's not as little power use as turning it off, but the time savings are enormous.
    • Re: (Score:3, Insightful)

      When I'm done, I shut the machine down and turn off the power strip. Interested in why others don't, however.

      Remote access. I'm pretty sure that's why we programmers don't have laptops at my office in the first place. Hard drive speed and the fact that we all have home computers are probably factors, as well.

      I do turn my home computer off, but my wife doesn't. She likes to have every web page she checks often constantly loaded, and ready for her when she sits down to her computer. I prefer to close any
    • Re:Huh? (Score:5, Insightful)

      by metlin ( 258108 ) on Monday November 27, 2006 @02:46PM (#17005760) Journal
      How about, "I like preserving a particular state my machine is in?"

      If I'm working on code, I have several editor windows, a compiler, and terminals open. Usually, if I have to shut down my computer, that implies closing all those windows and all those applications. Why should I do that when I could just have my computer hibernate or sleep?

      I mean, if I am on Linux, I have four active desktops with several browser windows, code and other things.

      Shutting down my system implies closing down everything and starting afresh. Why should I, when I can put my system to sleep and restart it with my windows and state preserved?
      • Re:Huh? (Score:5, Funny)

        by Tibor the Hun ( 143056 ) on Monday November 27, 2006 @06:31PM (#17009214)
        Ever try explaining the benefits of virtual desktops to a person who doesn't even think a tabbed browser is needed?
        That's one of the previous Unix admins I worked with.
        He was so clueless about his boxes that every week he'd say "I just wish I had windows servers instead."
  • by Paul Crowley ( 837 ) on Monday November 27, 2006 @12:49PM (#17003804) Homepage Journal
    From the article:

    "Windows has a tree of repositories: developers check in to the nodes, and periodically the changes in the nodes are integrated up one level in the hierarchy. At a different periodicity, changes are integrated down the tree from the root to the nodes. In Windows, the node I was working on was 4 levels removed from the root. The periodicity of integration decayed exponentially and unpredictably as you approached the root so it ended up that it took between 1 and 3 months for my code to get to the root node, and some multiple of that for it to reach the other nodes."

    Monotone, BitKeeper, git, bzr, and so on would all handle this situation efficiently and gracefully; all the repositories can sync to each other and none need be more than a few minutes out of date. Amazing that Microsoft's solution is so poor by comparison.
    • by Chirs ( 87576 ) on Monday November 27, 2006 @01:03PM (#17004018)
      It's not quite that simple.

      When you get beyond a certain stage of complexity, you need to change the mode of operation. You can't just have everyone submitting random changes.

      You have a subgroup of people that work with each other. When something is stable, it gets submitted to the integration branch. Periodically the integration branch is tested and verified that all the various things feeding into it interwork with each other. That stable version is then propagated into the other teams for them to work with.

      Linux uses a variation of this. People work off the mainline tree. Riskier stuff is in the -mm patchset, so if you want to play with it you need to sync from multiple places.

      The real problem with the scenario as described is likely in the repository organization, not the repository tool. There should have been a way to manually make a child stream that started with the stable version, then pulled in the latest changes from the kernel group, the tabletPC group, and the shell team. That would have allowed them all to work together and see what each group was doing.
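
      A minimal sketch of that kind of ad-hoc child stream, modeling each branch as a set of change IDs (branch names and changes are invented for illustration):

      ```python
      # Sketch of a "child stream" that tracks several upstream branches at
      # once. Branch names and change IDs are invented for illustration.

      stable = {"chg-001", "chg-002"}            # last known-good integration
      kernel = stable | {"kern-101"}             # kernel group's new work
      tablet = stable | {"tab-201", "tab-202"}   # tabletPC group's new work
      shell  = stable | {"shell-301"}            # shell team's new work

      def make_child_stream(base, *upstreams):
          """Start from the stable base, then pull in the latest changes
          from every upstream so the teams can see each other's work."""
          child = set(base)
          for branch in upstreams:
              child |= branch - base   # take only what's new in each upstream
          return child

      print(sorted(make_child_stream(stable, kernel, tablet, shell)))
      # Merge conflicts still need resolving by hand; the point is simply
      # that integration happens early and in one place, not months later
      # at the root of the tree.
      ```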
    • by bmajik ( 96670 ) <matt@mattevans.org> on Monday November 27, 2006 @01:53PM (#17004974) Homepage Journal
      I work on a different project (not Windows) and use the same repository system (not the same actual repository, of course).

      The branching/merging etc. in the toolset (which, btw, we didn't invent; we source-licensed it from someone else and have been continually improving it) is actually quite good.

      I don't know for a fact that the systems you mention aren't "up to the job", but how many multi-TB BitKeeper repositories are there? How many concurrent developers do any of these support? How many branches? How often are RI/FI done? How often do developers sync? What is the churn rate?

      I think you also don't understand the problem. The SCCS can RI and FI (reverse integrate and forward integrate, respectively; those are the terms we use for moving changes from a descendant branch upstream or moving fixes in a parent branch downstream) quickly and efficiently, but there are reasons not to. The '99 USENIX paper on the MS internal SCCS talks about some of these issues. For instance - what good is there in propagating a fix to every sub-tree or branch in a matter of minutes when it subtly breaks 80% of them?

      The issue with lots of branching isn't the SCCS. It is the gating that you say "should be possible". Not only is it possible - it's standard procedure. And as your code gets closer to the root of the tree, the quality gates get harder to pass through. The latency involved in turning the crank on a regression test in Windows is very high, and if you got it wrong, the latency of a build is high, etc. etc.

      So it's not the underlying SCCS, it's the processes built on top of it. Everyone hates process when it slows them down, and everyone wants more process when someone else breaks them. "We should put a process in place to prevent that guy from breaking me, but, uh, I should be exempt."

      As an aside, there are "fast track" branches/processes that let critical changes move through the tree very quickly.. on the order of a day or two from developers workstation to something that shows up in the next main-line build that an admin assistant could install.

      When I work with our repository, which is on the order of 10GB and a few hundred thousand files, a new branch creation takes a few minutes. Pulling down the repository takes hours. Our churn rate is such that, with a handful of developers, ~5 days' worth of changes can take 30 minutes to sync down.

      When I RI or FI, it happens only in my client view. This gives me a chance to do merge resolution, and then to build and run our regression tests before "infecting" the target branch with potentially bad code. If building takes on the order of hours (not minutes), you've got latency of hours above the actual RI/FI time. If running tests takes hours (not minutes), you've got more latency. If after a build + test cycle, you see an integration problem, now you've blown a day before you've even found the problem.

      I don't mean to say that there aren't problems; I'm just pointing out that, like most process problems, this is death by 1000 cuts. The SCCS isn't a key limitation - even for the Windows project (at least, not to my knowledge).

      What you read was that the SCCS sucks. What I'm hoping to illustrate is that the process is unwieldy at times, not due to any particular technology limitation.
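
      To make the "death by 1000 cuts" point concrete, here is a back-of-the-envelope tally of one integration attempt (the hour figures are guesses, not measurements of Microsoft's process):

      ```python
      # Back-of-the-envelope latency for one RI/FI attempt, following the
      # parent's description. All hour figures are guesses for illustration.

      merge_resolution = 2   # resolving conflicts in the client view
      build            = 4   # "building takes on the order of hours"
      regression_tests = 6   # "running tests takes hours"

      attempt = merge_resolution + build + regression_tests
      print(f"one clean attempt: ~{attempt} hours")

      # Finding an integration problem at the end means repeating the cycle:
      retries = 1
      print(f"with {retries} failed attempt: ~{attempt * (1 + retries)} hours")
      # That is days of wall-clock latency per integration, with no single
      # step being the version-control tool's fault.
      ```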
  • by MrCrassic ( 994046 ) <deprecated@@@ema...il> on Monday November 27, 2006 @12:52PM (#17003840) Journal

    I've read other blogs regarding Windows Vista, and from what I'm gathering, the primary reason Windows Vista took so long to complete was management. Philip Su argued that the gargantuan amount of code in Vista slowed its development dramatically; however, I think that strengthens my point and the point made in this article.

    However, I'm not terribly surprised that this happened to Vista. The higher execs at the company wanted Vista to be a revolution and had a clear and concise goal they wanted this operating system to achieve. To do this, from what I've read, they needed to form many more separate divisions inside the Windows division to concentrate on small parts of the operating system. This probably sounded like a good idea, but the problem was that none of their work was in sync; some divisions had more work completed than others. Furthermore, rifts within divisions such as the one present here spurred disagreement after disagreement, including the decision to switch the codebase of the OS to the one in Server 2003 (something that, from what I understand, should have been decided from the beginning). With all of this, confusion and miscommunication were inevitable.

    All in all, while I think Windows Vista is definitely more capable than Windows XP and a much-needed upgrade, I feel that the actual improvements of the operating system [wikipedia.org] do not warrant five years of development. Okay, the compositing manager, networking stack, and audio stack may have needed some time to complete, but five years? I am not a programmer, so my impression may not carry a lot of weight, but given that Linux- and UNIX-based systems have already included some of these "future technologies," it seems naive to deem this delay acceptable.

  • by hklingon ( 109185 ) on Monday November 27, 2006 @01:04PM (#17004062) Homepage
    Ok. I've been running Vista on one machine or another for a while.. since early beta.. and am now running the release version on my main machine. There are quite a few head-scratchers in here. I often tell my colleagues I'm like the little kid from The Sixth Sense.. except instead of dead people, I see bugs. Things that annoy the crap out of me that have been there for at least one, maybe two, versions of Windows.

    In the past, clicking through endless options and dialogs to configure things such as encryption certificates often left me wondering if this was really better than editing a single line in an easy-to-find text file.

    Start menu? Hardly ever used the damn thing. Shortcut keys, plus the quicklaunch bar off to one side with the 40 or so frequently used programs I use.

    Vista doesn't support dragging the quicklaunch bar off the taskbar and off to one side, because it was "confusing to end users." No one seems to have found a registry override as yet.

    Vista doesn't handle symlinks properly. It used to be "C:\Documents and Settings" but now in Vista it is C:\Users. I see a clever little "C:\Documents and Settings" shortcut on my C: drive. OOOOoo, is this a symlink? No? I get Access Denied when trying to double-click. Opening the path via an API, however, works fine. Go figure.
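
    That access-denied behavior is consistent with how Vista's compatibility junctions are ACL'd: directory enumeration is denied while traversal is allowed. A small sketch of the difference (Vista or later; the target file path is hypothetical):

    ```python
    # "C:\Documents and Settings" on Vista is an NTFS junction to C:\Users
    # whose ACL denies *enumeration* but permits *traversal*.
    import os

    legacy = r"C:\Documents and Settings"

    try:
        os.listdir(legacy)           # double-clicking in Explorer does this...
    except OSError as err:           # ...on Vista this fails with access denied
        print("enumeration fails:", err)

    # Opening a file *through* the junction only needs traversal, so it works:
    path = os.path.join(legacy, "Public", "Desktop", "example.txt")  # hypothetical
    try:
        with open(path, "rb"):
            print("opened through the junction:", path)
    except FileNotFoundError:
        print("traversal allowed; the file just doesn't exist:", path)
    ```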

    BUGS. Features? Half-Features? Call them what you want. I think most technical folks that have to work on this know these problems exist but architecturally or bureaucratically they are hard or impossible to fix.

    Often on XP, 2000, NT, and 95 I would hit ctrl-esc then R for Run and type frequently used programs into Run. I would say this is just an odd quirk about me and how I think menus take too long and too much work to do something, but now the Run area has been replaced with a little place you type stuff into, and through the magic of Windows Desktop Search it finds whatever you type in the area above that's normally occupied by program icons. The bug? You have to let it search. No matter what. Yeah, WTF? This works great on a home PC where you have maybe 10,000 files. Network drives? Oh no. You can't just type n:\ then hit enter. You have to physically wait a sec for it to pull up n:\ in the list of programs above the start menu, THEN hit enter. WOW, WHAT A GREAT FEATURE. No more ctrl-esc, n:\, enter for me. It is now ctrl-esc, n:\, wait.. wait.. wait.. enter. Otherwise I get some random program like Notepad. Or Flash. Or Firefox.

    On the one hand I can see how the start menu splaying itself all over your screen as you "drill down" to whatever the hell obscure program you need might be unappealing. On the other hand confining the entirety of all programs available to you to a 400x600 pixel window doesn't seem like a good fix.

    This is just the start menu. Don't even get me started on the new file explorer, which is the most half-baked area of Vista in my opinion. Does Slashdot have an option for submitting a rant and getting comments? I'm sure I could go on all day.

    I take all this as evidence that a lot of new features in Vista are based on good ideas.. new paradigms in UI design.. it just seems that the vast majority are implemented poorly at best and recklessly at worst. I would not expect this in 2006, when others are able to produce such polished and solid OSes. I would have to agree this seems like code rot from the inside out, probably due to the megalithic internal structure at MS.

    • Re: (Score:3, Insightful)

      by EvanED ( 569694 )
      Often on XP, 2000, NT and 95 I would hit control-esc then R for run and type frequently used programs into run

      Don't you have a Windows key? Win-R. One chord instead of two, and a less awkward stretch than ctrl-esc if you do it with one hand. The Windows key sucks when gaming, and if you're a Model M fan you won't have one, but those are the only two arguments I can think of against it, because it really is useful. I personally use Win-E (open Explorer) and Win-L (lock) routinely.

      Maybe win-r will still work f
    • Does Slashdot have an option for submitting a rant and getting comments?

      You're already using it. Go right ahead...

  • by plopez ( 54068 ) on Monday November 27, 2006 @01:06PM (#17004096) Journal
    Does anyone have any info on how the OS X team works? I mean, in a few years Apple did a complete paradigm shift from OS 9 to OS X at the OS level. It would be interesting to see what, if anything, they are doing better. Links or experiences would be nice.

    And while I am at it: the start menu requires input from the kernel team? WTF? This violates some very basic software design principles. The OS should just be basic services; the applications, including the UI, should ride on top of the kernel without really caring much about how the kernel works.

    I can see integration with the shell, but the kernel? It looks like the MS policy of tight OS integration with applications is biting them *hard*.
  • by sjonke ( 457707 ) on Monday November 27, 2006 @01:12PM (#17004218) Journal
    Wait until you read about the development of the "About" menu item!
  • by dtjohnson ( 102237 ) on Monday November 27, 2006 @01:23PM (#17004436)
    IBM is terminating the final remnants of their OS/2 staff at the end of December 2006, as OS/2 takes its last few agonized dying breaths. What's interesting, though, is that over the last 5 years there has been a skeleton crew of OS/2 people at IBM supporting the last few OS/2 customers, and this tiny crew was able to accomplish a lot to keep OS/2 updated and running on current hardware that a much larger crew probably could not have. They even added a lot of stuff that was never included in the last 'official' Warp 4 release, such as the logical volume manager, journaling file system, updated kernel for multicore AMD, USB 2.0 support, UDF DVD support, etc. In this case a small crew could do a lot more than a large staff, and the final dying remnants of the OS/2 business at IBM became more like the original tiny Windows group at Microsoft.
  • by Mattintosh ( 758112 ) on Monday November 27, 2006 @01:24PM (#17004454)
    The UI isn't all that terrible. Joel Spolsky is making a mountain out of a molehill. Look at the screenshot he gives in his article. Here's what I notice:

    1) There's a power button. That shuts things down fully. ("I am going away from my computer now, but I'd like the power to be really off.")
    2) There's a lock button. That leaves it running but keeps others out of your stuff. ("I am going away from my computer now.")
    3) There's a menu of choices if you care to look at it, and the button is much smaller than the other two and has a nondescript arrow icon on it which makes it much less attractive to non-techie users.

    Yes, his suggestions for combining lock with switch user and sleep with hibernate are good, but I don't think what they actually implemented is all that difficult to understand. His problem is that he's "one of us" and went looking for all the extra options. Most people will never click that arrow to make that menu appear. Ever. It's kind of unfair, even to Microsoft, to rag on something for being unfriendly to non-techies when non-techies are never going to even see it. Usually Joel Spolsky's observations are spot-on, but this time I'm going to have to give him an F for eFfort.
    • Re: (Score:3, Insightful)

      by sethadam1 ( 530629 )
      Most people will never click that arrow to make that menu appear.
      That's the worst kind of interface design. If most people will never click it, why display it so prominently? Some options, like "Administrative Tools" had to be intentionally toggled to even be displayed. If your starting assumption is that people won't use it, why show it at all?
    • by dr00g911 ( 531736 ) on Monday November 27, 2006 @03:30PM (#17006490)
      Don't let the icon fool you. The "power" button is a "deep sleep" button in disguise.

      You have to click three more times to find the true shut down or restart, and if you forget, you've got to wait around 90 seconds for the machine to hibernate and resume before you can actually shut down or restart properly.

      Don't get me started about some of the other UI choices made in just the start menu. The limited programs scrolling area, for example, takes a nasty interface and makes it utterly unusable for someone who has more than MS Office loaded.

      Hollow eye candy that makes the machine run like a slug; to add insult to injury, it's eye candy with horrid usability that takes upwards of 40% of my processing power and frame rate compared to XP SP2.
  • by EvilMonkeySlayer ( 826044 ) on Monday November 27, 2006 @01:55PM (#17005012) Journal
    Would you have a 30 page argument on the merits of sleep vs. hibernate...
  • by Lonewolf666 ( 259450 ) on Monday November 27, 2006 @02:15PM (#17005288)
    There was one quite interesting post on Moishe Lettvin's blog (emphasis mine):

    disclaimer - I was a manager at Microsoft during some of this period (a member of the class of 17 uninformed decision makers) although not on this feature, er, menu.

    The people who designed the source control system for Windows were *not* idiots. They were trying to solve the following problem:
    - thousands of developers,
    - promiscuous dependency taking between parts of Windows without much analysis of the consequences
    --> with a single codebase, if each developer broke the build once every two years there would never be a Longhorn build (or some such statistic - I forget the actual number)

    There are three obvious solutions to this problem:
    1. federate out the source tree, and pay the forward and reverse integration taxes (primarily delay in finding build breaks), or...
    2. remove a large number of the unnecessary dependencies between the various parts of Windows, especially the circular dependencies.
    3. Both 1 & 2
    #1 was the winning solution in large part because it could be executed by a small team over a defined period of time. #2 would have required herding all the Windows developers (and PMs, managers, UI designers...), and is potentially an unbounded problem.

    (There was much work done analyzing the internal structure of Windows, which certainly counts as a Microsoft trade secret so I am not at liberty to discuss it)

    Note: the open source community does not have this problem (at least not to the same degree) as they tend not to take dependencies on each other to the same degree, specifically:
    - rarely take dependencies on unshipped code
    - rarely make circular dependencies
    - mostly take dependencies on mature, stable components.

    As others have mentioned, the real surprise here is that they managed to ship anything.
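
    The quoted build-break statistic is easy to sanity-check with invented but plausible numbers (the real headcount and break rate are not public):

    ```python
    # Sanity check of the "build break" statistic with assumed numbers.
    developers      = 4000   # assumed order of magnitude for Windows
    breaks_per_year = 0.5    # one break per developer every two years
    workdays        = 250

    breaks_per_day = developers * breaks_per_year / workdays
    print(f"expected build breaks per day: {breaks_per_day:.0f}")   # ~8

    # With ~8 breaks landing daily in one shared codebase, the daily build
    # would almost never be clean; hence the federated tree, despite the
    # integration latency it introduces.
    ```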

    Now I'm not a Microsoft employee, but even as an outsider I've seen some hints that it might be the "promiscuous dependency taking" that has delayed Vista.

    1) Integration of Internet Explorer.
    Microsoft claims that IE and Windows are inextricably linked together, and at least for Windows 2000 and newer this seems to be true. For instance, if you type a URL into the address bar of the Windows Explorer, it will show you web pages. IMHO a stupid design; the web browser should be an application, not a fixed part of the GUI.

    2) The RPC service being responsible for things a "remote procedure call service" has no business handling.
    In August 2003, a worm called MSBlast spread by exploiting a buffer overflow in the DCOM RPC service (see Wikipedia, http://en.wikipedia.org/wiki/MSBlast [wikipedia.org]). At that time I, trying to be clever, thought:
    "I don't want anyone remotely executing stuff on my PC anyway. I'll just switch the service off and be fine".
    Lo and behold:
    After turning off the RPC service, various local functions were dead as well, including the Services menu in the control panel. I was lucky that I could reactivate the RPC service by manually editing the registry, or else I would have spent a day reinstalling.

    So it seems quite believable that Microsoft is choking itself by lack of discipline in designing Windows ;-)
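
    Option #2 in the quote above, removing circular dependencies, is the classic cycle-detection problem over a component graph. A minimal sketch on an invented dependency map (component names and edges are made up; Windows' real graph is not public):

    ```python
    # Find one dependency cycle by depth-first search over an invented map.
    deps = {
        "shell":  ["kernel", "ie"],
        "ie":     ["shell", "net"],   # shell <-> ie: a circular dependency
        "net":    ["kernel"],
        "kernel": [],
    }

    def find_cycle(graph):
        """Return one cycle as a list of nodes, or None if the graph is acyclic."""
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {node: WHITE for node in graph}

        def visit(node, path):
            color[node] = GRAY               # GRAY = on the current DFS path
            for dep in graph[node]:
                if color[dep] == GRAY:       # back edge: we found a cycle
                    return path[path.index(dep):] + [dep]
                if color[dep] == WHITE:
                    found = visit(dep, path + [dep])
                    if found:
                        return found
            color[node] = BLACK              # fully explored, provably cycle-free
            return None

        for node in graph:
            if color[node] == WHITE:
                found = visit(node, [node])
                if found:
                    return found
        return None

    print(find_cycle(deps))   # ['shell', 'ie', 'shell']
    ```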
  • by ElephanTS ( 624421 ) on Monday November 27, 2006 @02:20PM (#17005358)
    You can tell something is very wrong when the lamentable Zune software doesn't work properly (or at all) in the Vista beta. I mean, what the hell is going on? How could they be this far wrong?
  • by DECS ( 891519 ) on Monday November 27, 2006 @02:20PM (#17005360) Homepage Journal
    From RoughlyDrafted's Leopard vs Vista 5: Development Challenges [roughlydrafted.com]

    "In an almost spooky series of events, Microsoft has shadowed Apple's brush with death, making the exact same set of moves exactly ten years after Apple:

    • In the mid 90s, Microsoft rapidly built upon its past success with MS-DOS to establish Windows as a vast empire ...just as Apple used the success of the Apple II as a stepping stone to launch the Mac in the mid 80s.
    • From 1995 to 2001, Microsoft rapidly delivered advancements to its desktop Windows product ...just as Apple rapidly advanced the Mac System Software from 1985-1991.
    • In 2001, Microsoft began announcing technologies that would be released as part of Longhorn and later Blackcomb ...just as Apple described new technologies intended for Copland and Gershwin a decade prior.
    • From 2002-2006, Microsoft dropped features, changed plans, and started over several times in protracted efforts to ship Longhorn ...just as Apple had fumbled around with Copland ten years earlier.
    • By 2006, it was obvious that Microsoft's Longhorn was not going to live up to the hype, and would really be just a refresh of the existing Windows XP ...just as Copland had been gutted in 1996 and its salvaged remains delivered as the optimistically named Mac OS 8.
    • Microsoft outed Blackcomb as vaporware ...just as Apple admitted that Gershwin had never been anything but a list of deferred goals ten years earlier.
    What's next? The only difference between Apple and Microsoft is that today, in the final days of 2006, there is no equivalent of a 1996 NeXT waiting in the wings to swoop down and fix Microsoft's mess.
  • by Kelson ( 129150 ) * on Monday November 27, 2006 @04:13PM (#17007104) Homepage Journal
    And I thought that thing was complicated enough with just the Log Out/Switch/Sleep/Shutdown options! No wonder it's taking so long!

"One day I woke up and discovered that I was in love with tripe." -- Tom Anderson

Working...