Why Vista Took So Long

Posted by kdawson
from the changing-lightbulbs dept.
twofish writes, "Following on from Joel Spolsky's blog on the Windows Vista shutdown menu, Moishe Lettvin, a former member of the Windows Vista team (now at Google) who spent a year working on the menu, gives an insight into the process, and some indication as to what the approximately 24 people who worked on the shutdown menu actually did. Joel has responded in typically forthright fashion." From the last posting: "Every piece of evidence I've heard from developers inside Microsoft supports my theory that the company has become completely tangled up in bureaucracy, layers of management, meetings ad infinitum, and overstaffing. The only way Microsoft has managed to hire so many people has been by lowering their hiring standards significantly. In the early nineties Microsoft looked at IBM, especially the bloated OS/2 team, as a case study of what not to do; somehow in the fifteen year period from 1991–2006 they became the bloated monster that takes five years to ship an incoherent upgrade to their flagship product."
  • by Paul Crowley (837) on Monday November 27, 2006 @01:49PM (#17003804) Homepage Journal
    From the article:

    "Windows has a tree of repositories: developers check in to the nodes, and periodically the changes in the nodes are integrated up one level in the hierarchy. At a different periodicity, changes are integrated down the tree from the root to the nodes. In Windows, the node I was working on was 4 levels removed from the root. The periodicity of integration decayed exponentially and unpredictably as you approached the root so it ended up that it took between 1 and 3 months for my code to get to the root node, and some multiple of that for it to reach the other nodes."

    Monotone, BitKeeper, git, bzr, and so on would all handle this situation efficiently and gracefully; all the repositories can sync to each other and none need be more than a few minutes out of date. Amazing that Microsoft's solution is so poor by comparison.
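
    To put rough numbers on the quoted workflow, here is a toy model (all periods are illustrative assumptions, not actual Windows figures): if each of the 4 levels integrates upward on its own period, a change that just misses every window waits one full period per level, so the worst-case latency to the root is the sum of the periods.

```python
# Toy model: change-propagation latency in a hierarchical repository
# tree vs. peer-syncing repositories. Periods are illustrative
# assumptions, not actual Windows-build figures.

def tree_latency(periods_weeks):
    """Worst case: a change just misses each integration window and
    waits one full period at every level on the way to the root."""
    return sum(periods_weeks)

# Integration period grows toward the root (the post says it "decayed
# exponentially"): 1 week at the leaf, doubling at each of 4 levels.
periods = [1, 2, 4, 8]        # leaf -> root
print(tree_latency(periods))  # -> 15 (weeks, worst case, to the root)
```

    With peer-syncing repositories the worst case collapses to a single sync interval, which is the comparison being made above.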
  • by neoform (551705) <djneoform@gmail.com> on Monday November 27, 2006 @01:49PM (#17003806) Homepage
    Maybe that's why ID http://en.wikipedia.org/wiki/Id_Software [wikipedia.org] still only has 31 employees?
  • by Overly Critical Guy (663429) on Monday November 27, 2006 @01:53PM (#17003856)
    Microsoft needs a Steve Jobs-ian spring cleaning. For those unaware, when he returned to Apple, he called project leaders into a conference room and had them justify their existence. If they couldn't do it, the project was scrapped. The company was streamlined to focus on a few core product lines.
  • by BSAtHome (455370) on Monday November 27, 2006 @01:55PM (#17003894)
    Sounds like Parkinson's law [wikipedia.org]. Every large organisation eventually falls victim to it.
  • by Scarblac (122480) <slashdot@gerlich.nl> on Monday November 27, 2006 @01:58PM (#17003934) Homepage
    Nintendo [wikipedia.org]? It's 117 years old, and able to release a much hyped console.
  • by carlsbl (811222) on Monday November 27, 2006 @02:03PM (#17004030)
    These layers of complexity are added to many Vista functions: copying files, burning CDs, running applications that haven't received Microsoft's blessing, video/desktop settings... ugh! I give up. Give me my nice functional XP any day. XP was the best thing they have done so far. I don't think I'll say that about Vista. I'll miss XP when I am forced to upgrade. I am an IT implementer and I am going to personally kill any move to Vista for as long as I can. My users will hate it; they just want to do their jobs, not relearn how to use a computer. I ran Vista for 3 weeks and last weekend I hung it up and "upgraded" back to XP.
  • by Salsaman (141471) on Monday November 27, 2006 @02:06PM (#17004104) Homepage
    Microsoft is trying to take their time and putting in extra effort to make this release literally the best Windows release to date, because the last thing they want is another Windows ME.

    From the reviews I have read, Vista will be another Windows ME. Not that it bothers me; I have been free of Microsoft's trashy products since 1998.

  • by October_30th (531777) on Monday November 27, 2006 @02:08PM (#17004146) Homepage Journal
    PM would say, "the shell team disagrees with how this looks/feels/works" and/or "the kernel team has decided to include/not include some functionality which lets us/prevents us from doing this particular thing".
    That sounds a lot like the Linux development model.
  • by raehl (609729) <raehl311&yahoo,com> on Monday November 27, 2006 @02:09PM (#17004162) Homepage
    Newsflash: Large software project takes a long time.

    Did it occur to anyone that maybe, just maybe, a software project of this magnitude just takes this long to complete?

    What's to say it should take less time? Management schedule? Isn't that wrong by definition?
  • by Shisha (145964) on Monday November 27, 2006 @02:11PM (#17004200) Homepage
    By default, you mean the way it's _supposed_ to work? I don't know a single Windows user who uses the sleep feature on a regular basis, because it's not 100% reliable. With this sort of thing even 95% reliability is going to put you off. Of the few people I know who use a Mac laptop, I don't know a single one who doesn't just close the lid.
  • by rlp (11898) on Monday November 27, 2006 @02:11PM (#17004214)
    Nintendo? It's 117 years old, and able to release a much hyped console.

    It's changed business models a few times. It started out as a playing card company. If you want to discuss a successful long-lived organization - look at the Catholic church. It's been around for two thousand years. It's got just a few layers of management and at the top 183 cardinals report to the Pope.
  • Re:Huh? (Score:5, Interesting)

    by MBCook (132727) <foobarsoft@foobarsoft.com> on Monday November 27, 2006 @02:16PM (#17004294) Homepage

    Well with a Desktop you can suspend to disk and then come back rather quickly, with a power off in between. This way you get the power savings, but you also get the fast "boot" time.

    But let's look at me. I had a Dell laptop at school. I'd use it at home. Turn it off. Take it to school. Turn it on for class. Use it. Turn it off. Take it to next class/home and repeat. Suspend was very iffy (and didn't help much in the battery life department).

    Then I got a Powerbook G4 (which I still use today). Run it at home. Close the lid. Take it to school. Open the lid. IT WAS READY. Within 3 seconds I could start working. When I'm done? No "Start->This->that" to be sure it worked. Just close the lid. I know some PCs worked that way, mine never did (reliably) that I remember. Next class/home? Open the lid. If it got low on power, I'd plug it in. My little laptop has had up to 3 months of uptime (mostly due to major security updates that require restarts). I NEVER need to turn it off. The last time I did was when I was going on an airplane (didn't know if they'd like it suspended during takeoff/landing). It boots relatively fast, but nothing compared to waking up and going to sleep.

    If you're a desktop user, I understand your comment. But as a laptop user who has had the pleasure of a Mac, a fast reliable suspend is a HUGE time saver.

    Now I'll note that some other people at my school had newer laptops that could suspend/resume just fine. But they took much longer. Some of them approached boot time length, some could do it in 20-30 seconds. No PC there matched my Mac (note: I never asked the few Linux users if they had it working on their laptops). I could suspend/resume my Mac 3 times with ease in the time it took the fastest XP users (and I'll ignore the "Click here to sign on" screen most of them didn't disable).

  • Re:Huh? (Score:2, Interesting)

    by PrescriptionWarning (932687) on Monday November 27, 2006 @02:20PM (#17004388)
    waiting. People HATE waiting. Especially when it comes to machines, which are supposed to be so fast that they're always waiting for you to do something. Take a look at the iPod and mobile phones - wait time is never more than a few seconds (give or take a few seconds) even when the machine has been fully turned off. People expect this of PCs also.

    This is why Sleep and Hibernate are such big items. Personally I hate sleep mode, because hibernate completely turns the machine off and generally brings the machine back in about 10-15 seconds.

    However, the big issue is boot-up time from a non-hibernated system. Sure, on a fresh install of Windows you might see everything up and ready in 20 seconds, which should be fine for most, but once you've used the machine for almost a year and have all that extra crap running on your PC, that time sometimes stretches to over 2 minutes while you wait for all the services to start (on an average laptop).

    Obviously a couple minutes isn't bad to the patient type who can turn the PC on, walk off and do something else, then come back later... but the average user isn't patient and wants it a minute before they turned it on.

    So what is there to do about it? I say, work on Hibernation, make it the default and almost the only option, and make it fast. Hard drives are more than big enough to store the extra RAM, fast enough (especially with flash hybrids), and you can still unplug the PC without a fuss. If there's one other option that should be included, it should be an almost-idle mode. This way, instant messengers and downloaders can still work away while you're away, but since these only need 25% or so of the PC's processor speed, it should only require 25% of the power it usually draws. Have it slow down the hard drive speed as well and it should be set!
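
    A quick sanity check on the "fast enough" claim above (RAM size and disk speed are 2006-ish guesses, not measurements): hibernating writes at most a RAM-sized image sequentially to disk.

```python
# Back-of-envelope hibernate cost: the hibernate file is at most the
# size of RAM, written sequentially. Both figures are 2006-era guesses.
ram_mb = 1024            # 1 GB of RAM
disk_write_mb_s = 50     # laptop drive sequential write speed

seconds = ram_mb / disk_write_mb_s
print(round(seconds, 1))  # -> 20.5 seconds to dump RAM to disk
```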
  • by dtjohnson (102237) on Monday November 27, 2006 @02:23PM (#17004436)
    IBM is terminating the final remnants of their OS/2 staff at the end of December, 2006 as OS/2 takes its last few agonized dying breaths. What's interesting, though, is that over the last 5 years, there has been a skeleton crew of OS/2 people at IBM to support the last few OS/2 customers and this tiny crew was able to accomplish a lot of stuff to keep OS/2 updated and running on current hardware that a much larger crew probably could not have. They were even able to add a lot of stuff that was never even included in the last 'official' Warp 4 release such as the logical volume manager, journaling file system, updated kernel for multicore AMD, USB 2.0 support, UDF DVD support, etc. In this case, a small crew could do a lot more than a large staff and the final dying remnants of the OS/2 business at IBM became more like the original tiny Windows group at Microsoft.
  • by displaced80 (660282) on Monday November 27, 2006 @02:44PM (#17004814)
    Ah... you're describing my daily morning game of Russian Roulette.

    I stubbornly refuse to shut down using any other manner than the one I find most convenient: Hibernate.

    It'll work fine for a while. Long enough for comfort to begin to set in. But there's always that little increase in my pulse-rate when I drop my laptop into the docking station on my desk and hit the power button. The Resuming Windows bar moves across the screen. Fingers are crossed, and I turn to face Mecca whilst gripping a rabbit's paw for good luck. The screen goes black. Will my desktop appear? The wind's northerly, so the chances are good. Woohoo! It's worked! I've dodged the bullet this time...

    However, every now and then... not often enough for me to abandon hibernation, but just often enough to keep things interesting... The machine will sit with the Resuming Windows bar full, or at the black screen after the bar... and go no further. I'll go get a coffee and sometimes it'll go through to the desktop. But then there's the times when it'll just be stuck there. Hold the power button, turn it back on, tell it not to delete restoration data and try again... No joy? Shut down again. Pull the USB connections and try again. Fails? Pull the ethernet cable and try again. No luck? Try plugging things into different USB ports...

    Eventually, it'll work. But sometimes this feature is just plain borked. I'm completely unable to diagnose exactly what's causing it. Sometimes the saved session will have no apps open - just the bare desktop - and it'll still fail to resume. Totally random as far as I can see, which suggests it's something deep down in the crapitude of Windows' internals that's locking... something freaky going on with device initialisation, I suppose.

    Of course, being a Windows dev who's frequently eye-deep in XP's guts, I look at these problems the way a father whose wayward son just won't get a clue would. It's just how it is. But... from an end-user point of view, if you're going to have a suspend and resume feature (be it sleep, hibernate, etc.) it must work right 99.9999999% of the time. It simply must -- it's a critical time for the user's data, and the feature must behave as described. Either that, or the description of the feature should carry a caveat right there in the UI that activates it.

  • by hey! (33014) on Monday November 27, 2006 @02:50PM (#17004898) Homepage Journal
    From the famous Halloween Memo II :


    The biggest future issue for Linux is what to do once they've reached parity with UNIX. JimAll used the phrase "chasing taillights" to capture the core issue: in the fog of the market place, you can move faster by being "number 2 gaining on number 1" than by being number 1.


    Conversely, when you are far enough past your competition, you have to decide where you want to go. Microsoft's business vision looks backwards (defensive) and sidewards (leveraging its unique position in desktop os and office software to gain entry into new markets and new revenue streams). They don't seem to be looking where they are going, because they're already where they want to be.
  • by bmajik (96670) <matt@mattevans.org> on Monday November 27, 2006 @02:53PM (#17004974) Homepage Journal
    I work on a different project (not windows) and use the same repository system. (not the same actual repository, of course)

    The branching/merging etc. in the tool set (which BTW we didn't invent; we source-licensed it from someone else and have been continually improving it) is quite good, actually.

    I don't know for a fact that the systems you mention aren't "up to the job", but how many multi-TB BitKeeper repositories are there? How many concurrent developers do any of these support? How many branches? How often are RI/FI done? How often do developers sync? What is the churn rate?

    I think you also don't understand the problem. The SCCS can RI and FI (reverse integrate and forward integrate, respectively -- those are the terms we use for moving changes from a descendant branch upstream or moving fixes in a parent branch downstream) quickly and efficiently, but there are reasons not to. The '99 USENIX paper on the MS internal SCCS talks about some of these issues. For instance - what good is there in propagating a fix to every sub-tree or branch in a matter of minutes when it subtly breaks 80% of them?

    The issue with lots of branching isn't the SCCS. It is the gating that you say "should be possible". Not only is it possible - it's standard procedure. And as your code gets closer to the root of the tree, the quality gates get harder to pass through. The latency involved in turning the crank on a regression test in Windows is very high, and if you got it wrong, the latency of a build is high, etc. etc.

    So it's not the underlying SCCS, it's the processes built on top of it. Everyone hates process when it slows them down and everyone wants more process when someone else breaks them. "We should put a process in place to prevent that guy from breaking me, but uh, I should be exempt".

    As an aside, there are "fast track" branches/processes that let critical changes move through the tree very quickly... on the order of a day or two from a developer's workstation to something that shows up in the next main-line build that an admin assistant could install.

    When I work with our repository, which is on the order of 10GB and a few hundred thousand files, a new branch create takes a few minutes. Pulling down the repository takes hours. Our churn rate is such that, with a handful of developers, ~5 days' worth of changes can take 30 minutes to sync down.

    When I RI or FI, it happens only in my client view. This gives me a chance to do merge resolution, and then to build and run our regression tests before "infecting" the target branch with potentially bad code. If building takes on the order of hours (not minutes), you've got latency of hours above the actual RI/FI time. If running tests takes hours (not minutes), you've got more latency. If after a build + test cycle, you see an integration problem, now you've blown a day before you've even found the problem.
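
    The compounding described above can be tallied directly (all durations are assumptions for illustration, not measured figures):

```python
# One reverse-integration (RI) attempt as described above: merge,
# build, test. If the tests catch a break, the cycle repeats after a
# fix. All durations are assumptions for illustration.
merge_h, build_h, test_h = 0.5, 4, 6   # hours

one_attempt = merge_h + build_h + test_h
print(one_attempt)   # -> 10.5 hours for a clean attempt

fix_h = 1
two_attempts = one_attempt + fix_h + one_attempt
print(two_attempts)  # -> 22.0 hours: one failure and "you've blown a day"
```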

    I don't mean to say that there aren't problems; I'm just pointing out that like most process problems, this is death by 1000 cuts. The SCCS isn't a key limitation - even for the Windows project (at least, not to my knowledge).

    What you read was that the SCCS sucks. What I'm hoping to illustrate is that the process is unwieldy at times, not due to any particular technology limitation.
  • by k12linux (627320) on Monday November 27, 2006 @03:25PM (#17005424)
    A true geek probably wouldn't bother with something that took 2-3 mouse-clicks to do if there was a keystroke-combo that did the job. The problem is the semi-geek who wants to have every option available but can't remember something slightly esoteric like "hold shift when you click the power button icon" to access those "advanced" features.

    To appease this type of geek wannabe, MS makes all 7 options available via the shut down menu. However, if the "power" and "lock" icons do what they seem they would do, then what's the beef? Does the fact that you *can* click the little arrow to access 5 more options cause convulsions in the techno-illiterate crowd? I have more of an issue with the "on/off" icon if the point is to make things easy for non-geeks, since many have no clue what that means.
  • Vista = ME 2 (Score:1, Interesting)

    by shashark (836922) on Monday November 27, 2006 @03:33PM (#17005528)
    Vista - as almost every Stewie-loving, Brian-beating kid would know - IS Windows ME 2.
  • by ryanw (131814) on Monday November 27, 2006 @03:49PM (#17005818)
    The thing that is interesting to me is looking at not just the market share, but the market sales.

    http://arstechnica.com/news.ars/post/20061019-8028.html [arstechnica.com]

    It goes DELL, HP, GATEWAY, then APPLE.

    People tend to buy into the whole branding thing. People don't split cleanly into Mac or PC users. People are either a DELL user or GATEWAY or HP or APPLE or IBM or Toshiba or ETC. Apple has always been the leader in the creative world. Technology today is allowing even average people to become more creative. With more average people thinking they're creative, this will drive people to buy the 'creative platform of choice'. A Mac.

    It would seem a few years ago I was the only Mac user in my group of friends. It now seems every single one of my friends has either a Mac in ADDITION to their PC or has exchanged their PC for a Mac. These are interesting times. I only HATE Microsoft because I used to lead a life of tech support for my job and friends and family. Friends and family always used to come to me to help them with their myriad of problems. Every incompetent Windows user has a somewhat savvy techie behind them formatting their drive, installing Windows, cleaning up viruses, installing programs, fixing things, etc. I got sick of being that person. I tell people now to buy Macs. They buy a Mac and generally just use their computer to get things done. No more fuss.

    [rant]

    If Microsoft can ever prove to me that their applications can do what they promise then I will jump on the Microsoft bandwagon. Prove to me that updates will no longer crash my machine; prove to me that re-installing my operating system (which seems to occur frequently with Microsoft) isn't going to take 2 hours of loading and 4 more hours of installing fixes -- patches -- updates -- combined with 35 reboots. It's the reboots that are so dang painful. To click on a patch and watch all the other patches you just clicked go 'grey' and have a dialog box pop up that says, "Sorry, this patch has to be installed individually." BUT EVERY PATCH has to be installed individually. What the hell? Prove to me that your operating system can run for 2 years without having to be reinstalled for some random reason to get the speed of the machine back to what it used to be.

    [sigh] . . . [/sigh]

    My beef with Microsoft is real and valid. I have now been running a Mac exclusively for just around 4 years. My latest Mac is about 1 or 2 years old. I got it from the Apple store pre-loaded with OS X 10.4. I have yet to re-install it. It has run perfectly, just as expected, this whole time. Sure, a Mac has its quirks, but if you're sick of Microsoft, the Apple quirks are much fewer and farther between than dealing with Microsoft's.

    [/rant]
  • by oaklybonn (600250) on Monday November 27, 2006 @04:00PM (#17006034)
    I used to work at Apple, in the OS and frameworks groups.

    There is a master "train" for a release; projects that don't change are "forwarded" to that train, meaning no source changes are required. When a project needs to be submitted with a change for the new release, a new "view" is created for its specific changes. Every few days a build is produced, sometimes using previously compiled bits from the old "train", sometimes as a full world build (which can take several days), but otherwise building all the latest submissions.

    Then there's a fairly labor-intensive "integration" phase where the built bits are all put on a box and booted. If a "quicklook" QA process shows that the build is hoarked, the integrator goes and pesters the submitters of the latest project that was submitted and gets them to fix it. (Some percentage of the time the new code has exposed a bug elsewhere; regardless, the project that is the proximal cause of the failure is rolled back to the previous revision, in anticipation that all the projects that need to rev will be submitted at once.)

    The whole thing is set up through symlinks via NFS, so if you want to see the latest version of any piece of code in the system (modulo those projects that are "locked down" for security issues) you can just take your release name, append the build number, and you've got the source code, symbol'd binaries and build log *for any release* at your fingertips.

    When a new build comes out, you just do a clean install. It takes about two hours on the internal network, so typically you pull the disk image and slam it onto a FireWire drive (usually you can bum a disk with the image already grabbed from a teammate) and do a full install in 15 minutes. I can't imagine having to spend a day (as some other poster mentioned) setting up a machine...

    Most projects have 3 or 4 contributors. In many cases an entire framework is the responsibility of a single person (and he or she may actually own several small frameworks). Lots of small projects produce cleaner interfaces that lead to fewer dependencies. (Of course there are dependencies, and circular ones, but these are kept to a minimum.) Projects are encouraged to use public API from other projects, rather than SPI or other project internals. If there's something useful enough for some other project to use, it's first made into SPI for internal consumption, with the goal that developers will eventually be able to use it through a public API.
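
    The "release name + build number" lookup described above can be sketched as a tiny path convention. The directory layout and names below are invented for illustration; the real internal layout isn't public.

```python
# Hypothetical sketch of the "symlinks via NFS" convention described
# above: release name + build number -> where sources, symbol'd
# binaries, and build logs live. Layout and names are invented.
import posixpath

NFS_ROOT = "/nfs/builds"  # assumed mount point

def artifact(release, build, kind):
    # kind: "src", "symbols", or "logs" in this toy layout
    return posixpath.join(NFS_ROOT, release, build, kind)

print(artifact("Tiger", "8A428", "src"))
# -> /nfs/builds/Tiger/8A428/src
```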

    Most groups don't have dedicated QA by the way - the engineers are responsible for their code, and everyone is generally just very smart about what they're doing.

    As to this start menu problem: the entire UI team is about 5 individuals, plus Steve Jobs and Scott Forstall - and they're likely to say "That's fucking stupid, just do this," and boom(tm), the decision has been made, the product ships, and life goes on.
  • by dr00g911 (531736) on Monday November 27, 2006 @04:30PM (#17006490)
    Don't let the icon fool you. The "power" button is a "deep sleep" button in disguise.

    You have to click three more times to find the true shut down or restart, and if you forget, you've got to wait around 90 seconds for the machine to hibernate and resume before you can actually shut down or restart properly.

    Don't get me started about some of the other UI choices made in just the start menu. The limited programs scrolling area, for example, takes a nasty interface and makes it utterly unusable for someone who has more than MS Office loaded.

    Hollow eye candy that makes the machine run like a slug, and to add insult to injury it's eye candy with horrid usability that takes upwards of 40% of my processing power and frame rate compared to XP SP2.
  • by Maxo-Texas (864189) on Monday November 27, 2006 @04:41PM (#17006646)
    I agree that developers have a problem with this.

    But...

    There is never an ROI on doing code cleanup and making it easier to maintain from a manager / new development programmer's perspective.

    As a maintenance programmer, though... I see faster, more stable, easier-to-maintain code out of even the little things I manage to sneak in. A solid code cleaning can cut weeks or months off of other projects on the same code base. From everything we've heard, Windows source is a mess.

    What they probably need to do is spend 6 months and do an architectural code cleanup. There would be no immediate ROI; however, every project for the rest of time would benefit, so theoretically their ROI is infinite. :)

    As a maintenance programmer, I've frequently taken multiple pages of code out of programs without changing their functionality. In a large number of cases products are shipped by the development staff with dead code, goofy code, very inefficient code, redundant code, etc.

  • by Anonymous Coward on Monday November 27, 2006 @05:28PM (#17007322)
    The code isn't that big of a mess. Really. Windows does need to be refactored to cut down on strong coupling of components to make for faster development IMHO, and that's something that's being worked on.
  • Unit Testing (Score:3, Interesting)

    by buckhead_buddy (186384) on Monday November 27, 2006 @05:40PM (#17007514)
    MidKnight wrote:
    I wholly agree: from the external perspective, it sounded like a lot of the developers fell into the classic S/W development trap: re-write something for the sole reason that "We can make it better this time". Very rarely does this ever fit a customer's actual desires... but developers almost always want to do it anyway (myself included).

    Of course, the decision to not re-write and keep ugly legacy code itself (rather than just the API) isn't always the correct one either. The judgement of what is "best" is tough for managers and coders. Though I've only started to listen to the "pragmatic" arguments for about a year and a half or so, the best thing I've found to answer this question is unit testing. And I don't particularly like writing unit tests.

    If there are unit tests that have already been written, I can see just what sort of implementation problems happened in the past. When I want to re-write code, I'm usually thinking in the stratosphere about how the new approach will make everything better, but looking over unit tests written by other developers often brings me back down to earth and I see that my perfect solution may wind up retreading similar problems in an unfamiliar way. That's even more important when the customer sees an old problem re-surface in new code: they've already been down this road and they'll be out for blood that we're backpedaling and charging them for regression rather than development.

    Since unit tests are a new practice at my work, they haven't always been written for the legacy code needed to make this judgement. In that case, I find that forcing myself to sit down and write some unit tests is a good thing. Though writing them is on par with my desire to floss, I have to admit it is a good practice. It scratches my itch to actually dig into the details and write code. After I've really looked at the failure possibilities, it really helps me make a better decision about whether to rewrite or not. And whichever way we decide now, the tests remain useful for whatever decision is made later.
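
    The point about tests as a record of past failures can be illustrated with a minimal sketch (the function and the shipped bug are hypothetical):

```python
# Hypothetical regression test: a bug that shipped once gets a test
# pinned to it, so a rewrite can't silently reintroduce it. The
# function and the bug are invented for illustration.
import unittest

def parse_version(s):
    """Parse 'major.minor' into a tuple of ints. An earlier version
    split on '.' directly and crashed on inputs like 'v1.0'."""
    return tuple(int(part) for part in s.lstrip("v").split("."))

class RegressionTests(unittest.TestCase):
    def test_plain_version(self):
        self.assertEqual(parse_version("1.0"), (1, 0))

    def test_leading_v_regression(self):
        # Shipped bug: 'v1.0' used to raise ValueError. Never again.
        self.assertEqual(parse_version("v1.0"), (1, 0))

# Run the suite programmatically so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # -> True
```

    The second test exists only because the bug shipped once; any rewrite has to keep it green.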

    I am curious whether the testing practices for major products like Vista and OS X are standardized and used. I know Microsoft has a huge testing infrastructure, but I wonder if the delays in Vista have been due to too much influence from the testers, too little, or no net effect at all. I was under the impression that Apple's testing was much better, but some major, obvious regressions lately make me think that perhaps Apple simply has a smaller "legacy" of custom code to support. Do big companies even have sound testing practices and require their use?

    As a final note though, I prefer to write unit tests on other people's code since mine, of course, never needs them :-)

  • Re:Why RTFA? (Score:3, Interesting)

    by steelfood (895457) on Monday November 27, 2006 @05:55PM (#17007804)
    Actually, these are all peripheral problems. These systems could be better, but there are reasons they exist--mainly so that someone doesn't check something into the code base that causes the machine to eat itself, essentially halting the daily testing for a whole day or more while the sysadmins go about restoring all the test systems with the previous day's ghost.

    The main problem is in this line:
    Twenty-four of them were connected sorta closely to the code, and of those twenty four there were exactly zero with final say in how the feature worked. Somewhere in those other 17 was somebody who did have final say but who that was I have no idea since when I left the team -- after a year -- there was still no decision about exactly how this feature would work.


    Anyone with any amount of organizational management experience will tell you that in order for things to happen efficiently, there has to be someone with final say, for better or worse. Decisions cannot efficiently be made by committees, much less by the democratic-sounding process that the blog outlines. Someone somewhere has to put his foot down and say, "Yes, these are the ideas that have been put forth, these are the arguments for and against those ideas, and this is what we're going to do." It doesn't have to be management. It could be one of the developers. It could be the GUI designers. It could be a tester. But it has to be one person. And the decision has to stick. If upper management doesn't like the resulting conclusion, too bad; they should've picked someone else. It's only when the early testers start to complain that it's worth a second look or a redo.

    Nearly as important: there are 47 people having a say on this one thing. Why? There should be at most five people working on the design and implementation of any particular feature. For this one, it should be four: one usability person, one GUI designer, one developer from the kernel, and one developer from the start menu team. For features that span more of the OS, several lead developers and maybe a manager to take care of timetables and the like. But always designate one person to make the final decision!

    Yes, from a development perspective, organizing the whole repository as a tree structure has its inefficiencies. But the crux of this particular problem is an organizational one. Having changes propagate quickly isn't going to do any good when the feature hasn't been implemented because the design isn't cemented, or when the feature's implementation changes every few days. In fact, having changes propagate slowly would be better if features tend to get constantly redesigned.
  • by DECS (891519) on Monday November 27, 2006 @06:25PM (#17008314) Homepage Journal
    Oh but you forget the decade of slack. Apple in *1995* was making craploads of money, had lots of cash in the bank, and was doodling around with profitless new hardware projects such as the Newton, a TV set top box, hardware licensing and the Pippin console. Win95 didn't come out until the final days of the year, and everyone at Apple was joking about how Win95 was Mac '89.

    Today, Microsoft is similarly loaded, and Windows is under fundamental attack from POSIX, both with Mac OS X on the desktop and Linux on the server. Microsoft similarly has been doodling around ineffectually with a series of failures: the Xbox barely outsold the GameCube, the Xbox 360 couldn't even outsell the 5-year-old PlayStation 2 this last year [roughlydrafted.com] (6 million vs 11 million). Everything else, from MSN TV to WinCE PDAs (a dead market with no growth) and smartphones (Microsoft has 5% of that market with no hope of gaining against Symbian and Linux) to Tablet PCs and Origami, can't be sold at any price. [roughlydrafted.com]

    Microsoft is on deathwatch, and you're complaining that Apple is making record profits on the iPod, a product Microsoft's PlaysForSure couldn't touch in the last five years? Apple sold 60 million iPods, and that's bad? It's all a marketing ruse? Why can't Microsoft spin marketing? Why can't they deliver a consumer electronics product anyone wants? The Zune is a huge joke. $36 Billion should buy something, right?

    Is Microsoft paying you to shill, or are you supporting a failed dinosaur--working to poke the world in the eye--on your own time, just for fun?

    Why Microsoft Can't Compete With iTunes [roughlydrafted.com]

    Apple and Microsoft in Platform Crisis: The Tentacles of Legacy [roughlydrafted.com]

  • by Espectr0 (577637) on Monday November 27, 2006 @08:25PM (#17009852) Journal
    1) There's a power button. That shuts things down fully. ("I am going away from my computer now, but I'd like the power to be really off.")

    The funny thing is that the power button does not turn off the machine. It actually makes it sleep. A symbol known worldwide for turning off computers gets used to put machines to sleep.
  • Re:Huh? (Score:2, Interesting)

    by tknd (979052) on Monday November 27, 2006 @09:24PM (#17010386)

    I used to hibernate my desktop machine at work because the IT department forgot to disable it. I thought it was great: I had the benefits of turning off the computer as well as saving the state of my desktop. Add to that the fact that the boot time was much faster than a cold boot, and I thought it was a huge benefit.

    Later they disabled hibernation and now I can only shut down or lock the machine. Well, so much for saving electricity. Now I leave it on most of the time. They probably have good reasons (startup scripts and such), but if there were functionality in hibernate to meet their needs, I think hibernation could easily save the world lots of money, especially since these Windows boxes seem to gradually start up slower for some reason. It takes me a good 5 minutes to start up at work, and I can't do a single thing about it except go through the hassle of asking for a new machine. At home, of course, it's a totally different story.
