Why Vista Took So Long
twofish writes, "Following on from Joel Spolsky's blog on the Windows Vista shutdown menu, Moishe Lettvin, a former member of the Windows Vista team (now at Google) who spent a year working on the menu, gives an insight into the process, and some indication as to what the approximately 24 people who worked on the shutdown menu actually did. Joel has responded in typically forthright fashion." From the last posting: "Every piece of evidence I've heard from developers inside Microsoft supports my theory that the company has become completely tangled up in bureaucracy, layers of management, meetings ad infinitum, and overstaffing. The only way Microsoft has managed to hire so many people has been by lowering their hiring standards significantly. In the early nineties Microsoft looked at IBM, especially the bloated OS/2 team, as a case study of what not to do; somehow in the fifteen year period from 1991–2006 they became the bloated monster that takes five years to ship an incoherent upgrade to their flagship product."
The modern DVCSs would all do better (Score:4, Interesting)
"Windows has a tree of repositories: developers check in to the nodes, and periodically the changes in the nodes are integrated up one level in the hierarchy. At a different periodicity, changes are integrated down the tree from the root to the nodes. In Windows, the node I was working on was 4 levels removed from the root. The periodicity of integration decayed exponentially and unpredictably as you approached the root so it ended up that it took between 1 and 3 months for my code to get to the root node, and some multiple of that for it to reach the other nodes."
Monotone, BitKeeper, git, bzr, and so on would all handle this situation efficiently and gracefully; all the repositories can sync to each other and none need be more than a few minutes out of date. Amazing that Microsoft's solution is so poor by comparison.
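A rough back-of-the-envelope sketch of the difference, as Python rather than a real SCCS: the parent post says integration periodicity grows as you approach the root, so a change's worst-case trip to the root is the sum of the per-level periods. The specific day counts below are illustrative assumptions, not Microsoft's actual numbers.

```python
# Hypothetical sketch: worst-case latency for a change to climb a
# 4-level repository tree, where each level integrates upward on its
# own period. All day counts are illustrative assumptions.

def worst_case_days_to_root(integration_periods_days):
    """A change checked into a leaf waits up to one full period at
    each level before moving one level up, so the worst case is
    simply the sum of the periods along the path to the root."""
    return sum(integration_periods_days)

# Leaf -> root: integration every 2, 7, 21, and 45 days (assumed,
# mimicking periods that grow toward the root).
tree_latency = worst_case_days_to_root([2, 7, 21, 45])

# A DVCS mesh where every repository syncs with its peers daily has
# the same number of hops, but the period per hop stays flat.
mesh_latency = worst_case_days_to_root([1, 1, 1, 1])

print(tree_latency)  # 75 days, inside the "1 to 3 months" quoted above
print(mesh_latency)  # 4 days
```

With those assumed periods the tree model lands squarely in the "1 and 3 months" range the parent post describes, while flat peer syncing stays within days.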
Re:Welcome to inevitability (Score:5, Interesting)
This is why I upgraded to XP from Vista (Score:2, Interesting)
Re:Linux development model? (Score:1, Interesting)
From the reviews I have read, Vista will be another Windows ME. Not that it bothers me; I have been free of Microsoft's trashy products since 1998.
Re:Linux development model? (Score:2, Interesting)
Slashdot: Stating the Obvious for Nerds (Score:2, Interesting)
Did it occur to anyone that maybe, just maybe, a software project of this magnitude just takes this long to complete?
Who's to say it should take less time? The management schedule? Isn't that wrong by definition?
Re:Sleep vs Hibernate (Score:3, Interesting)
Re:Welcome to inevitability (Score:5, Interesting)
It's changed business models a few times. It started out as a playing card company. If you want to discuss a successful long-lived organization, look at the Catholic church. It's been around for two thousand years, it's got just a few layers of management, and at the top 183 cardinals report to the Pope.
Re:Huh? (Score:5, Interesting)
Well, with a desktop you can suspend to disk and then come back rather quickly, with the power off in between. This way you get the power savings, but you also get the fast "boot" time.
But let's look at me. I had a Dell laptop at school. I'd use it at home. Turn it off. Take it to school. Turn it on for class. Use it. Turn it off. Take it to next class/home and repeat. Suspend was very iffy (and didn't help much in the battery life department).
Then I got a Powerbook G4 (which I still use today). Run it at home. Close the lid. Take it to school. Open the lid. IT WAS READY. Within 3 seconds I could start working. When I'm done? No "Start->This->that" to be sure it worked. Just close the lid. I know some PCs worked that way, mine never did (reliably) that I remember. Next class/home? Open the lid. If it got low on power, I'd plug it in. My little laptop has had up to 3 months of uptime (mostly due to major security updates that require restarts). I NEVER need to turn it off. The last time I did was when I was going on an airplane (didn't know if they'd like it suspended during takeoff/landing). It boots relatively fast, but nothing compared to waking up and going to sleep.
If you're a desktop user, I understand your comment. But as a laptop user who has had the pleasure of a Mac, a fast reliable suspend is a HUGE time saver.
Now I'll note that some other people at my school had newer laptops that could suspend/resume just fine. But they took much longer. Some of them approached boot time length, some could do it in 20-30 seconds. No PC there matched my Mac (note: I never asked the few Linux users if they had it working on their laptops). I could suspend/resume my Mac 3 times with ease in the time it took the fastest XP users (and I'll ignore the "Click here to sign on" screen most of them didn't disable).
Re:Huh? (Score:2, Interesting)
This is why Sleep and Hibernate are such big items. Personally, I hate sleep mode, because hibernate completely turns the machine off and still generally brings it back in about 10-15 seconds.
However, the big issue is boot-up time from a non-hibernated system. Sure, on a fresh install of Windows you might see everything up and ready in 20 seconds, which should be fine for most, but once you've used the machine for almost a year and have all that extra crap running on your PC, that time can sometimes stretch to over 2 minutes while you wait for all the services to start (on an average laptop).
Obviously a couple of minutes isn't bad for the patient type who can turn the PC on, walk off and do something else, then come back later... but the average user isn't patient and wants the machine ready a minute before they turned it on.
So what is there to do about it? I say: work on hibernation, make it the default and almost the only option, and make it fast. Hard drives are more than big enough to store the extra RAM, fast enough (especially with flash hybrids), and you can still unplug the PC without a fuss. If there's one other option that should be included, it should be an almost-idle mode. This way, instant messengers and downloaders can still work away while you're away, but since these only need 25% or so of the PC's processor speed, it should only require 25% of the power it usually draws. Have it slow down the hard drive speeds as well and it should be set!
...and OS/2 became Microsoft (Score:5, Interesting)
Re:Sleep vs Hibernate (Score:3, Interesting)
I stubbornly refuse to shut down using any other manner than the one I find most convenient: Hibernate.
It'll work fine for a while. Long enough for comfort to begin to set in. But there's always that little increase in my pulse-rate when I drop my laptop into the docking station on my desk and hit the power button. The Resuming Windows bar moves across the screen. Fingers are crossed, and I turn to face Mecca whilst gripping a rabbit's paw for good luck. The screen goes black. Will my desktop appear? The wind's northerly, so the chances are good. Woohoo! It's worked! I've dodged the bullet this time...
However, every now and then... not often enough for me to abandon hibernation, but just often enough to keep things interesting... The machine will sit with the Resuming Windows bar full, or at the black screen after the bar... and go no further. I'll go get a coffee and sometimes it'll go through to the desktop. But then there's the times when it'll just be stuck there. Hold the power button, turn it back on, tell it not to delete restoration data and try again... No joy? Shut down again. Pull the USB connections and try again. Fails? Pull the ethernet cable and try again. No luck? Try plugging things into different USB ports...
Eventually, it'll work. But sometimes this feature is just plain borked. Completely unable to diagnose exactly what's causing it. Sometimes the saved session will have no apps open - just the bare desktop - and it'll still fail to resume. Totally random as far as I can see, which suggests it's something deep down in the crapitude of Windows' internals that's locking... something freaky going on with device initialisation I suppose.
Of course, being a Windows dev who's frequently eye-deep in XP's guts, I look at these problems as would a father whose wayward son just won't get a clue. It's just how it is. But... from an end-user point of view, if you're going to have a suspend and resume feature (be it sleep, hibernate, etc.) it must work right 99.9999999% of the time. It simply must -- it's a critical time for the user's data, and the feature must behave as described. Either that, or the description of the feature should carry a caveat right there in the UI that activates it.
Re:Welcome to inevitability (Score:5, Interesting)
Conversely, when you are far enough past your competition, you have to decide where you want to go. Microsoft's business vision looks backwards (defensive) and sidewards (leveraging its unique position in desktop os and office software to gain entry into new markets and new revenue streams). They don't seem to be looking where they are going, because they're already where they want to be.
Re:The modern DVCSs would all do better (Score:5, Interesting)
The branching/merging etc. in the tool set (which, btw, we didn't invent; we source-licensed it from someone else and have been continually improving it) are actually quite good.
I don't know for a fact that the systems you mention aren't "up to the job", but how many multi-TB BitKeeper repositories are there? How many concurrent developers do any of these support? How many branches? How often are RI/FI done? How often do developers sync? What is the churn rate?
I think you also don't understand the problem. The SCCS can RI and FI (reverse integrate and forward integrate, respectively; those are the terms we use for moving changes from a descendant branch upstream, or moving fixes in a parent branch downstream) quickly and efficiently, but there are reasons not to. The '99 USENIX paper on the MS internal SCCS talks about some of these issues. For instance: what good is there in propagating a fix to every sub-tree or branch in a matter of minutes when it subtly breaks 80% of them?
The issue with lots of branching isn't the SCCS. It is the gating that you say "should be possible". Not only is it possible, it's standard procedure. And as your code gets closer to the root of the tree, the quality gates get harder to pass through. The latency involved in turning the crank on a regression test in Windows is very high; if you got it wrong, the latency of a build is high, and so on.
So it's not the underlying SCCS, it's the processes built on top of it. Everyone hates process when it slows them down, and everyone wants more process when someone else breaks them. "We should put a process in place to prevent that guy from breaking me, but, uh, I should be exempt."
As an aside, there are "fast track" branches/processes that let critical changes move through the tree very quickly: on the order of a day or two from a developer's workstation to something that shows up in the next main-line build that an admin assistant could install.
When I work with our repository, which is on the order of 10GB and a few hundred thousand files, a new branch create takes a few minutes. Pulling down the repository takes hours. Our churn rate is such that, with a handful of developers, ~5 days' worth of changes can take 30 minutes to sync down.
When I RI or FI, it happens only in my client view. This gives me a chance to do merge resolution, and then to build and run our regression tests before "infecting" the target branch with potentially bad code. If building takes on the order of hours (not minutes), you've got latency of hours above the actual RI/FI time. If running tests takes hours (not minutes), you've got more latency. If after a build + test cycle, you see an integration problem, now you've blown a day before you've even found the problem.
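The latency stacking described above is easy to put numbers on. A minimal sketch, with hour figures chosen only to match the "hours, not minutes" scale of the parent post (they're assumptions, not real Windows build times):

```python
# Rough sketch of why RI/FI turnaround is dominated by build/test
# latency rather than the SCCS merge itself. All hour values are
# illustrative assumptions.

def integration_turnaround(merge_h, build_h, test_h, failed_attempts):
    """Each failed attempt costs a full merge + build + test cycle
    before the integration problem is even found; the final attempt
    succeeds, so total cost is (failures + 1) full cycles."""
    one_cycle = merge_h + build_h + test_h
    return one_cycle * (failed_attempts + 1)

# One clean pass: 0.5h merge, 4h build, 6h tests.
print(integration_turnaround(0.5, 4, 6, failed_attempts=0))  # 10.5

# A single integration failure doubles it to over a day of latency,
# the "you've blown a day" case described above.
print(integration_turnaround(0.5, 4, 6, failed_attempts=1))  # 21.0
```

Even with a near-instant merge step, one failed build+test cycle pushes the turnaround past a working day, which is the poster's point: the process around the tool dominates, not the tool.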
I don't mean to say that there aren't problems; I'm just pointing out that, like most process problems, this is death by 1000 cuts. The SCCS isn't a key limitation, even for the Windows project (at least, not to my knowledge).
What you read was that the SCCS sucks. What I'm hoping to illustrate is that the process is unwieldy at times, not due to any particular technology limitation.
Re:Standard geek viewpoint == standard geek proble (Score:3, Interesting)
To appease this type of geek wannabe, MS makes all 7 options available via the shut down menu. However, if the "power" and "lock" icons do what they appear to do, then what's the beef? Does the fact that you *can* click the little arrow to access 5 more options cause convulsions in the techno-illiterate crowd? I have more of an issue with the "on/off" icon if the point is to make things easy for non-geeks, since many have no clue what that means.
Vista = ME 2 (Score:1, Interesting)
Re:Welcome to inevitability (Score:4, Interesting)
http://arstechnica.com/news.ars/post/20061019-802
It goes DELL, HP, GATEWAY, then APPLE.
People tend to buy into the whole branding thing, but they don't split as cleanly into Mac or PC users. People are either a DELL user or GATEWAY or HP or APPLE or IBM or Toshiba, etc. Apple has always been the leader in the creative world. Technology today is allowing even average people to become more creative. With more average people thinking they're creative, this will drive people to buy the 'creative platform of choice': a Mac.
It would seem a few years ago I was the only Mac user in my group of friends. It now seems every single one of my friends has either a Mac in ADDITION to their PC or has exchanged their PC for a Mac. These are interesting times. I only HATE Microsoft because I used to lead a life of tech support for my job and friends and family. Friends and family always used to come to me to help them with their myriad of problems. Every incompetent Windows user has a somewhat savvy techie behind them formatting their drive, installing Windows, cleaning up viruses, installing programs, fixing things, etc. I got sick of being that person. I tell people now to buy Macs. They buy a Mac and generally just use their computer to get things done. No more fuss.
[rant]
If microsoft can ever prove to me that their applications can do what they promise then I will jump on the microsoft bandwagon. Prove to me that updates will no longer crash my machine, prove to me that re-installing my operating system (which seems to occur frequently with microsoft) isn't going to take 2 hours of loading and 4 more hours of installing fixes -- patches -- updates -- combined with 35 reboots. It's the reboots that are so dang painful. To click on a patch and watch all the other patches you just clicked all go 'grey' and have a dialog box pop up that says, "Sorry, this patch has to be installed individually." BUT EVERY PATCH has to be installed individually. What the hell? Prove to me that your operating system can run for 2 years without having to be reinstalled for some random reason to get the speed of the machine back to what it used to be.
[sigh] . . . [/sigh]
My beef with Microsoft is real and valid. I have now been running a Mac exclusively for around 4 years. My latest Mac is about 1 or 2 years old. I got it from the Apple store pre-loaded with OS X 10.4. I have yet to re-install it. It has run perfectly, just as expected, this whole time. Sure, a Mac has its quirks, but if you're sick of Microsoft, the Apple quirks are much fewer and farther between than dealing with Microsoft's.
[/rant]
Re:Compare and contrast. (Score:5, Interesting)
There is a master "train" for a release; projects that don't change are "forwarded" to that train, meaning no source changes are required. When a project needs to be submitted for a change for the new release, a new "view" is created for its specific changes. Every few days, a build is produced, sometimes using previously compiled bits from the old "train", sometimes its a full world build (which can take several days) but otherwise building all the latest submissions.
Then there's a fairly labor-intensive "integration" phase where the built bits are all put on a box and booted. If a "quicklook" QA process shows that the build is hoarked, the integrator goes and pesters the submitters of the latest project that was submitted and gets them to fix it. (Some percentage of the time, the new code has exposed a bug elsewhere; regardless, the project that is the proximal cause of the failure is rolled back to the previous revision, in anticipation that all the projects that need to rev will be resubmitted at once.)
The whole thing is set up through symlinks via NFS, so if you want to see the latest version of any piece of code in the system (modulo those projects that are "locked down" for security issues) you can just take your release name, append the build number, and you've got the source code, symbol'd binaries, and build log *for any release* at your fingertips.
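A tiny sketch of what that kind of predictable release-name-plus-build-number lookup might look like. The directory layout, release name, and build number below are all made up for illustration; the post doesn't describe the actual paths.

```python
# Hypothetical sketch of a "release name + build number" artifact
# lookup, in the spirit of the symlinked NFS layout described above.
# The path scheme and example names are assumptions, not Apple's.

def artifact_paths(release, build, root="/builds"):
    """Given a release train name and a build number, return the
    predictable locations of source, binaries, and build log."""
    base = f"{root}/{release}/{release}{build}"
    return {
        "source": f"{base}/src",
        "binaries": f"{base}/bin",       # symbol'd binaries
        "log": f"{base}/build.log",
    }

paths = artifact_paths("Tiger", "8A428")
print(paths["source"])  # /builds/Tiger/Tiger8A428/src
```

The point of such a scheme is that no database or query service is needed: the release name and build number alone are enough to find everything for any build.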
When a new build comes out, you just do a clean install. It takes about two hours on the internal network, so typically you pull the disk image and slam it onto a FireWire drive (usually you can bum a disk with the image already grabbed from a teammate) and do a full install in 15 minutes. I can't imagine having to spend a day (as some other poster mentioned) setting up a machine...
Most projects have 3 or 4 contributors. In many cases, an entire framework is the responsibility of a single person (and he or she may actually own several small frameworks). Lots of small projects produce cleaner interfaces that lead to fewer dependencies. (Of course there are dependencies, and circular ones, but these are kept to a minimum.) Projects are encouraged to use public API from other projects, rather than SPI or other project internals. If there's something useful enough for some other project to use, it's first made into SPI for internal consumption, with the goal that developers will eventually be able to use it through a public API.
Most groups don't have dedicated QA by the way - the engineers are responsible for their code, and everyone is generally just very smart about what they're doing.
As to this start menu problem: the entire UI team is about 5 individuals, plus Steve Jobs and Scott Forstall, and they're likely to say "That's fucking stupid, just do this" and boom(tm), the decision has been made, the product ships, and life goes on.
Re:Mountain != Molehill (Score:4, Interesting)
You have to click three more times to find the true shut down or restart, and if you forget, you've got to wait around 90 seconds for the machine to hibernate and resume before you can actually shut down or restart properly.
Don't get me started about some of the other UI choices made in just the start menu. The limited programs scrolling area, for example, takes a nasty interface and makes it utterly unusable for someone who has more than MS Office loaded.
Hollow eye candy that makes the machine run like a slug; to add insult to injury, it's eye candy with horrid usability that costs me upwards of 40% of my processing power and frame rate compared to XP SP2.
Re:Linux development model? (Score:5, Interesting)
But...
There is never an ROI on doing code cleanup and making it easier to maintain from a manager / new development programmer's perspective.
As a maintenance programmer, though... I see faster, more stable, easier-to-maintain code out of even the little things I manage to sneak in. A solid code cleaning can cut weeks or months off of other projects on the same code base. From everything we've heard, the Windows source is a mess.
What they probably need to do is spend 6 months on an architectural code cleanup. There would be no immediate ROI, but every project for the rest of time would benefit, so theoretically their ROI is infinite.
As a maintenance programmer, I've frequently taken multiple pages of code out of programs without changing their functionality. In a large number of cases products are shipped by the development staff with dead code, goofy code, very inefficient code, redundant code, etc.
Re:Linux development model? (Score:1, Interesting)
Unit Testing (Score:3, Interesting)
Of course, the decision to keep ugly legacy code itself (rather than just the API) and not re-write isn't always the correct one either. The judgement of what is "best" is tough for managers and coders. Though I've only been listening to the "pragmatic" arguments for about a year and a half, the best thing I've found to answer this question is unit testing. And I don't particularly like writing unit tests.
If there are unit tests that have already been written, I can see just what sort of implementation problems happened in the past. When I want to re-write code, I'm usually thinking in the stratosphere about how the new approach will make everything better, but looking over unit tests written by other developers often brings me back down to earth and I see that my perfect solution may wind up retreading similar problems in an unfamiliar way. That's even more important when the customer sees an old problem re-surface in new code: they've already been down this road and they'll be out for blood that we're backpedaling and charging them for regression rather than development.
Since unit tests are a new practice at my work, they haven't always been written for the legacy code needed to make this judgement. In that case, I find that forcing myself to sit down and write some unit tests is a good thing. Though writing them ranks alongside my desire to floss, I have to admit that it is a good practice. It scratches my itch to actually dig into the details and write code. After I've really looked at the failure possibilities, it helps me make a much better decision on whether to rewrite. And whether or not we choose to rewrite now, the tests remain useful either way in the future.
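A minimal example of the kind of test being described: pin down the legacy code's current observable behavior, quirks included, before deciding on a rewrite, so a "perfect" new version can't silently reintroduce problems the customer has already seen fixed. The `legacy_parse_price` function here is a made-up stand-in, not real product code.

```python
# Characterization tests for a pretend legacy function. The goal is
# not to bless the behavior as "correct" but to record exactly what
# callers currently depend on before any rewrite.

import unittest

def legacy_parse_price(text):
    """Pretend legacy function: parses a price like "$1,234.50"
    into integer cents. Quirk: an empty string returns 0 instead
    of raising, and some callers rely on that."""
    if not text:
        return 0
    cleaned = text.replace("$", "").replace(",", "")
    return round(float(cleaned) * 100)

class TestLegacyParsePrice(unittest.TestCase):
    def test_typical_input(self):
        self.assertEqual(legacy_parse_price("$1,234.50"), 123450)

    def test_no_currency_symbol(self):
        self.assertEqual(legacy_parse_price("7.25"), 725)

    def test_empty_string_quirk(self):
        # Documented quirk: callers expect 0, not an exception.
        # A rewrite that raises here would be a regression.
        self.assertEqual(legacy_parse_price(""), 0)
```

Run with `python -m unittest` against the file. The empty-string test is the interesting one: it turns an undocumented quirk into an explicit contract the rewrite either honors or consciously breaks.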
I am curious whether the testing practices for major products like Vista and OS X are standardized and used. I know Microsoft has a huge testing infrastructure, but I wonder if the delays in Vista have been due to too much influence from the testers, too little, or no net effect at all. I was under the impression that Apple's testing was much better, but some major, obvious regressions lately make me think that perhaps Apple simply has a smaller "legacy" of custom code to support. Do big companies even have sound testing practices and require their use?
As a final note though, I prefer to write unit tests on other people's code since mine, of course, never needs them :-)
Re:Why RTFA? (Score:3, Interesting)
The main problem is in this line:
Anyone with any amount of organizational management experience will tell you that in order for things to happen efficiently, there has to be someone with final say, for better or worse. Decisions cannot efficiently be made by committees, much less the democratic-sounding process that the blog outlines. Someone somewhere has to put his foot down and say, "yes, these are the ideas that have been put forth, these are the arguments for and against those ideas, and this is what we're going to do." It doesn't have to be management. It could be one of the developers. It could be the GUI designers. It could be a tester. But it has to be one person. And the decision has to stick. If upper management doesn't like the resulting conclusion, too bad; they should've picked someone else. It's only when the early testers start to complain that it's worth a second look or a redo.
Nearly as important to note is that there are 47 people having a say on this one thing. Why? There should be at most five people working on the design and implementation of any particular feature. For this one, it should be four: one usability person, one GUI designer, one kernel developer, and one developer from the start menu team. For features that span more of the OS, several lead developers and maybe a manager to take care of timetables and the like. But always designate one person to make the final decision!
Yes, from a development perspective, organizing the whole repository as a tree has its inefficiencies. But the crux of this particular problem is an organizational one. Having changes propagate quickly isn't going to do any good when the feature hasn't been implemented because the design isn't cemented, or when the feature's implementation changes every few days. In fact, having changes propagate slowly would be better if features tend to get constantly redesigned.
Re:Microsoft: Shadow Stalker (Score:3, Interesting)
Today, Microsoft is similarly loaded, and Windows is under fundamental attack from POSIX, both with Mac OS X on the desktop and Linux on the server. Microsoft similarly has been doodling around ineffectually with a series of failures: the Xbox barely outsold the GameCube, and the Xbox 360 couldn't even outsell the 5-year-old PlayStation 2 this last year [roughlydrafted.com] (6 million vs 11 million). Everything else, from MSN TV to WinCE PDAs (a dead market with no growth) and smartphones (Microsoft has 5% of that market with no hope of gaining against Symbian and Linux) to Tablet PCs and Origami, can't be sold at any price. [roughlydrafted.com]
Microsoft is on deathwatch, and you're complaining that Apple is making record profits on the iPod, a product Microsoft's PlaysForSure couldn't touch in the last five years? Apple sold 60 million iPods, and that's bad? It's all a marketing ruse? Why can't Microsoft spin marketing? Why can't they deliver a consumer electronics product anyone wants? The Zune is a huge joke. $36 Billion should buy something, right?
Is Microsoft paying you to shill, or are you supporting a failed dinosaur--working to poke the world in the eye--on your own time, just for fun?
Why Microsoft Can't Compete With iTunes [roughlydrafted.com]
Apple and Microsoft in Platform Crisis: The Tentacles of Legacy [roughlydrafted.com]
Re:Mountain != Molehill (Score:3, Interesting)
The funny thing is that the power button does not turn off the machine. It actually puts it to sleep. A symbol known worldwide for turning off computers gets used to put machines to sleep.
Re:Huh? (Score:2, Interesting)
I used to hibernate my desktop machine at work because the IT department forgot to disable it. I thought it was great: I had the benefits of turning off the computer as well as saving the state of my desktop. Add to that the fact that the boot time was much faster than a cold boot, and I thought it was a huge benefit.
Later they disabled hibernation, and now I can only shut down or lock the machine. Well, so much for saving electricity. Now I leave it on most of the time. They probably have good reasons (startup scripts and such), but if there were functionality in hibernate to meet their needs, I think hibernation could easily save the world lots of money, especially since these Windows boxes seem to gradually start up slower for some reason. It takes me a good 5 minutes to start up at work, and I can't do a single thing about it except go through the hassle of asking for a new machine. At home, of course, it's a totally different story.